Fidelity-based Deep Adiabatic Scheduling
1 INTRODUCTION

Many of the algorithms developed for quantum computing employ the quantum circuit model, in which a quantum state involving multiple qubits undergoes a series of invertible transformations. However, an alternative model, called Adiabatic Quantum Computation (AQC) (Farhi et al., 2000; McGeoch, 2014), is used in some of the leading quantum computers, such as those manufactured by D-Wave Systems (Boixo et al., 2014). AQC algorithms can achieve quantum speedups over classical algorithms (Albash & Lidar, 2018) and are polynomially equivalent to the quantum circuit model (Aharonov et al., 2008).

In AQC, given a computational problem $Q$, e.g., a specific instance of a 3SAT problem, a physical system is slowly evolved until a specific quantum state that represents a proper solution is reached. Each AQC run involves three components:

1. An initial Hamiltonian $H_b$, chosen such that its ground state (in matrix terms, the minimal eigenvector of $H_b$) is easy to prepare and there is a large spectral gap. This is typically independent of the specific instance of $Q$.

2. A final Hamiltonian $H_p$, designed such that its ground state corresponds to the solution of the problem instance $Q$.

3. An adiabatic schedule, which is a strictly increasing function $s(t)$ that maps a point in time $0 \le t \le t_f$, where $t_f$ is the total computation time, onto the entire interval $[0, 1]$ (i.e., $s(0) = 0$, $s(t_f) = 1$, and $s(t_1) < s(t_2)$ iff $t_1 < t_2$).

These three components define a single time-dependent Hamiltonian $H(t)$, which can be seen as an algorithm for solving $Q$:

$$H(t) = (1 - s(t)) \cdot H_b + s(t) \cdot H_p \quad (1)$$

At the end of the adiabatic computation, the quantum state is measured. The square of the overlap between the quantum state and the ground state of the final Hamiltonian is the fidelity, which represents the probability of success in finding the correct solution. An AQC algorithm that is evolved over an insufficient time period (a schedule that is too fast) will have low fidelity. Finding the optimal schedule, i.e., the one that leads to high fidelity while keeping the time complexity of the algorithm minimal, is therefore of great value. However, for most problems, an analytical solution for the optimal schedule does not exist (Albash & Lidar, 2018).

Attempts were made to optimize specific aspects of the adiabatic schedule by using iterative methods (Zeng et al., 2015) or by direct derivations (Susa et al., 2018). Performance was evaluated by examining characteristics of the resulting dynamics (e.g., the minimum energy gap), and no improvement was demonstrated on the full quantum computation. Previous attempts to employ AI for the task of finding the optimal schedule have relied on reinforcement learning (Lin et al., 2020; Chen et al., 2020). While these methods were able to find schedules that are better than the linear path, they are limited either to learning one path for a family of problems (without considering the specific instance) or to rerunning the AQC of a specific instance $Q$ multiple times in order to optimize the schedule. In our work, supervised learning is employed in order to generalize from a training set of problems and their optimal paths to new problem instances. Training is done offline, and the schedule our neural model outputs is a function of the specific problem instance.
The problem instance is encoded in our model either based on the final Hamiltonian $H_p$ or directly based on the problem. The suggested neural models are tested on several different problem types: Grover search problems, 3SAT and MAX-CUT problems, and randomized QUBO problems. We show that the evolution schedules suggested by our model greatly outperform the naive linear evolution schedule, as well as the schedules provided by the recent RL methods, and allow for much shorter total evolution times.

2 BACKGROUND

The goal of the scheduling task is to find a schedule $s(t)$ that maximizes the probability of getting the correct answer for instance $Q$, using $H_b$ and $H_p$ over an adiabatic quantum computer. The solution to $Q$ is encoded as the lowest-energy eigenstate of $H_p$. In order to reach the solution state with high probability, the system must be evolved "sufficiently slowly". The adiabatic theorem (Roland & Cerf, 2002; Albash & Lidar, 2018; Rezakhani et al., 2009) is used to analyze how fast this evolution can be. It states that the probability of reaching the desired state at the end of the adiabatic computation is $1 - \varepsilon^2$ for $\varepsilon \ll 1$ if

$$\frac{\left| \langle E_1(t) | \tfrac{d}{dt} H(t) | E_0(t) \rangle \right|}{g^2(t)} \le \varepsilon \quad (2)$$

where the Dirac notation (Tumulka, 2009) is used¹, $E_0(t)$ ($E_1(t)$) is the ground state (first excited state) of the time-dependent Hamiltonian $H(t)$, i.e., the eigenstate that corresponds to the lowest (2nd lowest) eigenvalue, and $g(t)$ is the time-dependent instantaneous spectral gap between the smallest and second smallest eigenvalues of $H(t)$.

Let $t_f$ be the total calculation time, and let $s(t)$ be an evolution schedule such that $s(0) = 0$, $s(t_f) = 1$. Applying the adiabatic condition to $s(t)$, we get

$$\frac{\left| \langle E_1(s(t)) | \tfrac{ds}{dt} \tfrac{d}{ds} H(s(t)) | E_0(s(t)) \rangle \right|}{g^2(s(t))} \le \varepsilon \;\Rightarrow\; \frac{ds}{dt} \le \frac{\varepsilon\, g^2(s)}{\left| \langle E_1(s) | \tfrac{d}{ds} H(s) | E_0(s) \rangle \right|} \quad (3)$$

We can solve for $t(s)$ by integration to get

$$t(s) = \frac{1}{\varepsilon} \int_0^s \frac{\left| \langle E_1(s') | \tfrac{d}{ds'} H(s') | E_0(s') \rangle \right|}{g^2(s')}\, ds' \quad (4)$$

and the total required evolution time is

$$t_f = t(s{=}1) = \frac{1}{\varepsilon} \int_0^1 \frac{\left| \langle E_1(s) | \tfrac{d}{ds} H(s) | E_0(s) \rangle \right|}{g^2(s)}\, ds \quad (5)$$

¹See appendix A for the conventional matrix notation.

We note that finding a numerical solution for eq. 4 requires calculating the full eigenvalue decomposition of $H(s)$.

2.1 MOST-RELATED WORK

Two recent contributions use deep learning in order to obtain, for a given $t_f$, a schedule that outperforms the linear schedule. Lin et al. (2020) suggest using deep reinforcement learning in order to find an optimal schedule for each specific class of problems (e.g., 3SAT problems of a certain size). In contrast, we study the problem of finding schedules for generic problem instances. They train and benchmark their performance by simulating an adiabatic quantum computer and scoring the computation results for randomly chosen problem instances. Their results are generally better than the naive linear schedule, and the solution produced by their neural network is somewhat transferable to larger problem sizes. Chen et al. (2020) also use RL to construct, given a $t_f$, a schedule for 3SAT problems. The most successful technique suggested is Monte Carlo Tree Search (MCTS, Silver et al. (2016)), which produces results that significantly outperform the linear schedule. This technique requires running the adiabatic evolution process many times for each problem in order to find a successful schedule.
An approach inspired by AlphaZero (Silver et al., 2018) is used to adapt the generic MCTS solution to a specific problem class, while requiring only a few additional rounds of the adiabatic process for each new instance. In our method, we do not require any run given a new problem instance.

3 METHOD

We consider two types of deep neural models. The first model is designed to take the problem Hamiltonian $H_p$ as input. For an $n$-qubit problem, the problem Hamiltonian is generally of size $2^n \times 2^n$. In this work, we consider problem Hamiltonians that are diagonal and can be represented by a vector of size $2^n$. This scenario covers both the Grover search problem and the 3SAT problem we present in Sec. 4. The second model is designed to take a quadratic unconstrained binary optimization (QUBO) problem as input. The QUBO problem has the following form:

$$\bar{x} = \operatorname{argmin}_x (x^T Q x), \quad (6)$$

where $x$ is a vector of binary variables and $Q \in \mathbb{R}^{n \times n}$ defines the specific QUBO instance. The QUBO problem is NP-complete, and many types of common problems can be reduced to QUBO (Glover et al., 2018). The QUBO formulation is of special interest in the context of adiabatic quantum computing, since it allows a relatively easy mapping to real quantum annealing devices that do not possess full qubit connectivity (Cruz-Santos et al., 2019). A QUBO problem $Q$ can be converted to the Hamiltonian form in the following fashion:

$$H_p = \sum_{i=1}^n Q_{ii} \left( \frac{I + \sigma_z^i}{2} \right) + \sum_{i \neq j} Q_{ij} \left( \frac{I + \sigma_z^i}{2} \right) \left( \frac{I + \sigma_z^j}{2} \right), \quad (7)$$

where $\sigma_z^i$ is the Pauli matrix $\sigma_z$ operating only on qubit $i$ (Liboff, 2003). The resulting $H_p$ is of size $2^n \times 2^n$ and is diagonal.

The prediction target of our models is the desired normalized schedule $\hat{s}$, which is defined over the range $[0, 1]$ by $\hat{s}(t/t_f) = s(t)$. For the purpose of estimation, it is sampled at 100 points in the interval $[0, 1]$. The representation of this schedule is given as a vector $d \in [0, 1]^{99}$, which captures the temporal derivative of the schedule. In other words, $d$ is trained to hold the differences between consecutive points on the path, i.e., element $i$ is given by $d_i = \hat{s}((i+1)/100) - \hat{s}(i/100)$. Note that the sum of $d$ is one.

3.1 UNIVERSALITY OF THE OPTIMAL SCHEDULE

The reason we work with the normalized schedule is that the optimal evolution schedule does not depend on the choice of $t_f$. As shown next, for every time budget $t_f$, the same normalized schedule provides the highest fidelity (neglecting decoherence). Let $s_1(t) : [0, t_f] \to [0, 1]$ be a suggested evolution schedule that outperforms a different suggested schedule $s_2(t)$ for a specific $t_f = \tau_1$, i.e., it achieves a greater fidelity at the end of the schedule for a specific problem instance $Q$. Then, Thm. 1 shows that $s_1(t)$ outperforms $s_2(t)$ for every possible choice of $t_f$ for the same problem $Q$.

Theorem 1. Let $s_1(t)$ and $s_2(t)$ be two monotonically increasing, fully differentiable, bijective functions from $[0, t_f = \tau_1]$ to $[0, 1]$. Let $Q$ be an optimization problem, and assume that $s_1(t)$ achieves a greater fidelity than $s_2(t)$ at the end of a quantum adiabatic computation for $Q$ with total evolution time $t_f = \tau_1$. Then, for any other choice $t_f = \tau_2$, the scaled schedule $s_1(\frac{\tau_1}{\tau_2} t)$ will achieve a greater fidelity than $s_2(\frac{\tau_1}{\tau_2} t)$ for an adiabatic computation over the same problem $Q$ with total evolution time $t_f = \tau_2$.

The proof can be found in appendix B.
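Equation (4) above admits a direct numerical treatment. The following is a minimal NumPy sketch of that computation for a small, dense Hamiltonian pair; the function and variable names are ours, and the trapezoidal integration and 100-point sampling follow the discretization described in Sec. 3:

```python
import numpy as np

def schedule_from_adiabatic_condition(Hb, Hp, eps=0.1, n_grid=200):
    """Numerically evaluate eq. (4) and return the normalized schedule
    and its 99-dimensional difference vector d (cf. Sec. 3).
    Requires a full eigendecomposition of H(s) at every grid point,
    which is exactly what makes this computation expensive."""
    s_grid = np.linspace(0.0, 1.0, n_grid)
    dHds = Hp - Hb  # d/ds [(1 - s) Hb + s Hp] = Hp - Hb
    integrand = np.empty(n_grid)
    for k, s in enumerate(s_grid):
        H = (1.0 - s) * Hb + s * Hp
        vals, vecs = np.linalg.eigh(H)      # full spectrum of H(s)
        E0, E1 = vecs[:, 0], vecs[:, 1]     # ground / first excited state
        gap = vals[1] - vals[0]             # g(s)
        integrand[k] = abs(E1.conj() @ dHds @ E0) / gap**2
    # cumulative trapezoidal integration gives t(s) up to the 1/eps factor
    t_of_s = np.concatenate(
        [[0.0],
         np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(s_grid))]) / eps
    # invert t(s) to s(t), normalize to [0, 1], and sample at 100 points
    t_norm = t_of_s / t_of_s[-1]
    s_hat = np.interp(np.linspace(0.0, 1.0, 100), t_norm, s_grid)
    d = np.diff(s_hat)                      # the 99-dim derivative vector
    return s_hat, d
```

This is the ground-truth computation that a learned model is meant to amortize: its cost grows with the $2^n$-dimensional eigendecomposition, whereas a trained network predicts $d$ in a single forward pass.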
The paper proposes to learn a parametric form of the optimal quantum annealing schedule. The authors construct two versions of neural network parameterizations mapping problem data onto an optimal schedule. They train these networks on artificially generated sets of problems of different sizes and test the final models on the Grover search problem as well as 3SAT. Experiments demonstrate improved performance in comparison to existing approaches.
SP:9156d551adff4ed16ba1be79014188caefc901c7
Bayesian Context Aggregation for Neural Processes
1 INTRODUCTION

Estimating statistical relationships between physical quantities from measured data is of central importance in all branches of science and engineering, and devising powerful regression models for this purpose forms a major field of study in statistics and machine learning. When judging representative power, neural networks (NNs) are arguably the most prominent member of the regression toolbox. NNs cope well with large amounts of training data and are computationally efficient at test time. On the downside, standard NN variants do not provide uncertainty estimates over their predictions and tend to overfit on small datasets. Gaussian processes (GPs) may be viewed as complementary to NNs, as they provide reliable uncertainty estimates, but their cubic (quadratic) scaling with the number of context data points at training (test) time in their basic formulation hinders their application to tasks with large amounts of data or to high-dimensional problems.

Recently, a lot of interest in the scientific community has been drawn to combinations of aspects of NNs and GPs. Indeed, a prominent formulation of probabilistic regression is as a multi-task learning problem, formalized in terms of amortized inference in conditional latent variable (CLV) models, which results in NN-based architectures that learn a distribution over target functions. Notable variants are given by the Neural Process (NP) (Garnelo et al., 2018b) and the work of Gordon et al. (2019), which presents a unifying view on a range of related approaches in the language of CLV models. Inspired by this research, we study context aggregation, a central component of such models, and propose a new, fully Bayesian, aggregation mechanism for CLV-based probabilistic regression models.

∗Correspondence to: Michael.Volpp@de.bosch.com

To transform the information contained in the context data into a latent representation of the target function, current approaches typically employ a mean aggregator and feed the output of this aggregator into a NN to predict a distribution over global latent parameters of the function. Hence, aggregation and latent parameter inference have so far been treated as separate parts of the learning pipeline. Moreover, when using a mean aggregator, every context sample is assumed to carry the same amount of information. Yet, in practice, different input locations have different task ambiguity and, therefore, samples should be assigned different importance in the aggregation process. In contrast, our Bayesian aggregation mechanism treats context aggregation and latent parameter inference as one holistic mechanism, i.e., the aggregation directly yields the distribution over the latent parameters of the target function. Indeed, we formulate context aggregation as Bayesian inference of latent parameters using Gaussian conditioning in the latent space. Compared to existing methods, the resulting aggregator improves the handling of task ambiguity, as it can assign different variance levels to the context samples. This mechanism improves predictive performance, while remaining conceptually simple and introducing only negligible computational overhead. Moreover, our Bayesian aggregator can also be applied to deterministic model variants like the Conditional NP (CNP) (Garnelo et al., 2018a).
In summary, our contributions are (i) a novel Bayesian Aggregation (BA) mechanism for context aggregation in NP-based models for probabilistic regression, (ii) its application to existing CLV architectures as well as to deterministic variants like the CNP, and (iii) an exhaustive experimental evaluation, demonstrating BA's superiority over traditional mean aggregation.

2 RELATED WORK

Prominent approaches to probabilistic regression are Bayesian linear regression and its kernelized counterpart, the Gaussian process (GP) (Rasmussen and Williams, 2005). The formal correspondence of GPs with infinite-width Bayesian NNs (BNNs) was established in Neal (1996) and Williams (1996). A broad range of research aims to overcome the cubic scaling behaviour of GPs with the number of context points, e.g., through sparse GP approximations (Smola and Bartlett, 2001; Lawrence et al., 2002; Snelson and Ghahramani, 2005; Quiñonero-Candela and Rasmussen, 2005), by deep kernel learning (Wilson et al., 2016), by approximating the posterior distribution of BNNs (MacKay, 1992; Hinton and van Camp, 1993; Gal and Ghahramani, 2016; Louizos and Welling, 2017), or by adaptive Bayesian linear regression, i.e., by performing inference over the last layer of a NN, which introduces sparsity through linear combinations of finitely many learned basis functions (Lazaro-Gredilla and Figueiras-Vidal, 2010; Hinton and Salakhutdinov, 2008; Snoek et al., 2012; Calandra et al., 2016). A complementary approach, in a sense, aims to increase the data-efficiency of deep architectures by a fully Bayesian treatment of hierarchical latent variable models ("DeepGPs") (Damianou and Lawrence, 2013).

A parallel line of research studies probabilistic regression in the multi-task setting. Here, the goal is to formulate models which are data-efficient on an unseen target task by training them on data from a set of related source tasks. Bardenet et al. (2013), Yogatama and Mann (2014), and Golovin et al. (2017) study multi-task formulations of GP-based models. More general approaches of this kind employ the meta-learning framework (Schmidhuber, 1987; Thrun and Pratt, 1998; Vilalta and Drissi, 2005), where a model's training procedure is formulated in a way which incentivizes it to learn how to solve unseen tasks rapidly with only a few context examples ("learning to learn", "few-shot learning" (Fei-Fei et al., 2006; Lake et al., 2011)). A range of such methods trains a meta-learner to learn how to adjust the parameters of the learner's model (Bengio et al., 1991; Schmidhuber, 1992), an approach which has recently been applied to few-shot image classification (Ravi and Larochelle, 2017) and to learning data-efficient optimization algorithms (Hochreiter et al., 2001; Li and Malik, 2016; Andrychowicz et al., 2016; Chen et al., 2017; Perrone et al., 2018; Volpp et al., 2019). Other branches of meta-learning research aim to learn similarity metrics to determine the relevance of context samples for the target task (Koch et al., 2015; Vinyals et al., 2016; Snell et al., 2017; Sung et al., 2017), or explore the application of memory-augmented neural networks for meta-learning (Santoro et al., 2016). Finn et al. (2017) propose model-agnostic meta-learning (MAML), a general framework for fast parameter adaptation in gradient-based learning methods.
A successful formulation of probabilistic regression as a few-shot learning problem in a multi-task setting is enabled by recent advances in the area of probabilistic meta-learning methods, which allow a quantitative treatment of the uncertainty arising due to task ambiguity, a feature particularly relevant for few-shot learning problems. One line of work specifically studies probabilistic extensions of MAML (Grant et al., 2018; Ravi and Larochelle, 2017; Rusu et al., 2018; Finn et al., 2018; Kim et al., 2018). Further important approaches are based on amortized inference in multi-task CLV models (Heskes, 2000; Bakker and Heskes, 2003; Kingma and Welling, 2013; Rezende et al., 2014; Sohn et al., 2015), which forms the basis of the Neural Statistician proposed by Edwards and Storkey (2017) and of the NP model family (Garnelo et al., 2018b; Kim et al., 2019; Louizos et al., 2019). Gordon et al. (2019) present a unifying view on many of the aforementioned probabilistic architectures. Building on the conditional NPs (CNPs) proposed by Garnelo et al. (2018a), a range of NP-based architectures, such as Garnelo et al. (2018b) and Kim et al. (2019), consider combinations of deterministic and CLV model architectures. Recently, Gordon et al. (2020) extended CNPs to include translation equivariance in the input space, yielding state-of-the-art predictive performance. In this paper, we also employ a formulation of probabilistic regression in terms of a multi-task CLV model. However, while in previous work the context aggregation mechanism (Zaheer et al., 2017; Wagstaff et al., 2019) was merely viewed as a necessity to consume context sets of variable size, we take inspiration from Becker et al. (2019) and emphasize the fundamental connection of latent parameter inference with context aggregation and, hence, base our model on a novel Bayesian aggregation mechanism.

3 PRELIMINARIES

We present the standard multi-task CLV model which forms the basis for our discussion, the traditional mean context aggregation (MA) and the variational inference (VI) likelihood approximation as employed by the NP model family (Garnelo et al., 2018a; Kim et al., 2019), as well as an alternative Monte Carlo (MC)-based approximation.

Problem Statement. We frame probabilistic regression as a multi-task learning problem. Let $F$ denote a family of functions $f_\ell : \mathbb{R}^{d_x} \to \mathbb{R}^{d_y}$ with some form of shared statistical structure. We assume to have available data sets $D_\ell \equiv \{(x_{\ell,i}, y_{\ell,i})\}_i$ of evaluations $y_{\ell,i} \equiv f_\ell(x_{\ell,i}) + \varepsilon$ from a subset of functions ("tasks") $\{f_\ell\}_{\ell=1}^L \subset F$ with additive Gaussian noise $\varepsilon \sim \mathcal{N}(0, \sigma_n^2)$. From this data, we aim to learn the posterior predictive distribution $p(y_\ell \mid x_\ell, D_\ell^c)$ over a (set of) $y_\ell$, given the corresponding (set of) inputs $x_\ell$ as well as a context set $D_\ell^c \subset D_\ell$.

The Multi-Task CLV Model. We formalize the multi-task learning problem in terms of a CLV model (Heskes, 2000; Gordon et al., 2019) as shown in Fig. 1. The model employs task-specific global latent variables $z_\ell \in \mathbb{R}^{d_z}$, as well as a task-independent latent variable $\theta$, capturing the statistical structure shared between tasks.
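To make the problem statement concrete, the following is a small NumPy sketch of such a multi-task setup; the sinusoid family is an illustrative stand-in for $F$ (not a benchmark from the paper), and the context/target split anticipates the training procedure described next:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task(n_points, sigma_n=0.1):
    """Draw one task f_l from a toy family F (random sinusoids here) and
    return noisy evaluations y = f_l(x) + eps with eps ~ N(0, sigma_n^2)."""
    amp = rng.uniform(0.5, 2.0)
    phase = rng.uniform(0.0, np.pi)
    x = rng.uniform(-2.0, 2.0, size=(n_points, 1))
    y = amp * np.sin(x + phase) + sigma_n * rng.normal(size=(n_points, 1))
    return x, y

def context_target_split(x, y, n_context):
    """Split a task's data D_l into a context set D_l^c and a target set D_l^t."""
    perm = rng.permutation(len(x))
    ctx, tgt = perm[:n_context], perm[n_context:]
    return (x[ctx], y[ctx]), (x[tgt], y[tgt])
```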
To learn $\theta$, we split the data into context sets $D_\ell^c \equiv \{(x_{\ell,n}^c, y_{\ell,n}^c)\}_{n=1}^{N_\ell}$ and target sets $D_\ell^t \equiv \{(x_{\ell,m}^t, y_{\ell,m}^t)\}_{m=1}^{M_\ell}$ and maximize the posterior predictive likelihood function

$$\prod_{\ell=1}^L p\left(y_{\ell,1:M_\ell}^t \mid x_{\ell,1:M_\ell}^t, D_\ell^c, \theta\right) = \prod_{\ell=1}^L \int p(z_\ell \mid D_\ell^c, \theta) \prod_{m=1}^{M_\ell} p\left(y_{\ell,m}^t \mid z_\ell, x_{\ell,m}^t, \theta\right) dz_\ell \quad (1)$$

w.r.t. $\theta$. In what follows, we omit task indices $\ell$ to avoid clutter.

Likelihood Approximation. Marginalizing over the task-specific latent variables $z$ is intractable for reasonably complex models, so one has to employ some form of approximation. The NP family of models (Garnelo et al., 2018b; Kim et al., 2019) uses an approximation of the form

$$\log p\left(y_{1:M}^t \mid x_{1:M}^t, D^c, \theta\right) \simeq \mathbb{E}_{q_\phi(z \mid D^c \cup D^t)}\left[\sum_{m=1}^M \log p\left(y_m^t \mid z, x_m^t, \theta\right) + \log \frac{q_\phi(z \mid D^c)}{q_\phi(z \mid D^c \cup D^t)}\right]. \quad (2)$$

Being derived using a variational approach, this approximation utilizes an approximate posterior distribution $q_\phi(z \mid D^c) \approx p(z \mid D^c, \theta)$. Note, however, that it does not constitute a proper evidence lower bound for the posterior predictive likelihood, since the intractable latent posterior $p(z \mid D^c, \theta)$ has been replaced by $q_\phi(z \mid D^c)$ in the numerator of the rightmost term (Le et al., 2018). An alternative approximation, employed for instance in Gordon et al. (2019), also replaces the intractable latent posterior distribution by an approximate distribution $q_\phi(z \mid D^c) \approx p(z \mid D^c, \theta)$ and uses a Monte Carlo (MC) approximation of the resulting integral based on $K$ latent samples, i.e.,

$$\log p\left(y_{1:M}^t \mid x_{1:M}^t, D^c, \theta\right) \approx -\log K + \log \sum_{k=1}^K \prod_{m=1}^M p\left(y_m^t \mid z_k, x_m^t, \theta\right), \quad z_k \sim q_\phi(z \mid D^c). \quad (3)$$

Note that both approaches employ approximations $q_\phi(z \mid D^c)$ of the latent posterior distribution $p(z \mid D^c, \theta)$ and, as indicated by the notation, amortize inference in the sense that one single set of parameters $\phi$ is shared between all context data points. This enables efficient inference at test time, as no per-data-point optimization loops are required. As is standard in the literature (Garnelo et al., 2018b; Kim et al., 2019), we represent $q_\phi(z \mid D^c)$ and $p(y_m^t \mid z, x_m^t, \theta)$ by NNs and refer to them as the encoder (enc, parameters $\phi$) and decoder (dec, parameters $\theta$) networks, respectively. These networks set the means and variances of factorized Gaussian distributions, i.e.,

$$q_\phi(z \mid D^c) = \mathcal{N}\left(z \mid \mu_z, \operatorname{diag}(\sigma_z^2)\right), \quad \mu_z = \operatorname{enc}_{\mu_z,\phi}(D^c), \quad \sigma_z^2 = \operatorname{enc}_{\sigma_z^2,\phi}(D^c), \quad (4)$$

$$p\left(y_m^t \mid z, x_m^t, \theta\right) = \mathcal{N}\left(y_m^t \mid \mu_y, \operatorname{diag}(\sigma_y^2)\right), \quad \mu_y = \operatorname{dec}_{\mu_y,\theta}(z, x_m^t), \quad \sigma_y^2 = \operatorname{dec}_{\sigma_y^2,\theta}(z, x_m^t). \quad (5)$$

Context Aggregation. The latent variable $z$ is global in the sense that it depends on the whole context set $D^c$. Therefore, some form of aggregation mechanism is required to enable the encoder to consume context sets $D^c$ of variable size. To represent a meaningful operation on sets, such an aggregation mechanism has to be invariant to permutations of the context data points. Zaheer et al. (2017) characterize possible aggregation mechanisms w.r.t. this permutation invariance condition, resulting in the structure of traditional aggregation mechanisms depicted in Fig. 2(a). Each context data tuple $(x_n^c, y_n^c)$ is first mapped onto a latent observation $r_n = \operatorname{enc}_{r,\phi}(x_n^c, y_n^c) \in \mathbb{R}^{d_r}$. Then, a permutation-invariant operation is applied to the set $\{r_n\}_{n=1}^N$ to obtain an aggregated latent observation $\bar{r}$. One prominent choice, employed for instance in Garnelo et al. (2018a), Kim et al.
(2019), and Gordon et al. (2019), is to take the mean, i.e.,

$$\bar{r} = \frac{1}{N} \sum_{n=1}^N r_n. \quad (6)$$

Subsequently, $\bar{r}$ is mapped onto the parameters $\mu_z$ and $\sigma_z^2$ of the approximate posterior distribution $q_\phi(z \mid D^c)$ using additional encoder networks, i.e., $\mu_z = \operatorname{enc}_{\mu_z,\phi}(\bar{r})$ and $\sigma_z^2 = \operatorname{enc}_{\sigma_z^2,\phi}(\bar{r})$. Note that three encoder networks are employed here: (i) $\operatorname{enc}_{r,\phi}$ to map from the context pairs to $r_n$, (ii) $\operatorname{enc}_{\mu_z,\phi}$ to compute $\mu_z$ from the aggregated mean $\bar{r}$, and (iii) $\operatorname{enc}_{\sigma_z^2,\phi}$ to compute the variance $\sigma_z^2$ from $\bar{r}$. In what follows, we refer to this aggregation mechanism as mean aggregation (MA) and to the networks $\operatorname{enc}_{\mu_z,\phi}$ and $\operatorname{enc}_{\sigma_z^2,\phi}$ collectively as "$\bar{r}$-to-$z$-networks".
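For reference, here is a minimal PyTorch sketch of the MA baseline just described, with the three encoder networks made explicit; the layer widths and the log-variance parameterization are our own choices, not the paper's:

```python
import torch
import torch.nn as nn

class MeanAggregationEncoder(nn.Module):
    """Traditional MA encoder: (x_n, y_n) -> r_n, mean over n -> r_bar (eq. 6),
    then separate r_bar-to-z networks for mu_z and sigma_z^2."""

    def __init__(self, dx, dy, dr, dz):
        super().__init__()
        self.enc_r = nn.Sequential(                 # enc_{r,phi}
            nn.Linear(dx + dy, dr), nn.ReLU(), nn.Linear(dr, dr))
        self.enc_mu = nn.Linear(dr, dz)             # enc_{mu_z,phi}
        self.enc_logvar = nn.Linear(dr, dz)         # enc_{sigma_z^2,phi}

    def forward(self, xc, yc):
        # xc: [N, dx], yc: [N, dy] -- one context set of N points
        r = self.enc_r(torch.cat([xc, yc], dim=-1))  # latent observations r_n
        r_bar = r.mean(dim=0)                        # permutation-invariant mean
        mu_z = self.enc_mu(r_bar)
        var_z = self.enc_logvar(r_bar).exp()         # positivity via exp
        return mu_z, var_z
```

Note that every $r_n$ enters the mean with identical weight, which is exactly the uniform-information assumption that the Bayesian aggregation proposed in this paper is designed to relax.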
The paper builds upon previous lines of research on the multi-task learning problem, such as conditional latent variable models, including the Neural Process. As shown by the extensive Related Work section, this seems to be an active research direction. This makes it difficult for me to judge originality and significance, but the paper is well-written and clear.
SP:13fb6d0e4b208c11e5d58df1afac2921c02be269
Multi-agent Deep FBSDE Representation For Large Scale Stochastic Differential Games
1 INTRODUCTION

Stochastic differential games represent a framework for investigating scenarios where multiple players make decisions while operating in a dynamic and stochastic environment. The theory of differential games dates back to the seminal work of Isaacs (1965) studying two-player zero-sum dynamic games, with a first stochastic extension appearing in Kushner & Chamberlain (1969). A key step in the study of games is obtaining the Nash equilibrium among players (Osborne & Rubinstein, 1994). A Nash equilibrium represents the solution of a non-cooperative game where two or more players are involved: no player can gain benefit by modifying their own strategy given the opponents' equilibrium strategies. In the context of adversarial multi-objective games, the Nash equilibrium can be represented as a system of coupled Hamilton-Jacobi-Bellman (HJB) equations when the system satisfies the Markovian property. Analytic solutions exist only for a few special cases. Therefore, obtaining the Nash equilibrium solution is usually done numerically, and this can become challenging as the number of states/agents increases. Despite extensive theoretical work, the algorithmic part has received less attention and mainly addresses special cases of differential games (e.g., Duncan & Pasik-Duncan (2015)), or suffers from the curse of dimensionality (Kushner, 2002). Nevertheless, stochastic differential games have a variety of applications, including in robotics and autonomy, economics, and management. Relevant examples include Mataramvura & Øksendal (2008), who formulate portfolio management as a stochastic differential game in order to obtain a market portfolio that minimizes the convex risk measure of a terminal wealth index value, as well as Prasad & Sethi (2004), who investigate optimal advertising spending in duopolistic settings via stochastic differential games.

Reinforcement Learning (RL) aims at obtaining a policy which can generate optimal sequential decisions while interacting with the environment. Commonly, the policy is trained by collecting histories of states, actions, and rewards, and updating the policy accordingly. Multi-agent Reinforcement Learning (MARL) is an extension of RL where several agents compete in a common environment, which is a more complex task due to the interaction between the agents and the environment, as well as among the agents themselves. One approach is to assume agents to be part of the environment (Tan, 1993), but this may lead to unstable learning during policy updates (Matignon et al., 2012). On the other hand, a centralized approach considers MARL through an augmented state and action system, reducing its training to that of a single-agent RL problem. Because of the combinatorial complexity, the centralized learning method cannot scale to more than 10 agents (Yang et al., 2019). Another method is centralized training with decentralized execution (CTDE); however, the challenge therein lies in how to decompose the value function in the execution phase for value-based MARL. Sunehag et al. (2018) and Zhou et al. (2019) decompose the joint value function into a summation of individual value functions. Rashid et al. (2018) keep the monotonic trends between centralized and decentralized value functions by augmenting the summation non-linearly and designing a mixing network (QMIX). Further modifications of QMIX include Son et al. (2019) and Mahajan et al. (2019).
The mathematical formulation of a differential game leads to a nonlinear PDE. This motivates algorithmic development for differential games that combines elements of PDE theory with deep learning. Recent encouraging results (Han et al., 2018; Raissi, 2018) in solving nonlinear PDEs within the deep learning community illustrate the scalability and numerical efficiency of neural networks. The transition from a PDE formulation to a trainable neural network is done via the concept of a system of Forward-Backward Stochastic Differential Equations (FBSDEs). Specifically, certain PDE solutions are linked to solutions of FBSDEs, and the latter can be solved using a suitably defined neural network architecture. This is known in the literature as the deep FBSDE approach. Han et al. (2018), Pereira et al. (2019), and Wang et al. (2019b) utilize various deep neural network architectures to solve such stochastic systems. However, these algorithms address single-agent dynamical systems. Two-player zero-sum games using FBSDEs were initially developed in Exarchos et al. (2019) and transferred to a deep learning setting in Wang et al. (2019a). Recently, Hu (2019) brought deep learning into fictitious play to solve multi-agent non-zero-sum games, Han & Hu (2019) introduced deep FBSDEs to the multi-agent scenario together with the concept of fictitious play, and, furthermore, Han et al. (2020) give the convergence proof.

In this work we propose an alternative deep FBSDE approach to multi-agent non-cooperative differential games, aiming to reduce complexity and increase the number of agents the framework can handle. The main contribution of our work is threefold:

1. We introduce an efficient deep FBSDE framework for solving stochastic multi-agent games via fictitious play that outperforms the current state of the art in Relative Square Error (RSE) and runtime/memory efficiency on an inter-bank lending/borrowing example.

2. We demonstrate that our approach scales to a much larger number of agents (up to 1,000 agents, compared to 50 in existing work). To the best of our knowledge, this represents a new state of the art.

3. We showcase the applicability of our framework to robotics on a belief-space autonomous racing problem which has larger individual control and state spaces. The experiments demonstrate that the decoupled BSDE opens up the possibility of applications to competitive scenarios.

The rest of the paper is organized as follows: in Section 2 we present the mathematical preliminaries. In Section 3 we introduce the Deep Fictitious Play Belief FBSDE, with simulation results following in Section 4. We conclude the paper and discuss some future directions in Section 5.

2 MULTI-AGENT FICTITIOUS PLAY FBSDE

Fictitious play is a learning rule first introduced in Brown (1951) where each player presumes the other players' strategies to be fixed. An $N$-player game can then be decoupled into $N$ individual decision-making problems which can be solved iteratively over $M$ stages. When each agent¹ converges to a stationary strategy at stage $m$, this strategy will become the stationary strategy for the other players at stage $m+1$. We consider an $N$-player non-cooperative stochastic differential game with dynamics

$$dX(t) = \left(f(X(t), t) + G(X(t), t)\, U(t)\right) dt + \Sigma(X(t), t)\, dW(t), \quad X(0) = X_0, \quad (1)$$

where $X = (x_1, x_2, \ldots, x_N)$ is a vector containing the state processes of all agents, generated by their controls
$U = (u_1, u_2, \ldots, u_N)$ with $x_i \in \mathbb{R}^{n_x}$ and $u_i \in \mathbb{R}^{n_u}$. Here, $f : \mathbb{R}^{n_x} \times [0, T] \to \mathbb{R}^{n_x}$ represents the drift dynamics, $G : \mathbb{R}^{n_x} \times [0, T] \to \mathbb{R}^{n_x \times n_u}$ represents the actuator dynamics, and $\Sigma : [0, T] \times \mathbb{R}^n \to \mathbb{R}^{n_x \times n_w}$ represents the diffusion term. We assume that each agent is only driven by its own controls, so $G$ is a block-diagonal matrix with $G_i$ corresponding to the actuation of agent $i$.

¹Agent and player are used interchangeably in this paper.

Each agent is also driven by its own $n_w$-dimensional independent Brownian motion $W_i$, and we denote $W = (W_1, W_2, \ldots, W_N)$. Let $\mathcal{U}_i$ be the set of admissible strategies for agent $i \in I := \{1, 2, \ldots, N\}$ and $\mathcal{U} = \otimes_{i=1}^N \mathcal{U}_i$ the Kronecker product space of the $\mathcal{U}_i$. Given the other agents' strategies, the stochastic optimal control problem for agent $i$ under the fictitious play assumption is defined as minimizing the expectation of the cumulative cost functional $J_t^i$

$$J_t^i(X, u_{i,m}; u_{-i,m-1}) = \mathbb{E}\left[g(X(T)) + \int_t^T C_i\left(X(\tau), u_{i,m}(X(\tau), \tau), \tau; u_{-i,m-1}\right) d\tau\right], \quad (2)$$

where $g : \mathbb{R}^{n_x} \to \mathbb{R}^+$ is the terminal cost, and $C_i : [0, T] \times \mathbb{R}^{n_x} \times \mathcal{U} \to \mathbb{R}^+$ is the running cost for the $i$-th player. In this paper we assume that the running cost is of the form $C(X, u_{i,m}, t) = q(X) + \frac{1}{2} u_{i,m}^T R\, u_{i,m} + X^T Q\, u_{i,m}$. We use the double subscript $u_{i,m}$ to denote the control of agent $i$ at stage $m$ and the negative subscript $-i$ for the strategies excluding player $i$, $u_{-i} = (u_1, \ldots, u_{i-1}, u_{i+1}, \ldots, u_N)$. We can define the value function of each player as

$$V^i(t, X(t)) = \inf_{u_{i,m} \in \mathcal{U}_i}\left[J_t^i(X, u_{i,m}; u_{-i,m-1})\right], \quad V^i(T, X(T)) = g(X(T)). \quad (3)$$

Assume that the value function in eq. (3) is once differentiable w.r.t. $t$ and twice differentiable w.r.t. $x$. Then, standard stochastic optimal control theory leads to the HJB PDE

$$V_t^i + h + V_x^{i\,T}\left(f + G U_{0,-i}\right) + \frac{1}{2}\operatorname{tr}\left(V_{xx}^i \Sigma \Sigma^T\right) = 0, \quad V^i(T, X) = g(X(T)), \quad (4)$$

where $h = C_i^*$, the running cost evaluated at the augmented control $U_{*,0}$. The double subscript of $U_{*,0}$ denotes the augmentation of the optimal control $u_{i,m}^* = -R^{-1}(G_i^T V_x^i + Q_i^T x)$ and the zero control $u_{-i,m-1} = 0$, and $U_{0,-i}$ denotes the augmentation of $u_{i,m} = 0$ and $u_{-i,m-1}$. Here we drop the functional dependencies in the HJB equation for simplicity. The detailed proof is in Appendix A. The value function in the HJB PDE can be related to a set of FBSDEs

$$dX = \left(f + G U_{*,-i}\right) dt + \Sigma\, dW, \quad X(0) = x_0,$$
$$dV^i = -\left(h + V_x^{i\,T} G U_{*,0}\right) dt + V_x^{i\,T} \Sigma\, dW, \quad V^i(T) = g(X(T)), \quad (5)$$

where the backward process corresponds to the value function. The detailed derivation can be found in Appendix B. Note that the FBSDEs here differ from those of Han & Hu (2019) in the optimal control of agent $i$, $G U_{*,-i}$, in the forward process and the compensation, $V_x^{i\,T} G U_{*,0}$, in the backward process. This is known as importance sampling for FBSDEs and allows the FBSDEs to be guided to explore the state space more efficiently.
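To make the deep FBSDE recipe concrete, the following PyTorch sketch propagates a system like (5) for a single agent with an Euler-Maruyama discretization, using a network for $V_x$ and a trainable initial value $V(0)$. The dynamics handles f, G, Sigma, the running-cost handle h, the terminal cost g, and the simplifying choice $R = I$ are all placeholder assumptions; the full fictitious-play loop over agents and stages is omitted:

```python
import torch
import torch.nn as nn

class DeepFBSDE(nn.Module):
    """Minimal single-agent deep FBSDE rollout. A network approximates
    V_x(t, X); X and V are stepped jointly by Euler-Maruyama, and training
    matches V(T) to the terminal cost g(X(T))."""

    def __init__(self, nx, hidden=64):
        super().__init__()
        self.vx_net = nn.Sequential(
            nn.Linear(nx + 1, hidden), nn.Tanh(), nn.Linear(hidden, nx))
        self.v0 = nn.Parameter(torch.zeros(1))   # trainable initial value V(0, x0)

    def rollout(self, x0, f, G, Sigma, h, g, T=1.0, n_steps=50):
        B = x0.shape[0]
        X, V = x0.clone(), self.v0.expand(B, 1)
        dt = T / n_steps
        for k in range(n_steps):
            t = torch.full((B, 1), k * dt)
            Vx = self.vx_net(torch.cat([X, t], dim=-1))       # V_x(t, X)
            Gk, Sk = G(X, t), Sigma(X, t)                     # [B,nx,nu], [B,nx,nw]
            u = -torch.einsum('bxu,bx->bu', Gk, Vx)           # u* = -G^T V_x (R = I)
            dW = torch.randn(B, Sk.shape[-1]) * dt ** 0.5
            hk = h(X, u, t)                                   # running-cost term, [B,1]
            # forward SDE: dX = (f + G u*) dt + Sigma dW
            X = X + (f(X, t) + torch.einsum('bxu,bu->bx', Gk, u)) * dt \
                  + torch.einsum('bxw,bw->bx', Sk, dW)
            # backward SDE: dV = -h dt + V_x^T Sigma dW
            V = V - hk * dt \
                  + torch.einsum('bx,bxw,bw->b', Vx, Sk, dW).unsqueeze(-1)
        return V, g(X)

# Training would minimize ((V_T - g_T) ** 2).mean() over rollouts, so that the
# learned V_x and v0 become consistent with the terminal condition.
```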
Up to page 3 the paper was easy to follow, i.e., the analytical expressions in eq. (5) and the basic idea of Algorithm 1 (which is the same as in prior works by Han et al., Wang et al., and Pereira et al.) are clear. However, after page 3 the paper is hard to follow. The specific points are as follows:
SP:368ac9d4b7934e68651c1b54286d9332caf16473
Regularized Mutual Information Neural Estimation
1 INTRODUCTION

Identifying a relationship between two variables of interest is one of the great linchpins in mathematics, statistics, and machine learning (Goodfellow et al., 2014; Ren et al., 2015; He et al., 2016; Vaswani et al., 2017). Not surprisingly, this problem is closely tied to measuring the relationship between two variables. One of the fundamental approaches is information theory-based measurement, namely the estimation of mutual information (MI). Recently, Belghazi et al. (2018) proposed a neural network-based MI estimator, called the Mutual Information Neural Estimator (MINE). Due to its differentiability and applicability, it has motivated several lines of research, such as various loss functions bridging the gap between latent variables and representations (Chen et al., 2016; Belghazi et al., 2018; Oord et al., 2018; Hjelm et al., 2019), and methodologies identifying the relationship between input, output, and hidden variables (Tishby & Zaslavsky, 2015; Shwartz-Ziv & Tishby, 2017; Saxe et al., 2019). Although many works have shown its computational tractability and usefulness, many intriguing questions about the MI estimator itself remain unanswered.

• How does the neural network inside MINE behave when estimating MI?
• Why does the MINE loss diverge in some cases? Where does the instability originate from?
• Can we make a more stable estimate in small batch size settings?
• Why is the value of each term in the MINE loss shifting even after the estimated MI converges? Are there any side effects of this phenomenon?

This study attempts to answer these questions by designing a synthetic dataset to interpret network outputs. Through keen observation, we dissect the Donsker-Varadhan representation (DV) term by term and conclude that the instability and the drifting are caused by the interrelationship between stochastic gradient descent based optimization and the theoretical properties of DV. Based on these insights, we extend DV to derive a novel lower bound for MI estimation, which mitigates the aforementioned problems and circumvents the batch size limitation by maintaining a history of network outputs. We furthermore look into the L2 regularizer form of our bound in detail and analyze how various hyper-parameters impact the estimation of MI and its dynamics during the optimization process. Finally, we demonstrate that our method, called ReMINE, performs favorably against other existing estimators in multiple settings.

2 RELATED WORKS

Definition of Mutual Information. The mutual information between two random variables $X$ and $Y$ is defined as

$$I(X; Y) = D_{KL}(P_{XY} \| P_X \otimes P_Y) = \mathbb{E}_{P_{XY}}\left[\log \frac{dP_{XY}}{d(P_X \otimes P_Y)}\right] \quad (1)$$

where $P_{XY}$ and $P_X \otimes P_Y$ are the joint and the product-of-marginals distribution, respectively, and $D_{KL}$ is the Kullback-Leibler (KL) divergence. Without loss of generality, we consider $P_{XY}$ and $P_X \otimes P_Y$ as being distributions on a compact domain $\Omega \subset \mathbb{R}^d$.

Variational Mutual Information Estimation. Recent works on MI estimation focus on training a neural network to represent a tight variational MI lower bound, of which there are several types of representations. Although these methods are known to have statistical limitations (McAllester & Stratos, 2018), their versatility is widely employed nonetheless (Hjelm et al., 2019; Veličković et al., 2018; Polykovskiy et al., 2018; Ravanelli & Bengio, 2018; François-Lavet et al., 2019).
One of the most commonly used is the Donsker-Varadhan representation, which was first used in Belghazi et al. (2018) to estimate MI through neural networks.

Lemma 1. (Donsker-Varadhan representation (DV))

$$I(X; Y) = \sup_{T : \Omega \to \mathbb{R}} \mathbb{E}_{P_{XY}}[T] - \log\left(\mathbb{E}_{P_X \otimes P_Y}[e^T]\right), \quad (2)$$

where both the expectations $\mathbb{E}_{P_{XY}}[T]$ and $\mathbb{E}_{P_X \otimes P_Y}[e^T]$ are finite.

However, as the second term in Eq. (2) leads to biased gradient estimates with a limited number of samples, MINE uses exponential moving averages of mini-batches to alleviate this problem. To further improve the sampling efficiency of MINE, Lin et al. (2019) propose DEMINE, which partitions the samples into train and test sets. Other representations based on f-measures are also proposed by Nguyen et al. (2010) and Nowozin et al. (2016), which produce unbiased estimates and hence eliminate the need for additional techniques.

Lemma 2. (Nguyen, Wainwright, and Jordan representation (NWJ))

$$I(X; Y) = \sup_{T : \Omega \to \mathbb{R}} \mathbb{E}_{P_{XY}}[T] - \mathbb{E}_{P_X \otimes P_Y}\left[e^{T-1}\right], \quad (3)$$

where the bound is tight when $T = \log(dP/dQ) + 1$.

Nevertheless, if MI is too large, estimators exhibit large bias or variance (McAllester & Stratos, 2018; Song & Ermon, 2020). To strike a balance, Poole et al. (2019) design a new estimator $I_\alpha$ that interpolates between Contrastive Predictive Coding (Oord et al., 2018) and NWJ. Yet, these methods concentrate on various stabilization techniques rather than revealing the dynamics inside the black box. In this paper, we focus on the DV representation and provide intuitive understandings of the inner mechanisms of neural network-based estimators. Based on the analysis, we introduce a new regularization term for MINE, which can effectively remedy its weaknesses both theoretically and practically.

3 HOW DOES MINE ESTIMATE?

Before going any further, we first observe the statistics network output in MINE during the optimization process using our novel synthetic dataset, and identify and analyze the following phenomena:

• The drifting phenomenon (Fig. 1a), where the estimates of $\mathbb{E}_{P_{XY}}[T]$ and $\log(\mathbb{E}_{P_X \otimes P_Y}[e^T])$ drift in parallel even after the MI estimate converges.
• Exploding network outputs (Fig. 1d), where smaller batch sizes cause the network outputs to explode, whereas training with larger batch sizes reduces the variance of the MI estimates (Fig. 2a).
• Bimodal distribution of the outputs (Fig. 2b), where the network not only classifies input samples but also clusters the network outputs as the MI estimate converges.

Based on these observations, we analyze the inner workings of MINE and understand how batch size affects MI estimation.

3.1 EXPERIMENT SETTINGS

Dataset. We designed a one-hot discrete dataset with uniform distribution $U(1, N)$ to estimate $I(X; X) = \log N$ with MINE, while easily discerning samples of the joint distribution $(X, X)$ from samples of the marginal distribution $X \otimes X$. Additionally, we use a one-hot representation to increase the input dimension, resulting in more network weights to train. In this paper, we used $N = 16$.

Network settings. We designed a simple statistics network $T$ with a concatenated vector of dimension $N \times 2 = 32$ as input. We pass the input through two fully connected layers with ReLU activation, with widths 32-256-1. The last layer outputs a single scalar with no bias and no activation. We used stochastic gradient descent (SGD) with learning rate 0.1 to optimize the statistics network.

3.2 OBSERVATIONS

We can observe the drifting phenomenon in Fig.
1a, where the statistics of the network output are adrift even after the convergence of the MINE loss. The analysis of this phenomenon will be covered in more detail with theoretical results in Section 4. This section focuses extensively on the relationship between batch size and logsumexp, and on the classifying nature of the MINE loss.

Batch size limitation. MINE estimates in a batch-wise manner, i.e., MINE uses the samples inside a single batch when estimating $\mathbb{E}_{P_{XY}}[T]$ and $\log(\mathbb{E}_{P_X \otimes P_Y}[e^T])$. Consider the empirical DV

$$\hat{I}(X; Y) = \sup_{T_\theta : \Omega \to \mathbb{R}} \mathbb{E}^{(n)}_{\hat{P}_{XY}}[T_\theta] - \log \mathbb{E}^{(n)}_{\widehat{P_X \otimes P_Y}}\left[e^{T_\theta}\right], \quad (4)$$

where $\mathbb{E}^{(n)}_{\hat{P}}$ is an empirical average associated with batch size $n$. Therefore, the variance of $\hat{I}(X; Y)$ increases as the batch size decreases. The observation in Fig. 2a is consistent with the batch size limitation problem (McAllester & Stratos, 2018; Song & Ermon, 2020), which shows that MINE must have a batch size proportional to the exponential of the true MI to control the variance of the estimation.

Exploding network outputs. We can understand the output explosion problem in detail by comparing Fig. 1b and Fig. 1d. During optimization, the network outputs of joint samples get increased by the first term of Eq. (4), where the inverse of the batch size multiplies the gradient of each network output. On the other hand, the outputs of marginal samples get decreased by the second term of Eq. (4), which concentrates the gradient on the maximum output. Note that the second term is dominated by the maximum network output due to logsumexp, which is a smooth approximation of max. As a single batch is sampled from the true underlying distribution, joint-case samples may or may not exist among the marginal samples. If such a sample exists, then the joint sample output dominates the term and its output gets decreased accordingly, while the other non-joint sample outputs also get slightly decreased. In summary, the second term acts as an occasional restriction on the increase of joint sample network outputs.¹

The second term poses a problem when the batch size is not large enough. With a reduced sample size, joint samples that dominate the second term become rare. In the case where no joint sample exists, marginal sample network outputs decrease much faster than in the opposite case, and joint sample network outputs are more rarely restricted; thus the network outputs diverge in both directions (Fig. 1d), and the second term oscillates between two extreme values depending on whether the joint case occurred (Fig. 1c). This obviously leads to numerical instability and estimation failure.

Bimodal distribution of the outputs. We furthermore observed the network outputs directly, as both averaging terms of DV can inhibit the observation of how the statistics network acts on each sample. From the neural network viewpoint, whether each sample is realized from the joint or the marginal distribution is not distinguishable for the joint cases among marginal samples. Therefore, the statistics network has no means but to return the same output value, as can be seen in Fig. 1b, indicating that the network can only separate joint and non-joint cases. This provides a clue that the network is solving a classification task, isolating joint samples from marginal samples, although the statistics network is only provided with samples from the joint and marginal distributions.
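The batch-wise estimate of eq. (4) is compact to state in code. The following PyTorch sketch makes the logsumexp-dominated second term explicit; `net` stands for the statistics network $T_\theta$, shuffling $y$ within the batch approximates sampling from $P_X \otimes P_Y$, and the exponential-moving-average gradient correction of Belghazi et al. (2018) is omitted for brevity:

```python
import math
import torch

def empirical_dv(t_joint, t_marginal):
    """Batch-wise DV value of eq. (4): mean of T over joint samples minus
    log-mean-exp of T over (approximate) marginal samples. The second term
    is a logsumexp, hence dominated by the maximum network output."""
    n = t_marginal.shape[0]
    second_term = torch.logsumexp(t_marginal, dim=0) - math.log(n)
    return t_joint.mean() - second_term

def mine_step(net, x, y):
    """One objective evaluation. x, y: paired samples from P_XY."""
    idx = torch.randperm(y.shape[0])
    t_joint = net(x, y)          # T_theta on joint samples
    t_marginal = net(x, y[idx])  # shuffling breaks the pairing: ~ P_X (x) P_Y
    return -empirical_dv(t_joint, t_marginal)  # minimize the negative bound
```

Note that for the one-hot dataset above, a shuffled pair can coincide with a true joint pair, which is exactly the "joint cases among marginal samples" situation driving the behavior described in this section.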
We observed the distribution of the network outputs in detail, for the case where only the marginal samples are fed to the statistics network, in Fig. 2b. It stands to reason that the network outputs follow a particular distribution, as the network output estimates a log-likelihood ratio between the joint and marginal distributions up to an added constant (Lemma 3). Through this, we can view the estimated MI as a sample average; hence Fig. 2a resembles Gaussian noise by the Central Limit Theorem (CLT). Let us continue by concentrating on each network output. For the one-hot discrete dataset, there is no distinction between the log-likelihood ratios of samples in the same class: one value, say $j_1$, for the joint case and another, $j_0$, for the non-joint case. This explains the classifying nature of the statistics network, and why there have to be exactly two clusters in Fig. 2b. Also, as $j_0$ tends to $-\infty$, $\exp(j_0)$ nears 0, and $\exp(j_1)$ is a few orders of magnitude bigger than $\exp(j_0)$ (see Fig. 2b). As mentioned above, a few joint cases dominate the second term, so the second term becomes inherently noisier than the first term. Note that the effectiveness of conventional remedies, such as applying an exponential moving average to the second term (Belghazi et al., 2018) or clipping the network output values to restrict their magnitude (Song & Ermon, 2020), can also be understood with the analysis above. In addition, we cannot interpret the network outputs directly as the log-likelihood ratio, due to the unregularized outputs, i.e., the drifting problem. We will look into this fundamental limitation of MINE in more detail in the next section.

¹Loosely speaking, the first term slowly increases many joint-sample network outputs, in contrast to the second term, which quickly decreases a few joint-sample network outputs.
This paper attempts to answer four questions raised about the mutual information estimator. To this end, the paper investigates why MINE succeeds or fails during optimization on a synthetic dataset. Based on the observations and discussions, the paper then proposes a novel lower bound to regularize the neural network and alleviate the problems of MINE.
SP:e4664a073afd05446cb1ddc217163692a9a12c1c
Contextual Dropout: An Efficient Sample-Dependent Dropout Module
1 INTRODUCTION

Deep neural networks (NNs) have become ubiquitous and achieved state-of-the-art results in a wide variety of research problems (LeCun et al., 2015). To prevent over-parameterized NNs from overfitting, we often need to appropriately regularize their training. One way to do so is to use Bayesian NNs, which treat the NN weights as random variables and regularize them with appropriate prior distributions (MacKay, 1992; Neal, 2012). More importantly, we can obtain the model's confidence in its predictions by evaluating the consistency between the predictions conditioned on different posterior samples of the NN weights. However, despite significant recent efforts in developing various types of approximate inference for Bayesian NNs (Graves, 2011; Welling & Teh, 2011; Li et al., 2016; Blundell et al., 2015; Louizos & Welling, 2017; Shi et al., 2018), the large number of NN weights makes it difficult to scale to real-world applications.

Dropout has been demonstrated to be another effective regularization strategy, which can be viewed as imposing a distribution over the NN weights (Gal & Ghahramani, 2016). Relating dropout to Bayesian inference provides a much simpler and more efficient way than vanilla Bayesian NNs to provide uncertainty estimation (Gal & Ghahramani, 2016), as there is no longer a need to explicitly instantiate multiple sets of NN weights. For example, Bernoulli dropout randomly shuts down neurons during training (Hinton et al., 2012; Srivastava et al., 2014). Gaussian dropout multiplies the neurons with independent and identically distributed (iid) Gaussian random variables drawn from $\mathcal{N}(1, \alpha)$, where the variance $\alpha$ is a tuning parameter (Srivastava et al., 2014). Variational dropout generalizes Gaussian dropout by reformulating it in a Bayesian setting and allowing $\alpha$ to be learned under a variational objective (Kingma et al., 2015; Molchanov et al., 2017).

∗Equal contribution. Correspondence to: mingyuan.zhou@mccombs.utexas.edu

However, the quality of uncertainty estimation depends heavily on the dropout probabilities (Gal et al., 2017). To avoid grid search over the dropout probabilities, Gal et al. (2017) and Boluki et al. (2020) propose to automatically learn the dropout probabilities, which not only leads to a faster experiment cycle but also enables the model to have different dropout probabilities for each layer, bringing greater flexibility into uncertainty modeling. But these methods still impose the restrictive assumption that the dropout probabilities are global parameters shared across all data samples. By contrast, we consider parameterizing the dropout probabilities as a function of the input covariates, treating them as data-dependent local variables. Applying covariate-dependent dropout allows different data to have different distributions over the NN weights. This generalization has the potential to greatly enhance the expressiveness of a Bayesian NN. However, learning covariate-dependent dropout rates is challenging. Ba & Frey (2013) propose standout, where a binary belief network is laid over the original network, and develop a heuristic approximation to optimize the free energy. But, as pointed out by Gal et al. (2017), it is not scalable due to its need to significantly increase the model size.
In this paper, we propose a simple and scalable contextual dropout module, whose dropout rates depend on the covariates $x$, as a new approximate Bayesian inference method for NNs. With a novel design that reuses the main network to define how the covariate-dependent dropout rates are produced, it boosts performance while only slightly increasing the memory and computational cost. Our method greatly enhances the flexibility of modeling, maintains the inherent advantages of dropout over conventional Bayesian NNs, and is generally simple to implement and scalable to large-scale applications. We plug the contextual dropout module into various types of NN layers, including fully connected, convolutional, and attention layers. On a variety of supervised learning tasks, contextual dropout achieves good performance in terms of accuracy and quality of uncertainty estimation.

2 CONTEXTUAL DROPOUT

We introduce an efficient solution for data-dependent dropout: (1) treat the dropout probabilities as sample-dependent local random variables, (2) propose an efficient parameterization of the dropout probabilities by sharing parameters between the encoder and decoder, and (3) learn the dropout distribution with a variational objective.

2.1 BACKGROUND ON DROPOUT MODULES

Consider a supervised learning problem with training data $\mathcal{D} := \{x_i, y_i\}_{i=1}^N$, where we model the conditional probability $p_\theta(y_i \mid x_i)$ using a NN parameterized by $\theta$. Applying dropout to a NN often means element-wise reweighing of each layer with a data-specific Bernoulli/Gaussian distributed random mask $z_i$, drawn iid from a prior $p_\eta(z)$ parameterized by $\eta$ (Hinton et al., 2012; Srivastava et al., 2014). This implies that dropout training can be viewed as approximate Bayesian inference (Gal & Ghahramani, 2016). More specifically, one may view the learning objective of a supervised learning model with dropout as a log-marginal-likelihood: $\log \int \prod_{i=1}^N p(y_i \mid x_i, z)\, p(z)\, dz$. To maximize this often intractable log-marginal, it is common to resort to variational inference (Hoffman et al., 2013; Blei et al., 2017), which introduces a variational distribution $q(z)$ on the random mask $z$ and optimizes an evidence lower bound (ELBO):

$$\mathcal{L}(\mathcal{D}) = \mathbb{E}_{q(z)}\left[\log \frac{\prod_{i=1}^N p_\theta(y_i \mid x_i, z)\, p_\eta(z)}{q(z)}\right] = \left(\sum_{i=1}^N \mathbb{E}_{z_i \sim q(z)}\left[\log p_\theta(y_i \mid x_i, z_i)\right]\right) - \mathrm{KL}(q(z) \| p_\eta(z)), \quad (1)$$

where $\mathrm{KL}(q(z) \| p_\eta(z)) = \mathbb{E}_{q(z)}[\log q(z) - \log p_\eta(z)]$ is a Kullback-Leibler (KL) divergence based regularization term. Whether the KL term is explicitly imposed is a key distinction between regular dropout (Hinton et al., 2012; Srivastava et al., 2014) and its Bayesian generalizations (Gal & Ghahramani, 2016; Gal et al., 2017; Kingma et al., 2015; Molchanov et al., 2017; Boluki et al., 2020).

2.2 COVARIATE-DEPENDENT WEIGHT UNCERTAINTY

In regular dropout, as shown in (1), while we make the dropout masks data-specific during optimization, we keep their distributions the same. This implies that while the NN weights can vary from data to data, their distribution is kept data-invariant. In this paper, we propose contextual dropout, in which the distributions of the dropout masks $z_i$ depend on the covariates $x_i$ for each sample $(x_i, y_i)$. Specifically, we define the variational distribution as $q_\phi(z_i \mid x_i)$, where $\phi$ denotes its NN parameters. In the framework of amortized variational Bayes (Kingma & Welling, 2013; Rezende et al.
, 2014), we can view $q_\phi$ as an inference network (encoder) trying to approximate the posterior $p(z_i \mid y_i, x_i) \propto p(y_i \mid x_i, z_i)\, p(z_i)$. Note that, as we have no access to $y_i$ during testing, we parameterize our encoder in a way that it depends on $x_i$ but not $y_i$. From the optimization point of view, what we propose corresponds to the ELBO of $\log \prod_{i=1}^N \int p(y_i \mid x_i, z_i)\, p(z_i)\, dz_i$ given $q_\phi(z_i \mid x_i)$ as the encoder, which can be expressed as

$$\mathcal{L}(\mathcal{D}) = \sum_{i=1}^N \mathcal{L}(x_i, y_i), \quad \mathcal{L}(x_i, y_i) = \mathbb{E}_{z_i \sim q_\phi(\cdot \mid x_i)}\left[\log p_\theta(y_i \mid x_i, z_i)\right] - \mathrm{KL}(q_\phi(z_i \mid x_i) \| p_\eta(z_i)). \quad (2)$$

This ELBO differs from that of regular dropout in (1) in that the dropout distributions for $z_i$ are now parameterized by $x_i$, and a single KL regularization term is replaced with the aggregation of $N$ data-dependent KL terms. Unlike in conventional Bayesian NNs, as $z_i$ is now a local random variable, the impact of the KL terms will not diminish as $N$ increases, and from the viewpoint of uncertainty quantification, contextual dropout relies only on aleatoric uncertainty to model its uncertainty on $y_i$ given $x_i$. As with conventional BNNs, we may add epistemic uncertainty by imposing a prior distribution on $\theta$ and/or $\phi$, and inferring their posterior given $\mathcal{D}$. As contextual dropout with a point estimate of both $\theta$ and $\phi$ already achieves state-of-the-art performance, we leave that extension for future research. In what follows, we omit the data index $i$ for simplicity and formally define the model structure.

Cross-layer dependence: For a NN with $L$ layers, we denote $z = \{z^1, \ldots, z^L\}$, with $z^l$ representing the dropout masks at layer $l$. As we expect $z^l$ to be dependent on the dropout masks in previous layers $\{z^j\}_{j<l}$, we introduce an autoregressive distribution $q_\phi(z \mid x) = \prod_{l=1}^L q_\phi(z^l \mid x^{l-1})$, where $x^{l-1}$, the output of layer $l-1$, is a function of $\{z^1, \ldots, z^{l-1}, x\}$.

Parameter sharing between encoder and decoder: We aim to build an encoder by modeling $q_\phi(z^l \mid x^{l-1})$, where $x$ may come from complex and highly structured data such as images and natural language. Thus, extracting useful features from $x$ to learn the encoder distribution $q_\phi$ itself becomes a problem as challenging as the original one, i.e., extracting discriminative features from $x$ to predict $y$. As intermediate layers in the decoder network $p_\theta$ are already learning useful features from the input, we choose to reuse them in the encoder instead of extracting the features from scratch. If we denote layer $l$ of the decoder network by $g_\theta^l$, then the output of layer $l$, given its input $x^{l-1}$, is $U^l = g_\theta^l(x^{l-1})$. Considering this as a learned feature of $x$, as illustrated in Figure 1, we build the encoder on this output as $\alpha^l = h_\varphi^l(U^l)$, draw $z^l$ conditioned on $\alpha^l$, and element-wise multiply $z^l$ with $U^l$ (with broadcast if needed) to produce the output of layer $l$ as $x^l$. In this way, we use $\{\theta, \varphi\}$ to parameterize the encoder, which reuses the parameters $\theta$ of the decoder. To produce the dropout rates of the encoder, we only need the extra parameters $\varphi$, whose added memory and computational cost are often insignificant in comparison to those of the decoder.

2.3 EFFICIENT PARAMETERIZATION OF THE CONTEXTUAL DROPOUT MODULE

Denote the output of layer $l$ by a multidimensional array (tensor) $U^l = g_\theta^l(x^{l-1}) \in \mathbb{R}^{C_1^l \times \cdots \times C_{D_l}^l}$, where $D_l$ denotes the number of dimensions of $U^l$ and $C_d^l$ denotes the number of elements along dimension $d \in \{1, \ldots, D_l\}$.
2.3 EFFICIENT PARAMETERIZATION OF THE CONTEXTUAL DROPOUT MODULE

Denote the output of layer $l$ by a multidimensional array (tensor) $U^l = g_\theta^l(x^{l-1}) \in \mathbb{R}^{C_1^l \times \cdots \times C_{D_l}^l}$, where $D_l$ denotes the number of dimensions of $U^l$ and $C_d^l$ denotes the number of elements along dimension $d \in \{1, \ldots, D_l\}$. For efficiency, the output shape of $h_\varphi^l$ is not matched to the shape of $U^l$. Instead, we make it smaller and broadcast the contextual dropout masks $z^l$ across the dimensions of $U^l$ (Tompson et al., 2015). Specifically, we parameterize the dropout logits $\alpha^l$ of the variational distribution to have $C_d^l$ elements, where $d \in \{1, \ldots, D_l\}$ is a specified dimension of $U^l$. We sample $z^l$ from the encoder and broadcast it across all but dimension $d$ of $U^l$. We sample $z^l \sim \mathrm{Ber}(\sigma(\alpha^l))$ under contextual Bernoulli dropout, and follow Srivastava et al. (2014) in using $z^l \sim \mathcal{N}\big(1, \sigma(\alpha^l)/(1 - \sigma(\alpha^l))\big)$ for contextual Gaussian dropout. To obtain $\alpha^l \in \mathbb{R}^{C_d^l}$, we first take the average pooling of $U^l$ across all but dimension $d$, with the output denoted $F_{\mathrm{avepool},d}(U^l)$, and then apply two fully-connected layers $\Phi_1^l$ and $\Phi_2^l$ connected by $F_{\mathrm{NL}}$, a (Leaky) ReLU based nonlinear activation function, as

$$\alpha^l = h_\varphi^l(U^l) = \Phi_2^l\big(F_{\mathrm{NL}}\big(\Phi_1^l\big(F_{\mathrm{avepool},d}(U^l)\big)\big)\big), \quad (3)$$

where $\Phi_1^l$ is a linear transformation mapping from $\mathbb{R}^{C_d^l}$ to $\mathbb{R}^{C_d^l/\gamma}$, while $\Phi_2^l$ maps from $\mathbb{R}^{C_d^l/\gamma}$ back to $\mathbb{R}^{C_d^l}$, with $\gamma$ a reduction ratio controlling the complexity of $h_\varphi^l$. Below we describe how to apply contextual dropout to three representative types of NN layers.

Contextual dropout module for fully-connected layers: If layer $l$ is a fully-connected layer and $U^l \in \mathbb{R}^{C_1^l \times \cdots \times C_{D_l}^l}$, we set $\alpha^l \in \mathbb{R}^{C_{D_l}^l}$, where $D_l$ is the dimension to which the linear transformation is applied. Note that if $U^l \in \mathbb{R}^{C_1^l}$, then $\alpha^l \in \mathbb{R}^{C_1^l}$ and $F_{\mathrm{avepool},1}$ is an identity map, so $\alpha^l = \Phi_2^l(F_{\mathrm{NL}}(\Phi_1^l(U^l)))$.

Contextual dropout module for convolutional layers: Assume layer $l$ is a convolutional layer with $C_3^l$ convolutional channels and $U^l \in \mathbb{R}^{C_1^l \times C_2^l \times C_3^l}$. Similar to Spatial Dropout (Tompson et al., 2015), we set $\alpha^l \in \mathbb{R}^{C_3^l}$ and broadcast its corresponding $z^l$ spatially, as illustrated in Figure 2. This parameterization is similar to the squeeze-and-excitation unit for convolutional layers, which has been shown to be effective in image classification tasks (Hu et al., 2018). However, in squeeze-and-excitation, $\sigma(\alpha^l)$ is used as channel-wise soft attention weights instead of dropout probabilities; it therefore serves as a deterministic mapping in the model rather than as a stochastic unit in the inference network.

Contextual dropout module for attention layers: Dropout has been widely used in attention layers (Xu et al., 2015b; Vaswani et al., 2017; Yu et al., 2019). For example, it can be applied to multi-head attention weights after the softmax operation (see illustrations in Figure 2). The weights are of dimension $[H, N_K, N_Q]$, where $H$ is the number of heads, $N_K$ the number of keys, and $N_Q$ the number of queries. In this case, we find that setting $\alpha^l \in \mathbb{R}^H$ gives good performance. Intuitively, this coincides with the choice of the channel dimension for convolutional layers, as heads in attention are analogous to channels in convolution.
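A sketch of the convolutional variant is given below, with the reduction ratio $\gamma = 16$ and the ReLU choice of $F_{\mathrm{NL}}$ as assumptions. The Gaussian branch is reparameterizable and can be trained directly; training the Bernoulli branch requires a discrete gradient estimator, which is omitted here.

```python
import torch
import torch.nn as nn

class ContextualDropout2d(nn.Module):
    """Channel-wise contextual dropout for a conv layer, following Eq. (3)."""
    def __init__(self, channels, gamma=16, gaussian=True):
        super().__init__()
        hidden = max(channels // gamma, 1)
        self.phi1 = nn.Linear(channels, hidden)   # Phi_1
        self.phi2 = nn.Linear(hidden, channels)   # Phi_2
        self.gaussian = gaussian

    def forward(self, u):                         # u = g_theta(x): [B, C, H, W]
        pooled = u.mean(dim=(2, 3))               # average-pool all but channels
        alpha = self.phi2(torch.relu(self.phi1(pooled)))
        keep = torch.sigmoid(alpha).clamp(max=1 - 1e-4)
        if self.gaussian:                         # z ~ N(1, sigma/(1 - sigma))
            z = 1 + (keep / (1 - keep)).sqrt() * torch.randn_like(keep)
        else:                                     # z ~ Ber(sigma(alpha))
            z = torch.bernoulli(keep)
        return u * z[:, :, None, None]            # broadcast spatially
```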
The paper proposes contextual dropout as a sample-dependent dropout module, which can be applied to different models at the expense of marginal memory and computational overhead. The authors chose to focus on visual question answering and image classification tasks. The results in the paper show that contextual dropout can improve accuracy on the ImageNet and VQA 2.0 datasets.
SP:b1c7e0c9656a0ec0399b6602f89f46323ff3436b
Net-DNF: Effective Deep Modeling of Tabular Data
1 INTRODUCTION

A key point in successfully applying deep neural models is the construction of architecture families that contain inductive bias relevant to the application domain. Architectures such as CNNs and RNNs have become the preeminent favorites for modeling images and sequential data, respectively. For example, the inductive bias of CNNs favors locality, as well as translation and scale invariance. With these properties, CNNs work extremely well on image data and are capable of generating problem-dependent representations that almost completely overcome the need for expert knowledge. Similarly, the inductive bias promoted by RNNs and LSTMs (and more recent models such as transformers) favors both locality and temporal stationarity. When considering tabular data, however, neural networks are not the hypothesis class of choice. Most often, the winning class in learning problems involving tabular data is decision forests. In Kaggle competitions, for example, gradient boosting of decision trees (GBDTs) (Chen & Guestrin, 2016; Friedman, 2001; Prokhorenkova et al., 2018; Ke et al., 2017) is generally the superior model. While it is quite practical to use GBDTs for medium-size datasets, it is extremely hard to scale these methods to very large datasets. Scaling up gradient boosting models has been addressed by several papers (Ye et al., 2009; Tyree et al., 2011; Fu et al., 2019; Vasiloudis et al., 2019). The most significant computational disadvantage of GBDTs is the need to store (almost) the entire dataset in memory, a disadvantage shared among the popular GBDT implementations XGBoost, LightGBM, and CatBoost. Moreover, handling multi-modal data, which involves both tabular and spatial data (e.g., medical records and images), is problematic. Thus, since GBDTs and neural networks cannot be organically optimized together, such multi-modal tasks are left with sub-optimal solutions. The creation of a purely neural model for tabular data, which can be trained with SGD end-to-end, is therefore a prime open objective. A few works have aimed at constructing neural models for tabular data (see Section 5). Currently, however, there is still no widely accepted end-to-end neural architecture that can handle tabular data and consistently replace fully-connected architectures, or better yet, replace GBDTs. Here we present Net-DNFs, a family of neural network architectures whose primary inductive bias is an ensemble of disjunctive normal form (DNF) formulas over linear separators. This family also promotes (input) feature selection and spatial localization of ensemble members. These inductive biases have been included by design to promote conceptually similar elements that are inherent in GBDTs and random forests. Appealingly, the Net-DNF architecture can be trained end-to-end using standard gradient-based optimization. Importantly, it consistently and significantly outperforms FCNs on tabular data, and can sometimes even outperform GBDTs. The choice of an appropriate inductive bias for specialized hypothesis classes for tabular data is challenging since, clearly, there are many different kinds of such data. Nevertheless, the "universality" of forest methods in handling a wide variety of tabular data suggests that it might be beneficial to emulate, using neural networks, the important elements that are part of the tree ensemble representation and algorithms.
Concretely, every decision tree is equivalent to some DNF formula over axis-aligned linear separators (see details in Section 3), which makes DNFs an essential element in any such construction. Second, all contemporary forest ensemble methods rely heavily on feature selection; this selection is manifested both during the induction of each individual tree, where features are sequentially and greedily selected using information gain or other related heuristics, and by uniformly sampling features for each ensemble member. Finally, forest methods include an important localization element: GBDTs through their sequential construction within a boosting approach, where each tree re-weights the instance domain differently, and random forests through their reliance on bootstrap sampling. Net-DNFs are designed to include precisely these three elements. After introducing Net-DNF, we include a Vapnik-Chervonenkis (VC) comparative analysis of DNFs and trees, showing that DNFs potentially have an advantage over trees when the input dimension is large, and vice versa. We then present an extensive empirical study. We begin with an ablation study over three real-life tabular data prediction tasks that convincingly demonstrates the importance of all three elements included in the Net-DNF design. Second, we analyze our novel feature selection component in controlled synthetic experiments, which indicate that this component is of independent interest. Finally, we compare Net-DNFs to FCNs and GBDTs over several large classification tasks, including two past Kaggle competitions. Our results indicate that Net-DNFs consistently outperform FCNs, and can sometimes even outperform GBDTs.

2 DISJUNCTIVE NORMAL FORM NETWORKS (NET-DNFS)

In this section we introduce the Net-DNF architecture, which consists of three elements. The main component is a block of layers emulating a DNF formula; this block will be referred to as a Disjunctive Normal Neural Form (DNNF). The second and third components, respectively, are a feature selection module and a localization one. In the remainder of this section we describe each component in detail. Throughout our description we denote by $x \in \mathbb{R}^d$ a column vector of input features, by $x_i$ its $i$th entry, and by $\sigma(\cdot)$ the sigmoid function.

2.1 A DISJUNCTIVE NORMAL NEURAL FORM (DNNF) BLOCK

A disjunctive normal neural form (DNNF) block is assembled from a two-hidden-layer network. The first layer creates affine "literals" (features) and is trainable. The second layer implements a number of soft conjunctions over the literals, and the third output layer is a neural OR gate. Importantly, only the first layer is trainable, while the other two are binary and fixed. We begin by describing the neural AND and OR gates. For an input vector $x$, we define soft, differentiable versions of these gates as

$$\mathrm{OR}(x) \triangleq \tanh\left(\sum_{i=1}^{d} x_i + d - 1.5\right), \qquad \mathrm{AND}(x) \triangleq \tanh\left(\sum_{i=1}^{d} x_i - d + 1.5\right).$$

These definitions are straightforwardly motivated by the precise neural implementation of the corresponding binary gates. Notice that by replacing tanh with a binary activation and changing the bias constant from 1.5 to 1, we obtain an exact implementation of the corresponding logical gates for binary input vectors (Anthony, 2005; Shalev-Shwartz & Ben-David, 2014); see a proof of this statement in Appendix A. Notably, each gate unit does not have any trainable parameters.
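The gates can be written in a few lines; this sketch (an illustration, not the authors' code) also checks their behavior on near-binary inputs in $\{-1, +1\}^d$.

```python
import torch

def soft_or(x):   # OR(x) = tanh( sum_i x_i + d - 1.5 )
    return torch.tanh(x.sum(-1) + x.shape[-1] - 1.5)

def soft_and(x):  # AND(x) = tanh( sum_i x_i - d + 1.5 )
    return torch.tanh(x.sum(-1) - x.shape[-1] + 1.5)

print(soft_and(torch.tensor([1., 1., 1.])))   # all true  -> tanh(1.5)  ~  0.905
print(soft_and(torch.tensor([1., 1., -1.])))  # one false -> tanh(-0.5) ~ -0.462
print(soft_or(torch.tensor([-1., -1., 1.])))  # one true  -> tanh(0.5)  ~  0.462
```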
We now define the AND gate in vector form, to project the logical operation onto a subset of the variables. The projection is controlled by an indicator column vector (a mask) $u \in \{0,1\}^d$. With respect to such a projection vector $u$, we define the corresponding projected gate as

$$\mathrm{AND}_u(x) \triangleq \tanh\left(u^\top x - \|u\|_1 + 1.5\right).$$

Equipped with these definitions, a $\mathrm{DNNF}(x): \mathbb{R}^d \to \mathbb{R}$ with $k$ conjunctions over $m$ literals is

$$L(x) \triangleq \tanh(x^\top W + b) \in \mathbb{R}^m, \quad (1)$$
$$\mathrm{DNNF}(x) \triangleq \mathrm{OR}\big(\left[\mathrm{AND}_{c_1}(L(x)), \mathrm{AND}_{c_2}(L(x)), \ldots, \mathrm{AND}_{c_k}(L(x))\right]\big). \quad (2)$$

Equation (1) defines $L(x)$, which generates $m$ "neural literals", each the result of a tanh activation of a (trainable) affine transformation. The (trainable) matrix $W \in \mathbb{R}^{d \times m}$, as well as the row bias vector $b \in \mathbb{R}^m$, determine the affine transformation for each literal, such that each column of $W$ corresponds to one literal. Equation (2) defines a DNNF. In this equation, the vectors $c_i \in \{0,1\}^m$, $1 \le i \le k$, are binary indicators such that $c_{ij} = 1$ iff the $j$th literal belongs to the $i$th conjunction. In our design, each literal belongs to a single conjunction. These indicator vectors are defined and fixed according to the number and length of the conjunctions (see Appendix D.2).

2.2 NET-DNFS

The embedding layer of a Net-DNF with $n$ DNNF blocks is a simple concatenation:

$$E(x) \triangleq \left[\mathrm{DNNF}_1(x), \mathrm{DNNF}_2(x), \ldots, \mathrm{DNNF}_n(x)\right]. \quad (3)$$

Depending on the application, the final Net-DNF is a composition of an output layer over $E(x)$. For example, for binary classification (logistic output layer), $\text{Net-DNF}(x): \mathbb{R}^d \to (0,1)$ is

$$\text{Net-DNF}(x) \triangleq \sigma\left(\sum_{i=1}^{n} w_i\, \mathrm{DNNF}_i(x) + b\right). \quad (4)$$

To summarize, a Net-DNF is always a four-layer network (including the output layer), and only the first and last layers are learned. Each DNNF block has two parameters: the number of conjunctions $k$ and the length $m$ of these conjunctions, allowing for a variety of Net-DNF architectures. In all our experiments we considered a single Net-DNF architecture with a fixed diversity of DNNF blocks, comprising a number of different DNNF groups with different $k$, each of which has a number of conjunction sizes $m$ (see details in Appendix D.2). The number $n$ of DNNFs was treated as a hyperparameter and selected based on a validation set, as described in Appendix D.1.
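Putting Eqs. (1)-(4) together, a DNNF block and the Net-DNF head can be sketched as follows; the assignment of literals to conjunctions via equal-size chunks is an illustrative assumption (the paper fixes the indicator vectors as described in Appendix D.2).

```python
import torch
import torch.nn as nn

class DNNF(nn.Module):
    """Trainable affine literals (Eq. 1) under fixed conjunction masks c_i
    and fixed AND/OR gates (Eq. 2)."""
    def __init__(self, d, m, k):
        super().__init__()
        self.lin = nn.Linear(d, m)                # literals L(x), trainable
        c = torch.zeros(k, m)
        for i, js in enumerate(torch.arange(m).chunk(k)):
            c[i, js] = 1.0                        # each literal in one conjunction
        self.register_buffer("c", c)              # fixed, not trained

    def forward(self, x):
        lits = torch.tanh(self.lin(x))                               # Eq. (1)
        conj = torch.tanh(lits @ self.c.t() - self.c.sum(1) + 1.5)   # AND_{c_i}
        return torch.tanh(conj.sum(-1) + conj.shape[-1] - 1.5)       # OR, Eq. (2)

class NetDNF(nn.Module):
    def __init__(self, d, m, k, n):
        super().__init__()
        self.blocks = nn.ModuleList(DNNF(d, m, k) for _ in range(n))
        self.out = nn.Linear(n, 1)                # trainable output layer
    def forward(self, x):                         # Eqs. (3)-(4)
        e = torch.stack([b(x) for b in self.blocks], dim=-1)
        return torch.sigmoid(self.out(e)).squeeze(-1)
```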
2.3 FEATURE SELECTION

One key strategy in decision tree training is greedy feature selection, which is performed hierarchically at every split and allows decision trees to exclude irrelevant features. Additionally, decision tree ensemble algorithms apply random sampling to select a subset of the features, which promotes diversity and prevents different trees from focusing on the same set of dominant features in their greedy selection. In line with these strategies, we include in our Net-DNFs conceptually similar feature selection elements: (1) a subset of features uniformly and randomly sampled for each DNNF; (2) a trainable mechanism for feature selection, applied to the resulting random subset. These two elements are combined and implemented in the affine literal generation layer described in Equation (1), and applied independently for each DNNF. We now describe these techniques in detail.

Recalling that $d$ is the input dimension, the random selection is made by generating a stochastic binary mask $m_s \in \{0,1\}^d$ (each block has its own mask), such that the probability of any entry being 1 is $p$ (see Appendix D.2 for details on setting this parameter). For a given mask $m_s$, this selection can be applied to the affine literals using the simple product $\mathrm{diag}(m_s)W$, where $W$ is the matrix from Equation (1). We then construct a trainable mask $m_t \in \mathbb{R}^d$, which is applied to the features kept by $m_s$. We introduce a novel trainable feature selection component that combines binary quantization of the mask with a modified elastic-net regularization. To train a binarized vector we resort to the straight-through estimator (Hinton, 2012; Hubara et al., 2017), which can effectively train non-differentiable step functions such as a threshold or sign. The trick is to compute the step function exactly in the forward pass, and to use a differentiable proxy in the backward pass. We use a version of the straight-through estimator for the sign function (Bengio et al., 2013),

$$\Phi(x) \triangleq \begin{cases} \mathrm{sign}(x), & \text{forward pass}; \\ \tanh(x), & \text{backward pass}. \end{cases}$$

Using the estimator $\Phi(x)$, we define a differentiable binary threshold function $T(x) = \frac{1}{2}\Phi(|x| - \epsilon) + \frac{1}{2}$, where $\epsilon \in \mathbb{R}$ defines an epsilon neighborhood around zero for which the output of $T(x)$ is zero, and one outside of this neighborhood (in all our experiments, we set $\epsilon = 1$ and initialize the entries of $m_t$ above this threshold). We then apply this selection via $\mathrm{diag}(T(m_t))W$. Given a fixed stochastic selection $m_s$, to train the binarized selection $m_t$ we employ regularization. Specifically, we consider a modified version of elastic net regularization, $R(m_t, m_s)$, tailored to our task. The modifications are reflected in two parts. First, the balance between the L1 and L2 regularization is controlled by a trainable parameter $\alpha \in \mathbb{R}$. Second, the L1 and L2 terms are replaced by $R_1(m_t, m_s)$ and $R_2(m_t, m_s)$, respectively (defined below). Moreover, since we want to take into account only features that were selected by the random component, the regularization is applied to the vector $m_{ts} = m_t \odot m_s$, where $\odot$ is element-wise multiplication. The functional form of the modified elastic net regularization is as follows:

$$R_2(m_t, m_s) \triangleq \left| \frac{\|m_{ts}\|_2^2}{\|m_s\|_1} - \beta^2 \right|, \qquad R_1(m_t, m_s) \triangleq \left| \frac{\|m_{ts}\|_1}{\|m_s\|_1} - \beta \right|,$$
$$R(m_t, m_s) \triangleq \frac{1 - \sigma(\alpha)}{2}\, R_2(m_t, m_s) + \sigma(\alpha)\, R_1(m_t, m_s).$$

The above formulation of $R_2(\cdot)$ and $R_1(\cdot)$ is motivated as follows. First, we normalize both norms by dividing by the effective input dimension $\|m_s\|_1$, which makes the terms invariant to the (effective) input size. Second, we define $R_2$ and $R_1$ as absolute errors, which encourages each entry to be, on average, approximately equal to the threshold. The reason is that the vector $m_t$ passes through a binary threshold, and thus the exact values of its entries are irrelevant; what is relevant is whether these values are within the epsilon neighborhood of zero or not. Thus, when the values are roughly equal to the threshold, training is more likely to converge to a balanced point where the regularization term is low and the relevant features are selected. The threshold term is controlled by $\beta$ (a hyperparameter), which controls the cardinality of $m_t$; smaller values of $\beta$ lead to a sparser $m_t$.
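The binarized mask and the modified elastic-net term can be sketched as below; the straight-through backward pass uses the derivative of the tanh proxy, and this is an illustration under the paper's definitions rather than its released code.

```python
import torch

class STESign(torch.autograd.Function):
    """Phi(x): sign(x) in the forward pass, tanh(x) as the backward proxy."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)
    @staticmethod
    def backward(ctx, g):
        (x,) = ctx.saved_tensors
        return g * (1 - torch.tanh(x) ** 2)       # gradient of the tanh proxy

def binary_gate(m_t, eps=1.0):
    # T(x) = 0.5 * Phi(|x| - eps) + 0.5: zero inside the eps-neighborhood of 0
    return 0.5 * STESign.apply(m_t.abs() - eps) + 0.5

def selection_reg(m_t, m_s, alpha, beta):
    m_ts = m_t * m_s                              # only randomly kept features
    n = m_s.sum()                                 # effective input dimension
    r2 = ((m_ts ** 2).sum() / n - beta ** 2).abs()
    r1 = (m_ts.abs().sum() / n - beta).abs()
    a = torch.sigmoid(alpha)                      # trainable L1/L2 balance
    return (1 - a) / 2 * r2 + a * r1              # R(m_t, m_s)
```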
To summarize, feature selection is manifested in both the architecture and the loss: the architecture relies on the masks $m_t$ and $m_s$, while the loss function uses $R(m_t, m_s)$. Finally, the functional form of a DNNF block with the feature selection component is obtained by plugging the masks into Equation (2), with

$$L(x) \triangleq \tanh\left(x^\top \mathrm{diag}(T(m_t))\, \mathrm{diag}(m_s)\, W + b\right) \in \mathbb{R}^m.$$

Additionally, the mean of $R(m_t, m_s)$ over all DNNFs is added to the loss function as a regularizer.
The authors propose an end-to-end deep learning model called Net-DNF to handle tabular data. The architecture of Net-DNF has four layers: the first layer is a dense layer (learnable weights) with tanh activation (Eq. 1); the second layer (DNNF) is formed by binary conjunctions over literals (Eq. 2); the third layer is an embedding formed by n DNNF blocks (Eq. 3); the last layer is a linear transformation of the embedding with a sigmoid activation (Eq. 4). The authors also propose a feature selection method based on a trainable binarized selection with a modified L1 and L2 regularization. In the experimental analysis, Net-DNF outperforms fully connected networks.
SP:ee9764a48b109b9860c0a6f657a6cdd819237e7e
Decorrelated Double Q-learning
1 INTRODUCTION

Q-learning (Watkins & Dayan, 1992), as a model-free reinforcement learning approach, has gained popularity, especially with the advance of deep neural networks (Mnih et al., 2013). In general, it combines neural network approximators with actor-critic architectures (Witten, 1977; Konda & Tsitsiklis, 1999), which have an actor network to control how the agent behaves and a critic to evaluate how good the taken action is. The Deep Q-Network (DQN) algorithm (Mnih et al., 2013) was the first to apply a deep neural network to approximate the action-value function in Q-learning, and showed remarkably good and stable results by introducing a target network and an experience replay buffer to stabilize training. Lillicrap et al. (2015) propose DDPG, which extends Q-learning to handle continuous action spaces with target networks. Besides training stability, another issue Q-learning suffers from is overestimation bias, first investigated in Thrun & Schwartz (1993). Because of noise in the function approximation, the maximum operator in Q-learning can lead to overestimation of state-action values. The overestimation property is also observed in deterministic continuous policy control (Silver & Lever, 2014). In particular, with imprecise function approximation, the maximization of a noisy value induces overestimation in the action-value function. This inaccuracy can become even worse (e.g., through error accumulation) under temporal-difference learning (Sutton & Barto, 1998), in which bootstrapping updates the value function using the estimate of a subsequent state. Given the overestimation bias caused by the maximum operator over noisy estimates, many methods have been proposed to address this issue. Double Q-learning (van Hasselt, 2010) mitigates the overestimation effect by introducing two independent critics to estimate the maximum value of a set of stochastic values. Averaged-DQN (Anschel et al., 2017) takes the average of previously learned Q-value estimates, which results in a more stable training procedure and reduces the approximation error variance in the target values. Recently, Twin Delayed Deep Deterministic Policy Gradients (TD3) (Fujimoto et al., 2018) extended Double Q-learning by using the minimum of two critics to limit overestimation bias in actor-critic networks. A soft Q-learning algorithm (Haarnoja et al., 2018), called soft actor-critic, leverages a similar strategy to TD3, while including a maximum-entropy term to balance exploration and exploitation. Maxmin Q-learning (Lan et al., 2020) proposes an ensembling scheme to handle overestimation bias in Q-learning. This work suggests an alternative solution to the overestimation phenomenon, called decorrelated double Q-learning, based on reducing the noise in the Q-value estimates. On the one hand, we want to make the two value-function approximators as independent as possible to mitigate overestimation bias. On the other hand, we should reduce the variance caused by imprecise estimates. Our decorrelated double Q-learning proposes an objective function that minimizes the correlation of the two critics while reducing the target approximation error variance with control variate methods. Finally, we provide experimental results on MuJoCo games and show significant improvement compared to competitive baselines. The paper is organized as follows.
In Section 2, we introduce the reinforcement learning problem, notation, and two existing Q-learning variants that address overestimation bias. We then present our D2Q algorithm in Section 3 and prove that, in the limit, the algorithm converges to the optimal solution. In Section 4 we show experimental results on MuJoCo continuous control tasks and compare against the current state of the art. Related work and discussion are presented in Section 5, and finally Section 6 concludes the paper.

2 BACKGROUND

In this section, we introduce the reinforcement learning problem and Q-learning, as well as notation used in the following sections.

2.1 PROBLEM SETTING AND NOTATION

We consider the model-free reinforcement learning problem (i.e., an optimal policy exists) with sequential interactions between an agent and its environment (Sutton & Barto, 1998), in order to maximize a cumulative return. At every time step $t$, the agent selects an action $a_t$ in state $s_t$ according to its policy, receives a scalar reward $r_t(s_t, a_t)$, and then transitions to the next state $s_{t+1}$. The problem is modeled as a Markov decision process (MDP) with the tuple $(S, A, p(s_0), p(s_{t+1} \mid s_t, a_t), r(s_t, a_t), \gamma)$. Here, $S$ and $A$ denote the state and action spaces respectively, $p(s_0)$ is the initial state distribution, $p(s_{t+1} \mid s_t, a_t)$ is the probability of transitioning to $s_{t+1}$ given the current state $s_t$ and action $a_t$, $r(s_t, a_t)$ is the reward from the environment after the agent takes action $a_t$ in state $s_t$, and $\gamma$ is the discount factor, which decays future rewards to ensure finite returns. We model the agent's behavior with $\pi_\theta(a \mid s)$, a parametric distribution given by a neural network. Suppose the agent interacts with the environment over a finite-length trajectory $\tau = (s_t, a_t)_{t=0}^{T}$. The expected return under the policy $\pi$ is

$$J(\theta) = \mathbb{E}_{\tau \sim \pi_\theta(\tau)}[r(\tau)] = \mathbb{E}_{\tau \sim \pi_\theta(\tau)}\big[R_0^T\big] = \mathbb{E}_{\tau \sim \pi_\theta(\tau)}\left[\sum_{t=0}^{T} \gamma^t r(s_t, a_t)\right], \quad (1)$$

where $\pi_\theta(\tau)$ denotes the distribution over trajectories,

$$p(\tau) = \pi(s_0, a_0, s_1, \ldots, s_T, a_T) = p(s_0) \prod_{t=0}^{T} \pi_\theta(a_t \mid s_t)\, p(s_{t+1} \mid s_t, a_t). \quad (2)$$

The goal of reinforcement learning is to learn a policy $\pi$ that maximizes the expected return:

$$\theta^* = \arg\max_\theta J(\theta) = \arg\max_\theta \mathbb{E}_{\tau \sim \pi_\theta(\tau)}\big[R_0^T\big]. \quad (3)$$

The action-value function describes the expected return of the agent in state $s$ taking action $a$ under the policy $\pi$. The advantage of the action-value function is that it makes actions explicit, so we can select actions even in a model-free setting. After taking action $a_t$ in state $s_t$ and thereafter following policy $\pi$, the action-value function is

$$Q^\pi(s_t, a_t) = \mathbb{E}_{s_i \sim p_\pi,\, a_i \sim \pi}\left[R_t \mid s_t, a_t\right] = \mathbb{E}_{s_i \sim p_\pi,\, a_i \sim \pi}\left[\sum_{i=t}^{T} \gamma^{i-t} r(s_i, a_i) \,\Big|\, s_t, a_t\right]. \quad (4)$$

To get the optimal value function, we take the maximum over policies, denoted $Q^*(s_t, a_t) = \max_\pi Q^\pi(s_t, a_t)$, and the corresponding optimal policy is easily derived via $\pi^*(s) \in \arg\max_{a_t} Q^*(s_t, a_t)$.

2.2 Q-LEARNING

Q-learning, an off-policy RL algorithm, has been extensively studied since it was proposed (Watkins & Dayan, 1992). Suppose we use a neural network parameterized by $\theta^Q$ to approximate the Q-value in a continuous environment.
To update the Q-value function, we minimize the following loss:

$$L(\theta^Q) = \mathbb{E}_{s_i \sim p_\pi,\, a_i \sim \pi}\left[\big(Q(s_t, a_t; \theta^Q) - y_t\big)^2\right], \quad (5)$$

where $y_t = r(s_t, a_t) + \gamma \max_{a_{t+1}} Q(s_{t+1}, a_{t+1}; \theta^Q)$ comes from the Bellman equation, and the action $a_{t+1}$ is taken from a frozen policy network (actor) to stabilize learning. In actor-critic methods, the policy $\pi: S \mapsto A$, known as the actor with parameters $\theta^\pi$, can be updated through the chain rule as in the deterministic policy gradient algorithm (Silver & Lever, 2014):

$$\nabla J(\theta^\pi) = \mathbb{E}_{s \sim p_\pi}\left[\nabla_a Q(s, a; \theta^Q)\big|_{a = \pi(s; \theta^\pi)}\, \nabla_{\theta^\pi} \pi(s; \theta^\pi)\right], \quad (6)$$

where $Q(s, a)$ is the expected return when taking action $a$ in state $s$ and following $\pi$ thereafter. One issue that has attracted great attention is overestimation bias, which, if left unchecked, may compound into a more significant bias over subsequent updates. Moreover, an inaccurate value estimate may lead to poor policy updates. To address this, Double Q-learning (van Hasselt, 2010) uses two independent critics $q_1(s_t, a_t)$ and $q_2(s_t, a_t)$, where action selection uses a different critic network than value estimation:

$$q_1(s_t, a_t) = r(s_t, a_t) + \gamma\, q_2\big(s_{t+1}, \arg\max_{a_{t+1}} q_1(s_{t+1}, a_{t+1}; \theta^{q_1}); \theta^{q_2}\big),$$
$$q_2(s_t, a_t) = r(s_t, a_t) + \gamma\, q_1\big(s_{t+1}, \arg\max_{a_{t+1}} q_2(s_{t+1}, a_{t+1}; \theta^{q_2}); \theta^{q_1}\big).$$

Recently, TD3 (Fujimoto et al., 2018) uses two similar Q-value functions, but takes the minimum of the two:

$$y_t = r(s_t, a_t) + \gamma \min\big(q_1(s_{t+1}, \pi(s_{t+1})),\, q_2(s_{t+1}, \pi(s_{t+1}))\big). \quad (7)$$

The same squared loss as in Eq. (5) can then be used to learn the model parameters.

3 DECORRELATED DOUBLE Q-LEARNING

In this section, we present Decorrelated Double Q-learning (D2Q) for continuous action control, in an attempt to address overestimation bias. Similar to Double Q-learning, we use two Q-value functions to approximate $Q(s_t, a_t)$. Our main contribution is to borrow the idea of control variates to decorrelate these two value functions, which can further reduce the overestimation risk.

3.1 Q-VALUE FUNCTION

Suppose we have two approximators $q_1(s_t, a_t)$ and $q_2(s_t, a_t)$. D2Q uses a weighted difference of the two Q-value functions to approximate the action-value function at $(s_t, a_t)$. Thus, we define the Q-value as

$$Q(s_t, a_t) = q_1(s_t, a_t) - \beta\big(q_2(s_t, a_t) - \mathbb{E}(q_2(s_t, a_t))\big), \quad (8)$$

where $q_2(s_t, a_t) - \mathbb{E}(q_2(s_t, a_t))$ models the noise in state $s_t$ and action $a_t$, and $\beta$ is the correlation coefficient of $q_1(s_t, a_t)$ and $q_2(s_t, a_t)$. The expectation $\mathbb{E}(q_2(s_t, a_t))$ is the average over all possible runs. The weighted difference between $q_1(s_t, a_t)$ and $q_2(s_t, a_t)$ thus attempts to reduce the variance and remove noise effects in Q-learning. To update $q_1$ and $q_2$, we minimize the following loss:

$$L(\theta^Q) = \mathbb{E}_{s_i \sim p_\pi,\, a_i \sim \pi}\left[\big(q_1(s_t, a_t; \theta^{q_1}) - y_t\big)^2\right] + \mathbb{E}_{s_i \sim p_\pi,\, a_i \sim \pi}\left[\big(q_2(s_t, a_t; \theta^{q_2}) - y_t\big)^2\right] + \lambda\, \mathbb{E}_{s_i \sim p_\pi,\, a_i \sim \pi}\left[\mathrm{corr}\big(q_1(s_t, a_t; \theta^{q_1}),\, q_2(s_t, a_t; \theta^{q_2})\big)\right]^2, \quad (9)$$

where $\theta^Q = \{\theta^{q_1}, \theta^{q_2}\}$, and $y_t$ is defined as

$$y_t = r(s_t, a_t) + \gamma\, Q(s_{t+1}, a_{t+1}), \quad (10)$$

where $Q(s_{t+1}, a_{t+1})$ is the action-value function defined in Eq. (8), used to decorrelate $q_1(s_{t+1}, a_{t+1})$ and $q_2(s_{t+1}, a_{t+1})$, both of which come from frozen target networks. In addition, we want these two Q-value functions to be as independent as possible. Thus, we introduce $\mathrm{corr}(q_1(s_t, a_t; \theta^{q_1}), q_2(s_t, a_t; \theta^{q_2}))$, which measures the similarity between the two Q-value approximators.
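The critic update can be sketched as follows; approximating $\mathbb{E}(q_2)$ with a minibatch mean and $\mathrm{corr}(\cdot,\cdot)$ with a minibatch Pearson correlation are our assumptions for illustration (the paper defines the expectation as an average over runs).

```python
import torch
import torch.nn.functional as F

def d2q_critic_loss(q1, q2, q1_tgt, q2_tgt, r, gamma, beta, lam):
    """q1, q2: critics at (s_t, a_t); q1_tgt, q2_tgt: frozen target critics
    evaluated at (s_{t+1}, a_{t+1}); all tensors have shape [B]."""
    noise = q2_tgt - q2_tgt.mean()          # q2 - E[q2], the control variate
    q_next = q1_tgt - beta * noise          # Eq. (8)
    y = (r + gamma * q_next).detach()       # Eq. (10) target; the stabilized
    # variant described next uses r + gamma * min(q_next, q2_tgt) instead
    c1, c2 = q1 - q1.mean(), q2 - q2.mean()
    corr = (c1 * c2).mean() / (c1.std() * c2.std() + 1e-8)
    return F.mse_loss(q1, y) + F.mse_loss(q2, y) + lam * corr ** 2   # Eq. (9)
```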
In our experiments, the method using Eq. (10) obtains good results on HalfCheetah but does not perform well on the other MuJoCo tasks. To stabilize the target value, we take the minimum of $Q(s_{t+1}, a_{t+1})$ and $q_2(s_{t+1}, a_{t+1})$ in Eq. (10), as in TD3 (Fujimoto et al., 2018). This gives the target update of the D2Q algorithm:

$$y_t = r(s_t, a_t) + \gamma \min\big(Q(s_{t+1}, a_{t+1}),\, q_2(s_{t+1}, a_{t+1})\big). \quad (11)$$

The action $a_{t+1}$ comes from the policy, $a_{t+1} = \pi(s_{t+1}; \theta^\pi)$, which takes a policy gradient step similar to Eq. (6). Our D2Q follows the parametric actor-critic paradigm, maintaining two Q-value approximators and a single actor. The loss in Eq. (9) thus drives the three terms below:

$$\mathrm{corr}\big(q_1(s_t, a_t; \theta^{q_1}),\, q_2(s_t, a_t; \theta^{q_2})\big) \to 0, \qquad q_1(s_t, a_t; \theta^{q_1}) \to y_t, \qquad q_2(s_t, a_t; \theta^{q_2}) \to y_t.$$

At each time step, we update the pair of critics towards the minimum target value in Eq. (11), while reducing the correlation between them. The purposes of introducing the control variate $q_2(s_t, a_t)$ are as follows. (1) Since we use $q_2(s_t, a_t) - \mathbb{E}(q_2(s_t, a_t))$ to model noise, if there is no noise, i.e., $q_2(s_t, a_t) - \mathbb{E}(q_2(s_t, a_t)) = 0$, then via Eq. (11) we have $y_t = r(s_t, a_t) + \gamma \min\big(q_1(s_{t+1}, a_{t+1}),\, q_2(s_{t+1}, a_{t+1})\big)$, which is exactly the same as TD3. (2) In fact, because of noise in the value estimate, $q_2(s_t, a_t) - \mathbb{E}(q_2(s_t, a_t)) \neq 0$. The purpose of introducing $q_2(s_t, a_t)$ is to mitigate overestimation bias in Q-learning; the control variate it introduces reduces the variance of $Q(s_t, a_t)$, stabilizing the learning of the value function.

Convergence analysis: We claim that our D2Q algorithm converges to the optimum in the finite MDP setting. We rely on an existing theorem from Jaakkola et al. (1994): given a random process $\{\Delta_t\}$ taking values in $\mathbb{R}^n$ and defined as

$$\Delta_{t+1}(s_t, a_t) = \big(1 - \alpha_t(s_t, a_t)\big)\, \Delta_t(s_t, a_t) + \alpha_t(s_t, a_t)\, F_t(s_t, a_t), \quad (12)$$

$\Delta_t$ converges to zero with probability 1 under the following assumptions: 1. $0 < \alpha_t < 1$, $\sum_t \alpha_t(x) = \infty$ and $\sum_t \alpha_t^2(x) < \infty$; 2. $\|\mathbb{E}[F_t(x) \mid \mathcal{F}_t]\|_W \le \gamma \|\Delta_t\|_W + c_t$, with $0 < \gamma < 1$ and $c_t \to 0$ with probability 1; 3. $\mathrm{var}[F_t(x) \mid \mathcal{F}_t] \le C(1 + \|\Delta_t\|_W^2)$ for some $C > 0$; where $\mathcal{F}_t$ is a sequence of increasing $\sigma$-fields such that $\alpha_t(s_t, a_t)$ and $\Delta_t$ are $\mathcal{F}_t$-measurable for $t = 1, 2, \ldots$. Based on this theorem, we provide a sketch of the proof, which borrows heavily from the convergence proofs of Double Q-learning and TD3. First, the learning rate $\alpha_t$ satisfies condition 1. Second, the variance of $r(s_t, a_t)$ is bounded, so condition 3 holds. Finally, we show that condition 2 holds. We have

$$\Delta_{t+1}(s_t, a_t) = \big(1 - \alpha_t(s_t, a_t)\big)\big(Q(s_t, a_t) - Q^*(s_t, a_t)\big) + \alpha_t(s_t, a_t)\big(r_t + \gamma \min(Q(s_t, a_t), q_2(s_t, a_t)) - Q^*(s_t, a_t)\big) = \big(1 - \alpha_t(s_t, a_t)\big)\, \Delta_t(s_t, a_t) + \alpha_t(s_t, a_t)\, F_t(s_t, a_t), \quad (13)$$

where $F_t(s_t, a_t)$ is defined as

$$F_t(s_t, a_t) = r_t + \gamma \min\big(Q(s_t, a_t), q_2(s_t, a_t)\big) - Q^*(s_t, a_t) = r_t + \gamma Q(s_t, a_t) - Q^*(s_t, a_t) + \gamma \min\big(Q(s_t, a_t), q_2(s_t, a_t)\big) - \gamma Q(s_t, a_t) = F_t^Q(s_t, a_t) + c_t. \quad (14)$$

Since $\mathbb{E}[F_t^Q(s_t, a_t) \mid \mathcal{F}_t] \le \gamma \|\Delta_t\|$ under Q-learning, condition 2 holds. It remains to prove that $c_t = \gamma\big(\min(Q(s_t, a_t), q_2(s_t, a_t)) - Q(s_t, a_t)\big)$ converges to 0 with probability 1, for which it suffices to show that $\min(Q(s_t, a_t), q_2(s_t, a_t)) - Q(s_t, a_t)$ converges to 0 with probability 1.
We have

$$\min\big(Q(s_t, a_t), q_2(s_t, a_t)\big) - Q(s_t, a_t) = \min\big(Q(s_t, a_t), q_2(s_t, a_t)\big) - q_2(s_t, a_t) + q_2(s_t, a_t) - Q(s_t, a_t) = \min\big(Q(s_t, a_t) - q_2(s_t, a_t),\, 0\big) - \big(Q(s_t, a_t) - q_2(s_t, a_t)\big) = \min\big(q_1(s_t, a_t) - q_2(s_t, a_t) - \beta(q_2(s_t, a_t) - \mathbb{E}(q_2(s_t, a_t))),\, 0\big) - \big(q_1(s_t, a_t) - q_2(s_t, a_t) - \beta(q_2(s_t, a_t) - \mathbb{E}(q_2(s_t, a_t)))\big). \quad (15)$$

Suppose there exist very small $\delta_1$ and $\delta_2$ such that $|q_1(s_t, a_t) - q_2(s_t, a_t)| \le \delta_1$ and $|q_2(s_t, a_t) - \mathbb{E}(q_2(s_t, a_t))| \le \delta_2$. Then we have

$$\big|\min\big(Q(s_t, a_t), q_2(s_t, a_t)\big) - Q(s_t, a_t)\big| \le 2\big(|q_1(s_t, a_t) - q_2(s_t, a_t)| + \beta\,|q_2(s_t, a_t) - \mathbb{E}(q_2(s_t, a_t))|\big) \le 2(\delta_1 + \beta\delta_2) < 4\delta, \quad (16)$$

where $\delta = \max(\delta_1, \delta_2)$. Note that such a $\delta_1$ with $|q_1(s_t, a_t) - q_2(s_t, a_t)| \le \delta_1$ exists because $\Delta_t(q_1, q_2) = |q_1(s_t, a_t) - q_2(s_t, a_t)|$ converges to zero. Indeed, according to Eq. (9), both $q_1(s_t, a_t)$ and $q_2(s_t, a_t)$ are updated as

$$q_{t+1}(s_t, a_t) = q_t(s_t, a_t) + \alpha_t(s_t, a_t)\big(y_t - q_t(s_t, a_t)\big). \quad (17)$$

Then we have $\Delta_{t+1}(q_1, q_2) = \Delta_t(q_1, q_2) - \alpha_t(s_t, a_t)\, \Delta_t(q_1, q_2) = (1 - \alpha_t(s_t, a_t))\, \Delta_t(q_1, q_2)$, which converges to 0 since the learning rate satisfies $0 < \alpha_t(s_t, a_t) < 1$.
The paper suggests an improvement over double Q-learning obtained by applying the control variates technique to the target Q, in the form $q_1 - \beta(q_2 - E(q_2))$ (Eq. (8)). To minimize the variance, it suggests minimizing the correlation between $q_1$ and $q_2$. In addition, it applies the TD3 trick. The resulting algorithm, D2Q, outperforms DDPG and competes with TD3.
SP:9962a592fe8663bbcfe752b83aa9b666fe3a9456
Linking average- and worst-case perturbation robustness via class selectivity and dimensionality
1 INTRODUCTION

Methods for understanding deep neural networks (DNNs) often attempt to find individual neurons or small sets of neurons that are representative of a network's decision (Erhan et al., 2009; Zeiler and Fergus, 2014; Karpathy et al., 2016; Amjad et al., 2018; Lillian et al., 2018; Dhamdhere et al., 2019; Olah et al., 2020). Selectivity in individual units (i.e., variability in a neuron's activations across semantically-relevant data features) has been of particular interest to researchers trying to better understand deep neural networks (DNNs) (Zhou et al., 2015; Olah et al., 2017; Morcos et al., 2018; Zhou et al., 2018; Meyes et al., 2019; Na et al., 2019; Zhou et al., 2019; Rafegas et al., 2019; Bau et al., 2020; Leavitt and Morcos, 2020). However, recent work has shown that selective neurons can be irrelevant, or even detrimental, to network performance, emphasizing the importance of examining distributed representations for understanding DNNs (Morcos et al., 2018; Donnelly and Roegiest, 2019; Dalvi et al., 2019b; Leavitt and Morcos, 2020). In parallel, work on robustness seeks to build models that are robust to perturbed inputs (Szegedy et al., 2013; Carlini and Wagner, 2017a;b; Vasiljevic et al., 2016; Kurakin et al., 2017; Gilmer et al., 2018; Zheng et al., 2016). Hendrycks and Dietterich (2019) distinguish between two types of robustness: corruption robustness, which measures a classifier's performance on low-quality or naturalistically-perturbed inputs (and thus is an "average-case" measure), and adversarial robustness, which measures a classifier's performance on small, additive perturbations that are tailored to the classifier (and thus is a "worst-case" measure).¹ Research on robustness has predominantly focused on worst-case perturbations, which are affected by weight and activation sparsity (Madry et al., 2018; Balda et al., 2020; Ye et al., 2018; Guo et al., 2018; Dhillon et al., 2018) and representational dimensionality (Langeberg et al., 2019; Sanyal et al., 2020; Nayebi and Ganguli, 2017). But less is known about the mechanisms underlying average-case perturbation robustness and the factors it shares with worst-case robustness. Some techniques for improving worst-case robustness also improve average-case robustness (Hendrycks and Dietterich, 2019; Ford et al., 2019; Yin et al., 2019); thus it is possible that sparsity and representational dimensionality also contribute to average-case robustness. Selectivity in individual units can also be thought of as a measure of the sparsity with which semantic information is represented.² And because class selectivity regularization provides a method for controlling selectivity, and class selectivity regularization has been shown to improve test accuracy on unperturbed data (Leavitt and Morcos, 2020), we sought to investigate whether it could be utilized to improve perturbation robustness and elucidate the factors underlying it. In this work we pursue a series of experiments investigating the causal role of selectivity in robustness to worst-case and average-case perturbations in DNNs. To do so, we used a recently-developed class selectivity regularizer (Leavitt and Morcos, 2020) to directly modify the amount of class selectivity learned by DNNs, and examined how this affected the DNNs' robustness to worst-case and average-case perturbations.

¹We use the terms "worst-case perturbation" and "average-case perturbation" instead of "adversarial attack" and "corruption", respectively, because this usage is more general and dispenses with the implied categorical distinction of using seemingly-unrelated terms. Also note that while Hendrycks and Dietterich (2019) assign specific and distinct meanings to "perturbation" and "corruption", we use the term "perturbation" more generally to refer to any change to an input.

²Class information is semantic. And because class selectivity measures the degree to which class information is represented in individual neurons, it can be considered a form of sparsity. For example, if a network has high test accuracy on a classification task, it is necessarily representing class (semantic) information. But if the mean class selectivity across units is low, then individual units do not contain much class information, and the class information must be distributed across units; the semantic representation in this case is not sparse but distributed.
Our findings are as follows:

• Networks regularized to have lower levels of class selectivity are more robust to average-case perturbations, while networks with higher class selectivity are generally less robust to average-case perturbations, as measured in ResNets using the Tiny ImageNetC and CIFAR10C datasets. The corruption robustness imparted by regularizing against class selectivity was consistent across nearly all tested corruptions.

• In contrast to its impact on average-case perturbations, decreasing class selectivity reduces robustness to worst-case perturbations in both tested models, as assessed using gradient-based white-box attacks.

• The variability of the input-unit gradient across samples and units is proportional to a network's overall class selectivity, indicating that high variability in perturbability within and across units may facilitate worst-case perturbation robustness.

• The dimensionality of activation changes caused by corruption markedly increases in early layers for both perturbation types, but is larger for worst-case perturbations and low-selectivity networks. This implies that representational dimensionality may present a trade-off between worst-case and average-case perturbation robustness.

Our results demonstrate that changing class selectivity, and hence the sparsity of semantic representations, can confer robustness to average-case or worst-case perturbations, but not both simultaneously. They also highlight the roles of input-unit gradient variability and representational dimensionality in mediating this trade-off.

2 RELATED WORK

2.1 PERTURBATION ROBUSTNESS

The most commonly studied form of robustness in DNNs is robustness to adversarial attacks, in which an input is perturbed in a manner that maximizes the change in the network's output while attempting to minimize, or keep below some threshold, the magnitude of the change to the input (Serban et al., 2019; Warde-Farley and Goodfellow, 2017). Because white-box adversarial attacks are optimized to best confuse a given network, robustness to adversarial attacks is a "worst-case" measure of robustness. Two factors that have been proposed to account for DNN robustness to worst-case perturbations are particularly relevant to the present study: sparsity and dimensionality. Multiple studies have linked activation and weight sparsity with robustness to worst-case perturbations.
Adversarial training improves worst-case robustness (Goodfellow et al., 2015; Huang et al., 2016) and results in sparser weight matrices (Madry et al., 2018; Balda et al., 2020). Methods for increasing the sparsity of weight matrices (Ye et al., 2018; Guo et al., 2018) and activations (Dhillon et al., 2018) likewise improve worst-case robustness, indicating that the weight sparsity caused by worst-case perturbation training is not simply a side-effect. Researchers have also attempted to understand the nature of worst-case robustness from a perspective complementary to that of sparsity: dimensionality. Like sparsity, worst-case perturbation training reduces the rank of weight matrices and representations, and regularizing weight matrices and representations to be low-rank can improve worst-case perturbation robustness (Langeberg et al., 2019; Sanyal et al., 2020; Nayebi and Ganguli, 2017). Taken together, these studies support the notion that networks with low-dimensional representations are more robust to worst-case perturbations. Comparatively less research has been conducted to understand the factors underlying average-case robustness. Certain techniques for improving worst-case perturbation robustness also help against average-case perturbations (Hendrycks and Dietterich, 2019; Geirhos et al., 2018; Ford et al., 2019). Examining the frequency domain has elucidated one mechanism: worst-case perturbations for "baseline" models tend to lie in the high-frequency domain, and improvements in average-case robustness resulting from worst-case robustness training are at least partially ascribable to models becoming less reliant on high-frequency information (Yin et al., 2019; Tsuzuku and Sato, 2019; Geirhos et al., 2018). But it remains unknown whether other factors such as sparsity and dimensionality link these two forms of robustness.

2.2 CLASS SELECTIVITY

One technique that has been of particular interest to researchers trying to better understand deep (and biological) neural networks is examining the selectivity of individual units (Zhou et al., 2015; Olah et al., 2017; Morcos et al., 2018; Zhou et al., 2018; Meyes et al., 2019; Na et al., 2019; Zhou et al., 2019; Rafegas et al., 2019; Bau et al., 2020; Leavitt and Morcos, 2020; Sherrington, 1906; Kandel et al., 2000). Evidence regarding the importance of selectivity has mostly relied on single-unit ablation and has been equivocal (Radford et al., 2017; Morcos et al., 2018; Amjad et al., 2018; Zhou et al., 2018; Donnelly and Roegiest, 2019; Dalvi et al., 2019a). However, Leavitt and Morcos (2020) examined the role of single-unit selectivity in network performance by regularizing for or against class selectivity in the loss function, which sidesteps the limitations of single-unit ablation and correlative approaches and allowed them to investigate the causal effect of class selectivity. They found that reducing class selectivity has little negative impact on, and can even improve, test accuracy in CNNs trained on image recognition tasks, but that increasing class selectivity has significant negative effects on test accuracy. However, their study focused on examining the effects of class selectivity on test accuracy for unperturbed (clean) inputs.
Thus it remains unknown how class selectivity affects robustness to perturbed inputs, and whether class selectivity can serve as, or elucidate, a link between worst-case and average-case robustness.

3 APPROACH

A detailed description of our approach is provided in Appendix A.1.

Models and training protocols. Our experiments were performed on ResNet18 and ResNet50 (He et al., 2016) trained on Tiny ImageNet (Fei-Fei et al., 2015), and ResNet20 (He et al., 2016) trained on CIFAR10 (Krizhevsky, 2009). For space, we focus primarily on the results for ResNet18 trained on Tiny ImageNet in the main text, though results were qualitatively similar for ResNet50 and for ResNet20 trained on CIFAR10. Experimental results were obtained with model parameters from the epoch that achieved the highest validation-set accuracy over the training epochs, and 20 replicate models (ResNet18 and ResNet20) or 5 replicate models (ResNet50) with different random seeds were run for each hyperparameter set.

Class selectivity index. Following Leavitt and Morcos (2020): at every ReLU, the activation in response to a single sample was averaged across all elements of the filter map (which we refer to as a "unit"). The class-conditional mean activation was then calculated across all samples in the clean test set, and the class selectivity index (SI) was calculated as follows:

$$SI = \frac{\mu_{\max} - \mu_{-\max}}{\mu_{\max} + \mu_{-\max}}, \quad (1)$$

where $\mu_{\max}$ is the largest class-conditional mean activation and $\mu_{-\max}$ is the mean response to the remaining (i.e., non-$\mu_{\max}$) classes. The selectivity index ranges from 0 to 1. A unit with identical average activity for all classes would have a selectivity of 0, and a unit that only responds to a single class would have a selectivity of 1. As Morcos et al. (2018) note, the selectivity index is not a perfect measure of information content in single units. For example, a unit with a little bit of information about many classes would have a low selectivity index. However, it identifies units that are class-selective similarly to prior studies (Zhou et al., 2018). Most importantly, it is differentiable with respect to the model parameters.

Class selectivity regularization. We used Leavitt and Morcos (2020)'s class selectivity regularizer to control the level of class selectivity learned by units in a network during training. Class selectivity regularization is achieved by minimizing the following loss function during training:

$$\mathrm{loss} = -\sum_{c}^{C} y_c \cdot \log(\hat{y}_c) - \alpha\, \mu_{SI}. \quad (2)$$

The left-hand term in the loss function is the standard classification cross-entropy, where $c$ is the class index, $C$ is the number of classes, $y_c$ is the true class label, and $\hat{y}_c$ is the predicted class probability. The right-hand component of the loss function, $-\alpha \mu_{SI}$, is the class selectivity regularizer. The regularizer consists of two terms: the selectivity term,

$$\mu_{SI} = \frac{1}{L} \sum_{l}^{L} \frac{1}{U} \sum_{u}^{U} SI_{u,l}, \quad (3)$$

where $l$ is a convolutional layer, $L$ is the number of layers, $u$ is a unit, $U$ is the number of units in a given layer, and $SI_u$ is the class selectivity index of unit $u$. The selectivity term of the regularizer is obtained by computing the selectivity index for each unit in a layer, then computing the mean selectivity index across units within each layer, and then computing the mean selectivity index across layers. Computing the mean within layers before computing the mean across layers (as compared to computing the mean across all units in the network) mitigates the biases induced by the larger numbers of units in deeper layers. The other term in the regularizer is $\alpha$, the regularization scale, which determines whether class selectivity is promoted or discouraged: negative values of $\alpha$ discourage class selectivity in individual units, and positive values encourage it. The magnitude of $\alpha$ controls the contribution of the selectivity term to the overall loss.
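The selectivity index and the regularized loss follow directly from Eqs. (1)-(3); in the sketch below the tensor shapes are assumptions for illustration, and the code assumes every class appears in the batch of activations.

```python
import torch
import torch.nn.functional as F

def selectivity_index(acts, labels, num_classes, eps=1e-7):
    """Eq. (1) per unit. acts: [N, U] activations, filter maps pre-averaged."""
    class_means = torch.stack([acts[labels == c].mean(0)
                               for c in range(num_classes)])      # [C, U]
    mu_max, _ = class_means.max(0)
    mu_rest = (class_means.sum(0) - mu_max) / (num_classes - 1)   # non-max mean
    return (mu_max - mu_rest) / (mu_max + mu_rest + eps)

def regularized_loss(logits, labels, layer_acts, num_classes, alpha):
    """Eq. (2): cross-entropy minus alpha times mu_SI of Eq. (3), where the
    mean is taken within each layer before averaging across layers."""
    mu_si = torch.stack([selectivity_index(a, labels, num_classes).mean()
                         for a in layer_acts]).mean()
    return F.cross_entropy(logits, labels) - alpha * mu_si
```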
During training, the class selectivity index was computed for each minibatch. The final (logit) layer was not subject to selectivity regularization, nor included in our analyses, because by definition the logit layer must be class selective in a classification task.

Measuring average-case robustness. To evaluate robustness to average-case perturbations, we tested our networks on CIFAR10C and Tiny ImageNetC, two benchmark datasets consisting of the CIFAR10 and Tiny ImageNet data, respectively, to which a set of naturalistic corruptions has been applied (Hendrycks and Dietterich, 2019; examples in Figure A1). We average across all corruption types and severities (see Appendix A.1.2 for details) when reporting corrupted test accuracy.

Measuring worst-case robustness. We tested our models' worst-case (i.e., adversarial) robustness using two methods. The fast gradient sign method (FGSM) (Goodfellow et al., 2015) is a simple attack that computes the gradient of the loss with respect to the input image, then scales the image's pixels (within some bound) in the direction that increases the loss. The second method, projected gradient descent (PGD) (Kurakin et al., 2016; Madry et al., 2018), is an iterated version of FGSM. We used a step size of 0.0001 and an $l_\infty$-norm perturbation budget ($\epsilon$) of 16/255.
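The two attacks can be sketched as follows; the PGD iteration count here is our assumption, since the text specifies only the step size and the budget.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """One-step attack: move each pixel by eps in the loss-increasing direction."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def pgd(model, x, y, eps=16 / 255, step=1e-4, iters=40):
    """Iterated FGSM with projection back onto the l_inf ball of radius eps."""
    x0 = x.clone().detach()
    x_adv = x0.clone()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        grad, = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)
        x_adv = x_adv.detach() + step * grad.sign()
        x_adv = x0 + (x_adv - x0).clamp(-eps, eps)   # stay within the budget
        x_adv = x_adv.clamp(0, 1)                    # stay a valid image
    return x_adv.detach()
```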
Computing the stability of units and layers. To quantify variation in networks' perturbability, we first computed the $l_2$ norm of the input-unit gradient for each unit $u$ in a network. We then computed the mean ($\mu_u$) and standard deviation ($\sigma_u$) of the norm across samples for each unit. The ratio $\sigma_u / \mu_u$ yields the coefficient of variation (Everitt, 2002) for a unit ($CV_u$), a measure of variation in perturbability for individual units. We also quantified the variation across units in a layer by computing the standard deviation of $\mu_u$ across units in a layer $l$, $\sigma(\mu_u) = \sigma_l$, and dividing this by the corresponding mean across units, $\mu(\mu_u) = \mu_l$, to yield the CV across units, $\sigma_l / \mu_l = CV_l$.

[Figure 1: Reducing class selectivity improves average-case robustness. (a) Test accuracy (y-axis) as a function of corruption type (x-axis), class selectivity regularization scale (α; color, from −2.0, low selectivity, through 0, baseline, to 2.0, high selectivity), and corruption severity (ordering along the y-axis). Test accuracy is reduced proportionally to corruption severity, leading to an ordering along the y-axis; corruption severity 1 (least severe) is at the top, corruption severity 5 (most severe) at the bottom. (b) Mean test accuracy across all corruptions and severities (y-axis) as a function of α (x-axis). Results shown are for ResNet18 trained on Tiny ImageNet and tested on Tiny ImageNetC. Error bars = 95% confidence intervals of the mean. See Figure A6 for CIFAR10C results.]
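Given a matrix of per-sample, per-unit gradient norms, both coefficients of variation are one-liners; in the sketch below the helper `unit_activations` is hypothetical and stands in for whatever hook-based code extracts a layer's unit activations.

```python
import torch

def input_unit_gradient_norms(unit_activations, xs):
    """l2 norm of d(unit activation)/d(input) per sample; unit_activations
    maps an input batch to a [B, U] tensor (hypothetical helper)."""
    rows = []
    for x in xs:
        x = x.unsqueeze(0).requires_grad_(True)
        acts = unit_activations(x)[0]                  # [U]
        g = torch.stack([torch.autograd.grad(a, x, retain_graph=True)[0].flatten()
                         for a in acts])               # [U, input_dim]
        rows.append(g.norm(dim=1))
    return torch.stack(rows)                           # [S, U]

def stability_cvs(norms):
    mu_u = norms.mean(0)                 # mean gradient norm per unit
    cv_u = norms.std(0) / mu_u           # CV_u: variation across samples
    cv_l = mu_u.std() / mu_u.mean()      # CV_l: variation across units in a layer
    return cv_u, cv_l
```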
This work empirically studies the relationship between robustness and class selectivity, a measure of neuron variability between classes. Robustness to both adversarial ("worst-case") perturbations and corruptions ("average-case") is considered. This work builds on the recent work of Leavitt and Morcos (2020) (currently in review at ICLR 2021), who claim empirical evidence that class selectivity may be harmful for generalization. The experiments in this paper examine the robustness (in both senses) of networks explicitly regularized for class selectivity. The main empirical claims are that (1) class selectivity is negatively correlated with robustness to corruptions and (2) class selectivity is positively correlated with robustness to adversarial perturbations.
SP:73f0f92f476990989fa8339f789a77fadb5c1e26
Isotropy in the Contextual Embedding Space: Clusters and Manifolds
1 INTRODUCTION

The polysemous English word "bank" has two common senses: 1. the money sense, a place where people save or borrow money; 2. the river sense, a slope of earth that prevents flooding. In modern usage the two senses are very different from one another, though interestingly, both share similar etymologies (and both can be traced back to the same word in Proto-Germanic). In a static embedding, multiple instances of the same word (e.g., "bank") are represented using the same vector. By contrast, a contextual embedding assigns different vectors to different instances of the same word, depending on the context. Historically, static embedding models like Word2vec (Mikolov et al., 2013b) and GloVe (Pennington et al., 2014) predated contextual embedding models such as ELMo (Peters et al., 2018), GPT (Radford et al., 2018), BERT (Devlin et al., 2018), and ERNIE (Sun et al., 2019). Much of the literature on language modeling has recently moved to contextual embeddings, largely because of their superior performance on downstream tasks.

1.1 RELATED WORK

Static embeddings are often found to be easier to interpret. For example, the Word2Vec and GloVe papers discuss adding and subtracting vectors, such as: vec(king) - vec(man) + vec(woman) = vec(queen). Inspired by this relationship, researchers started to explore the geometric properties of static embedding spaces. For example, Mu & Viswanath (2018) proposed a very counter-intuitive method that removes the top principal components (the dominating directions in the transformed embedding space), which surprisingly improved the word representations. Rather than completely discarding the principal components, Liu et al. (2019) proposed a technique called Conceptor Negation to softly suppress transformed dimensions with larger variances. Both approaches, simply removing certain principal components as well as Conceptor Negation, produce significant improvements over the vanilla embeddings obtained by static language models. In Huang et al. (2020), the authors studied how to effectively transform static word embeddings from one language to another. Unfortunately, strong illustrative relationships like the king-queen example above are no longer obvious in a general contextual embedding space. Arguing that syntactic structure does exist in contextual embeddings, Hewitt & Manning (2019) proposed a structural probe to identify the syntax trees buried in the space, and found evidence of implicit syntax trees in BERT and ELMo. The advantage of contextual embeddings over their static counterparts comes mainly from their capability to assign different vectors to the same word, depending on the word sense in context. Researchers in (Reif et al., 2019) found such a geometric representation of word senses in the BERT model. These papers reveal the existence of linguistic features embedded implicitly in contextual vector spaces. The geometric properties of the contextual embedding space have also been investigated and compared with those of the static embedding space. Mimno & Thompson (2017) found anisotropy when negative sampling is used. In (Ethayarajh, 2019), the authors characterize how vectors are distributed in the contextual space. They found that most vectors occupy a relatively narrow cone in the space, and pairs of vectors within this cone have large cosines.
This phenomenon can be found in most state-of-the-art contextual embedding models. In (Gao et al., 2019), the authors named this phenomenon "representation degeneration" and attempted to mitigate the problem by introducing a regularization term that minimizes cosine similarities between vectors. In very recent work, Demeter et al. (2020) suggest there is a structural weakness in the space that leads to bias when using softmax, as is common with deep language models.

1.2 MOTIVATION AND CONTRIBUTIONS

Isotropy often makes a space more effectively utilized and more robust to perturbations (no extreme directions that lead to a high condition number). It is counter-intuitive, and not clear, why contextual embedding models perform remarkably well on many tasks given that their anisotropic embeddings bring all the vectors close together, making them hard to distinguish from one another. On the one hand, it is widely believed that contextual embeddings encode the relevant linguistic information (e.g., (Reif et al., 2019)); on the other hand, it is also widely believed that the contextual space is anisotropic and that representations become degenerate (e.g., (Mimno & Thompson, 2017), (Gao et al., 2019), (Ethayarajh, 2019)). This motivates us to find a reasonable understanding that bridges the gap. This paper is similar in spirit to (Mu & Viswanath, 2018), but differs in three aspects. First, we generalize their work on traditional static embeddings to more modern contextual embeddings. Second, we introduce clustering methods to isolate the space, whereas they used PCA to remove dominant dimensions (those that tend to dominate the variance). Finally, we identify low-dimensional manifolds in the space, and introduce an alternative approach (LID) to characterize local subspaces.

Key Contributions: This paper takes a deeper look into the contextual embedding spaces of popular pre-trained models. It identifies the following facts, which were misunderstood or not known before: 1) We find isotropy within clusters in the contextual embedding space, in contrast to previous reports of anisotropy (caused by misleading isolated clusters). We introduce clustering and center shifting to reveal the isotropy, and show more consistent layer-wise behavior across models. 2) We find a Swiss-roll manifold in GPT/GPT2 embeddings, but not in BERT/DistilBERT embeddings. The manifold is related to word frequency, suggesting a difference in how models evolve as they see more data. We use the approximate Local Intrinsic Dimension (LID) to characterize the manifold, and find that contextual embedding models, including the BERT and GPT families and ELMo, often have small LIDs. The small LIDs can be viewed as local anisotropy of the space. The code for this paper can be found at https://github.com/TideDancer/IsotropyContxt.

2 ANALYSIS SETTINGS

2.1 MODELS AND DATASETS

In this paper, we consider popular pre-trained contextual embedding models, including BERT, DistilBERT (Sanh et al., 2019) (denoted D-BERT in the rest of the paper), GPT, GPT2 (Radford et al., 2019), and ELMo. For the BERT and GPT families, we perform our evaluations on the pre-trained uncased base models from Huggingface (https://huggingface.co/transformers/index.html). The pre-trained ELMo model is from AllenNLP (https://docs.allennlp.org/v1.0.0/).
BERT and D-BERT are non-causal models because of their attention mechanism, where tokens can attend to any token in the input, regardless of their relative positions. In contrast, GPT and GPT2 are causal models because attention is limited to the tokens previously seen in the input. Different models achieve contextual embedding in different ways. For instance, BERT adds positional embeddings to the token embeddings, while ELMo performs vector concatenation. Most models start with an initial layer that maps token ids to vectors. This paper is not concerned with that lookup-table layer, and focuses only on the layers after it. The base BERT, GPT, and GPT2 models have 12 layers of interest, indexed from 0 to 11, while D-BERT has 6 layers and ELMo has two. We use the Penn Tree Bank (PTB) (Marcus et al., 1993) and WikiText-2 (Merity et al., 2016) datasets. PTB has 0.88 million words and WikiText-2 has 2 million. Both are standard datasets for language models. In the rest of the paper, we report on PTB, since we see similar results with both datasets. Details of the WikiText-2 analysis can be found in the Appendix. 2.2 NOTATION. For each position in a corpus, we have a word. Words are converted into tokens, using the appropriate tokenizer for the model. Tokenizers may split some words into subwords; therefore, the number of obtained tokens (denoted $n$) can exceed the number of words in the corpus. PTB, for example, contains 0.88 million words, but has $n = 1.2$ million tokens when processed by BERT's tokenizer. Let $V$ be the vocabulary, a set of distinct tokens. We call each element of the vocabulary $V$ a type. For example, BERT has a vocabulary of roughly 30,000 types. We may use "word" and "type" interchangeably for ease of reading. We denote the $i$-th type in $V$ as $t_i$. Let $\Phi(t_i) = \{\phi_1(t_i), \phi_2(t_i), \ldots\}$ be the set of all embedding instances of $t_i$ (note that different contexts in the corpus yield different embeddings of $t_i$). By construction, $\sum_t |\Phi(t)| = n$. We define the inter-type cosine similarity as

$S_{inter} \triangleq \mathbb{E}_{i \neq j}[\cos(\phi(t_i), \phi(t_j))]$   (1)

where $\phi(t_i)$ is one random sample from $\Phi(t_i)$, and likewise $\phi(t_j) \in \Phi(t_j)$. The expectation is taken over all pairs of different types. Similarly, we define the intra-type cosine similarity as

$S_{intra} \triangleq \mathbb{E}_i[\mathbb{E}_{k \neq l}[\cos(\phi_k(t_i), \phi_l(t_i))]]$   (2)

where the inner expectation is over different embeddings $\phi(t_i)$ of the same type $t_i$, and the outer expectation is over all types. Both $S_{inter}$ and $S_{intra}$ take values between $-1$ and $1$. Note that for i.i.d. Gaussian random samples $x, y$, the expected cosine similarity is $\mathbb{E}[\cos(x, y)] = 0$; a cosine value closer to 0 often indicates strong isotropy. Clearly, the inter-type metric describes the similarity between different types, whereas the intra-type one measures the similarity between embedding instances of the same type. Our definitions of $S_{inter}$ and $S_{intra}$ are similar to the measures used in Ethayarajh (2019), but at the corpus level. Note that some types are more frequent than others, especially under a Zipfian distribution (Piantadosi, 2014), and therefore the size of $\Phi(t)$ varies dramatically with the frequency of type $t$. 2.3 AN INITIAL LOOK AT ANISOTROPY. Inspired by Ethayarajh (2019), we follow their procedure and take a first look at the anisotropy identified by Mimno & Thompson (2017) and Ethayarajh (2019) in the contextual embedding space.
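Before presenting those measurements, here is a minimal sketch of how the two metrics in Eqs. (1)-(2) could be estimated once embeddings have been extracted and grouped by type; the function names and the pair-sampling budget are ours, not the paper's.

import random
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def s_inter(phi, n_pairs=10000):
    # phi: dict mapping each type t to a list of its embedding instances
    types = list(phi)
    sims = []
    for _ in range(n_pairs):
        ti, tj = random.sample(types, 2)  # two distinct types
        u = random.choice(phi[ti])        # one random instance of each type
        v = random.choice(phi[tj])
        sims.append(cosine(u, v))
    return float(np.mean(sims))           # Monte Carlo estimate of Eq. (1)

def s_intra(phi, cap=1000):
    per_type = []
    for t, vecs in phi.items():
        if len(vecs) < 2:
            continue
        vecs = vecs[:cap]                 # cap frequent types (see Sec. 2.3)
        # exhaustive over instance pairs; quadratic in cap, fine for a sketch
        sims = [cosine(vecs[k], vecs[l])
                for k in range(len(vecs)) for l in range(k + 1, len(vecs))]
        per_type.append(np.mean(sims))    # inner expectation in Eq. (2)
    return float(np.mean(per_type))       # outer expectation over types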
Figure 1 shows strong anisotropy effects in a number of models. These findings are consistent with Ethayarajh (2019), though we use slightly different metrics. The plots show the expected cosine ($S_{inter}$ and $S_{intra}$) as a function of layer. For efficiency, we approximate $S_{intra}$ by imposing a limit of 1,000 samples for frequent types $t$ with $|\Phi(t)| > 1000$. From the figure we can see the following: • Both $S_{inter}$ and $S_{intra}$ are high ($\gg 0$) across almost all the layers and all the models. In particular, as reported in Ethayarajh (2019), GPT2 is relatively more anisotropic. • $S_{inter}$ tends to increase with layer, in contrast with $S_{intra}$, which in general decreases, with fluctuations. This means that embeddings for different types move closer to one another at deeper layers, while embeddings for instances of the same type spread apart. • The last layer is often special. Note that the last layer has smaller cosines than the second-to-last in most cases, with the notable exception of GPT2. In summary, we observe large cosines (across layers/models), especially for the GPT2 model. When cosines are close to 1, embeddings lie in a subspace defined by a very narrow cone (Ethayarajh, 2019). One might expect embeddings to be more effective if they took advantage of a larger subspace. Are these models missing an opportunity to reap the benefits of isotropy (Mu & Viswanath, 2018)? We answer this question in the following sections.
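As a preview of the clustering-and-center-shifting analysis used to answer that question, here is a minimal sketch under our reading of the contributions; the number of clusters is an illustrative hyperparameter, not a value from the paper.

import numpy as np
from sklearn.cluster import KMeans

def cluster_and_shift(X, n_clusters=10, seed=0):
    # X: (num_embeddings, dim) embeddings from one layer of one model
    labels = KMeans(n_clusters=n_clusters, random_state=seed).fit_predict(X)
    centered = []
    for c in range(n_clusters):
        cluster = X[labels == c]
        centered.append(cluster - cluster.mean(axis=0))  # shift cluster to origin
    # isotropy (near-zero expected cosine) can be re-measured within each
    # centered cluster, instead of over the whole space
    return centered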
The authors investigate the token embedding space of a variety of contextual embedding models for natural language. Using techniques based on nearest neighbors, clustering, and PCA, they report a variety of results on local dimensionality / anisotropy / clustering / manifold structure in these embedding models which are of general interest to scientists and practitioners hoping to understand these models. These include findings of (local) isotropy in the embeddings when appropriately clustered and shifted, and an apparent manifold structure in the GPT models.
SP:8fe8ad33a783b2f98816e57e88d20b67fed50e8d
In Search of Lost Domain Generalization
1 INTRODUCTION. Machine learning systems often fail to generalize out-of-distribution, crashing in spectacular ways when tested outside the domain of the training examples (Torralba and Efros, 2011). The over-reliance of learning systems on the training distribution manifests widely. For instance, self-driving car systems struggle to perform under conditions different from those of training, including variations in light (Dai and Van Gool, 2018), weather (Volk et al., 2019), and object poses (Alcorn et al., 2019). As another example, systems trained on medical data collected in one hospital do not generalize to other health centers (Castro et al., 2019; AlBadawy et al., 2018; Perone et al., 2019; Heaven, 2020). Arjovsky et al. (2019) suggest that failing to generalize out-of-distribution is failing to capture the causal factors of variation in data, clinging instead to easier-to-fit spurious correlations prone to change across domains. Examples of spurious correlations commonly absorbed by learning machines include racial biases (Stock and Cisse, 2018), texture statistics (Geirhos et al., 2018), and object backgrounds (Beery et al., 2018). Alas, the capricious behaviour of machine learning systems out-of-distribution is a roadblock to their deployment in critical applications. Aware of this problem, the research community has spent significant effort during the last decade to develop algorithms able to generalize out-of-distribution. In particular, the literature in Domain Generalization (DG) assumes access to multiple datasets during training, each of them containing examples about the same task, but collected under a different domain or experimental condition (Blanchard et al., 2011; Muandet et al., 2013). The goal of DG algorithms is to incorporate the invariances across these training domains into a classifier, in hopes that such invariances will also hold in novel test domains. Different DG solutions assume different types of invariances, and propose algorithms to estimate them from data. Despite the enormous importance of DG, the literature is scattered: a plethora of different algorithms appear yearly, each of them evaluated under different datasets, neural network architectures, and model selection criteria. Borrowing from the success of standardized computer vision benchmarks such as ImageNet (Russakovsky et al., 2015), the purpose of this work is to perform a rigorous comparison of DG algorithms, as well as to open-source our software for anyone to replicate and extend our analyses. (This paper is a living benchmark; always refer to the latest version available at https://arxiv.org/abs/2007.01434.) This manuscript investigates the question: How useful are different DG algorithms when evaluated in a consistent and realistic setting? To answer this question, we implement and tune fourteen DG algorithms carefully, to compare them across seven benchmark datasets and three model selection criteria. There are three major takeaways from our investigations: • Claim 1: A careful implementation of ERM outperforms the state-of-the-art in terms of average performance across common benchmarks (Table 1, full list in Appendix A.5). • Claim 2: When implementing fourteen DG algorithms in a consistent and realistic setting, no competitor outperforms ERM by more than one point (Table 3).
• Claim 3: Model selection is non-trivial for DG, yet it affects results (Table 3). As such, we argue that DG algorithms should specify their own model selection criteria. As a result of our research, we release DOMAINBED, a framework to streamline rigorous and reproducible experimentation in DG. Using DOMAINBED, adding a new algorithm or dataset is a matter of a few lines of code. A single command runs all experiments, performs all model selections, and auto-generates all the result tables included in this work. DOMAINBED is a living project: we welcome pull requests from fellow researchers to update the available algorithms, datasets, model selection criteria, and result tables. Section 2 kicks off our exposition with a review of the DG setup. Section 3 discusses the difficulties of model selection in DG and makes recommendations for a path forward. Section 4 introduces DOMAINBED, describing the features included in the initial release. Section 5 discusses the experimental results of running the entire DOMAINBED suite, illustrating the competitive performance of ERM and the importance of model selection criteria. Finally, Section 6 offers our view on future research directions in DG. Appendix A reviews one hundred articles spanning a decade of research in DG, summarizing the experimental performance of over thirty algorithms. 2 THE PROBLEM OF DOMAIN GENERALIZATION. The goal of supervised learning is to predict values $y \in \mathcal{Y}$ of a target random variable $Y$, given values $x \in \mathcal{X}$ of an input random variable $X$. Predictions $\hat{y} = f(x)$ about $x$ originate from a predictor $f : \mathcal{X} \to \mathcal{Y}$. We often decompose predictors as $f = w \circ \phi$, where we call $\phi : \mathcal{X} \to \mathcal{H}$ the featurizer, and $w : \mathcal{H} \to \mathcal{Y}$ the classifier. To solve the prediction task we collect the training dataset $D = \{(x_i, y_i)\}_{i=1}^{n}$, which contains identically and independently distributed (i.i.d.) examples from the joint probability distribution $P(X, Y)$. Given a loss function $\ell : \mathcal{Y} \times \mathcal{Y} \to [0, \infty)$ measuring prediction error, supervised learning seeks the predictor minimizing the risk $\mathbb{E}_{(x, y) \sim P}[\ell(f(x), y)]$. Since we only have access to the data distribution $P(X, Y)$ via the dataset $D$, we instead search for a predictor minimizing the empirical risk $\frac{1}{n}\sum_{i=1}^{n} \ell(f(x_i), y_i)$ (Vapnik, 1998). The rest of this paper studies the problem of Domain Generalization (DG), an extension of supervised learning where training datasets from multiple domains (or environments) are available to train our predictor (Blanchard et al., 2011). Each domain $d$ produces a dataset $D_d = \{(x_i^d, y_i^d)\}_{i=1}^{n_d}$ containing i.i.d. examples from some probability distribution $P(X^d, Y^d)$, for all training domains $d \in \{1, \ldots, d_{tr}\}$. The goal of DG is out-of-distribution generalization: learning a predictor able to perform well at some unseen test domain $d_{tr} + 1$. Since no data about the test domain is available during training, we must assume the existence of statistical invariances across training and testing domains, and incorporate such invariances (but nothing else) into our predictor. The type of invariance assumed, as well as how to estimate it from the training datasets, varies between DG algorithms. We review a hundred articles in DG spanning a decade of research and thirty algorithms in Appendix A.5. DG differs from unsupervised domain adaptation, where unlabeled data from the test domain is available during training (Pan and Yang, 2009; Patel et al.
, 2015; Wilson and Cook, 2018). Table 2 compares different machine learning setups to highlight the nature of DG problems. The causality literature refers to DG as learning from multiple environments (Peters et al., 2016; Arjovsky et al., 2019). Although challenging, the DG framework can capture some of the difficulty of real prediction problems, where unforeseen distributional discrepancies between training and testing data are surely expected. At the same time, the framework can be limiting: in many real-world scenarios there may be external variables informing about task relatedness (space, time, annotations) that the DG framework ignores. 3 MODEL SELECTION AS PART OF THE LEARNING PROBLEM. Here we discuss issues surrounding model selection (choosing hyperparameters, training checkpoints, architecture variants) in DG and make specific recommendations for a path forward. Because we lack access to a validation set identically distributed to the test data, model selection in DG is not as straightforward as in supervised learning. Some works adopt heuristic strategies whose behavior is not well-studied, while others simply omit a description of how to choose hyperparameters. This leaves open the possibility that hyperparameters were chosen using the test data, which is not methodologically sound. Differences in results arising from inconsistent tuning practices may be misattributed to the algorithms under study, complicating fair assessments. We believe that much of the confusion surrounding model selection in DG arises from treating it as merely a question of experimental design. To the contrary, model selection requires making theoretical assumptions about how the test data relates to the training data. Different DG algorithms make different assumptions, and it is not clear a priori which ones are correct, or how they influence the model selection criterion. Indeed, choosing reasonable assumptions is at the heart of DG research. Therefore, a DG algorithm without a strategy to choose its hyperparameters should be regarded as incomplete. Recommendation 1: A DG algorithm should be responsible for specifying a model selection method. While algorithms without well-justified model selection methods are incomplete, they may be useful stepping stones in a research agenda. In this case, instead of using an ad-hoc model selection method, we can evaluate incomplete algorithms by considering an oracle model selection method, where we select hyperparameters using some data from the test domain. Of course, it is important to avoid invalid comparisons between oracle results and baselines tuned without an oracle method. Also, unless we restrict access to the test domain data somehow, we risk obtaining meaningless results (we could just train on such test domain data using supervised learning). Recommendation 2: Researchers should disclaim any oracle-selection results as such and specify policies to limit access to the test domain. 3.1 THREE MODEL SELECTION METHODS FOR DG. Having made broad recommendations, we review and justify three model selection criteria for DG. Appendix B.3 illustrates these with a specific example. Training-domain validation: We split each training domain into training and validation subsets. We train models using the training subsets, and choose the model maximizing the accuracy on the union of validation subsets. This strategy assumes that the training and test examples follow similar distributions.
For example, Ben-David et al. (2010) bound the test error of a classifier with the divergence between training and test domains. Leave-one-domain-out validation: Given $d_{tr}$ training domains, we train $d_{tr}$ models with equal hyperparameters, each holding one of the training domains out. We evaluate each model on its held-out domain, and average the accuracies of these $d_{tr}$ models over their held-out domains. Finally, we choose the model maximizing this average accuracy, retrained on all $d_{tr}$ domains. This strategy assumes that training and test domains follow a meta-distribution over domains, and that our goal is to maximize the expected performance under this meta-distribution. Note that leaving $k > 1$ domains out would greatly increase the number of experiments, and introduces a hyperparameter $k$. Test-domain validation (oracle): We choose the model maximizing the accuracy on a validation set that follows the distribution of the test domain. Following our earlier recommendation to limit test domain access, we allow one query (the last checkpoint) per choice of hyperparameters, disallowing early stopping. Recall that this is not a valid benchmarking methodology. Oracle-based results can be either optimistic, because we select models using the test distribution, or pessimistic, because the query limit reduces the number of considered hyperparameters. We also tried limiting the size of the oracle test set instead of the number of queries, but this led to unacceptably high variance.
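As an illustration of the second criterion, the following is a schematic sketch of leave-one-domain-out selection; train_and_eval is a placeholder for the experiment harness, not a DOMAINBED function.

def leave_one_domain_out(train_domains, hparam_grid, train_and_eval):
    # train_and_eval(train_sets, eval_set, hparams) -> accuracy (placeholder)
    best_hparams, best_score = None, float("-inf")
    for hparams in hparam_grid:
        accs = []
        for held_out in train_domains:
            rest = [d for d in train_domains if d is not held_out]
            accs.append(train_and_eval(rest, held_out, hparams))
        score = sum(accs) / len(accs)  # average held-out-domain accuracy
        if score > best_score:
            best_hparams, best_score = hparams, score
    # the chosen model is then retrained on all training domains
    return best_hparams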
This paper critically re-examines research in domain generalisation (DG), ie building models that robustly generalise to out-of-distribution data. It observes that existing methods are hard to compare, in particular due to unclear hyper-parameter and model selection criteria. It introduces a common benchmark suite including a well designed model selection procedure, and re-evaluates existing methods on this suite. The results show that under such controlled evaluation, the benefit of existing DG methods over vanilla empirical risk minimisation (ERM) largely disappear. This raises the concern that existing DG methods might be over-tuned and hard to replicate. By releasing the controlled benchmark suite, future research progress can be more reliably measured.
SP:9e4a85fa5d76f345b5a38b6f86710a53e1d08503
Sparse Uncertainty Representation in Deep Learning with Inducing Weights
1 Introduction. Deep learning models are becoming deeper and wider than ever before. From image recognition models such as ResNet-101 (He et al., 2016a) and DenseNet (Huang et al., 2017) to BERT (Xu et al., 2019) and GPT-3 (Brown et al., 2020) for language modelling, deep neural networks have found consistent success in fitting large-scale data. As these models are increasingly deployed in real-world applications, calibrated uncertainty estimates for their predictions become crucial, especially in safety-critical areas such as healthcare. In this regard, Bayesian Neural Networks (BNNs) (MacKay, 1995; Blundell et al., 2015; Gal & Ghahramani, 2016; Zhang et al., 2020) and deep ensembles (Lakshminarayanan et al., 2017) represent two popular paradigms for estimating uncertainty, which have shown promising results in applications such as (medical) image processing (Kendall & Gal, 2017; Tanno et al., 2017) and out-of-distribution detection (Ovadia et al., 2019). Though progress has been made, one major obstacle to scaling up BNNs and deep ensembles is their high storage cost. Both approaches require parameter counts several times higher than their deterministic counterparts. Although recent efforts have improved memory efficiency (Louizos & Welling, 2017; Świątkowski et al., 2020; Wen et al., 2020; Dusenberry et al., 2020), these still use more parameters than a deterministic neural network. This is particularly problematic for hardware-constrained edge devices, when on-device storage is required due to privacy regulations. Meanwhile, an infinitely wide BNN becomes a Gaussian process (GP), which is known for good uncertainty estimates (Neal, 1995; Matthews et al., 2018; Lee et al., 2018). But perhaps surprisingly, this infinitely wide BNN is "parameter efficient", as its "parameters" are effectively the datapoints, which have a considerably smaller memory footprint than explicitly storing the network weights. In addition, sparse posterior approximations store a smaller number of inducing points instead (Snelson & Ghahramani, 2006; Titsias, 2009), making sparse GPs even more memory efficient. Can we bring the advantages of sparse approximations in GPs, which are infinitely wide neural networks, to finite-width deep learning models? We provide an affirmative answer regarding memory efficiency, by proposing an uncertainty quantification framework based on sparse uncertainty representations. We present our approach in the BNN context, but the proposed approach is also applicable to deep ensembles. In detail, our contributions are as follows: • We introduce inducing weights, an auxiliary variable method with lower-dimensional counterparts to the actual weight matrices, for variational inference in BNNs, as well as a memory-efficient parameterisation and an extension to ensemble methods (Section 3.1). • We extend Matheron's rule to facilitate efficient posterior sampling (Section 3.2). • We provide an in-depth computational complexity analysis (Section 3.3), showing a significant advantage in terms of parameter efficiency. • We show the connection to sparse (deep) GPs, in that inducing weights can be viewed as projected noisy inducing outputs in pre-activation output space (Section 5.1). • We apply the proposed approach to BNNs and deep ensembles.
Experiments in classification, model robustness, and out-of-distribution detection tasks show that our inducing weight approaches achieve performance competitive with their counterparts in the original weight space on modern deep architectures for image classification, while reducing the parameter count to ≤ 24.3% of that of a single network. We open-source our proposed inducing weight approach, together with the baseline methods reported in the experiments, as a PyTorch (Paszke et al., 2019) wrapper named bayesianize: https://github.com/microsoft/bayesianize. As demonstrated in Appendix I, our software converts a deterministic neural network into a Bayesian one with a few lines of code:

import bnn  # our pytorch wrapper package
net = torchvision.models.resnet18()  # construct a deterministic ResNet18
bnn.bayesianize_(net, inference="inducing")  # convert it into a Bayesian one

2 Inducing variables for variational inference. Our work is built on variational inference and inducing variables for posterior approximations. Given observations $\mathcal{D} = \{X, Y\}$ with $X = [x_1, \ldots, x_N]$, $Y = [y_1, \ldots, y_N]$, we would like to fit a neural network $p(y|x, W_{1:L})$ with weights $W_{1:L}$ to the data. BNNs posit a prior distribution $p(W_{1:L})$ over the weights, and construct an approximate posterior $q(W_{1:L})$ to the exact posterior $p(W_{1:L}|\mathcal{D}) \propto p(\mathcal{D}|W_{1:L}) p(W_{1:L})$, where $p(\mathcal{D}|W_{1:L}) = p(Y|X, W_{1:L}) = \prod_{n=1}^{N} p(y_n|x_n, W_{1:L})$. Variational inference: Variational inference (Hinton & Van Camp, 1993; Jordan et al., 1999; Zhang et al., 2018a) constructs an approximation $q(\theta)$ to the posterior $p(\theta|\mathcal{D}) \propto p(\theta) p(\mathcal{D}|\theta)$ by maximising a variational lower bound:

$\log p(\mathcal{D}) \geq \mathcal{L}(q(\theta)) := \mathbb{E}_{q(\theta)}[\log p(\mathcal{D}|\theta)] - \mathrm{KL}[q(\theta) \| p(\theta)]$.   (1)

For BNNs, $\theta = \{W_{1:L}\}$, and a simple choice of $q$ is a fully-factorized Gaussian (FFG): $q(W_{1:L}) = \prod_{l=1}^{L} \prod_{i=1}^{d_l^{out}} \prod_{j=1}^{d_l^{in}} \mathcal{N}(m_l^{(i,j)}, v_l^{(i,j)})$, with $m_l^{(i,j)}, v_l^{(i,j)}$ the mean and variance of $W_l^{(i,j)}$, and $d_l^{in}, d_l^{out}$ the respective numbers of inputs and outputs to layer $l$. The variational parameters are then $\phi = \{m_l^{(i,j)}, v_l^{(i,j)}\}_{l=1}^{L}$. Gradients of $\mathcal{L}$ w.r.t. $\phi$ can be estimated with mini-batches of data (Hoffman et al., 2013) and with Monte Carlo sampling from the $q$ distribution (Titsias & Lázaro-Gredilla, 2014; Kingma & Welling, 2014). By setting $q$ to an FFG, a variational BNN can be trained with computational requirements similar to those of a deterministic network (Blundell et al., 2015). Improved posterior approximation with inducing variables: Auxiliary variable approaches (Agakov & Barber, 2004; Salimans et al., 2015; Ranganath et al., 2016) construct the $q(\theta)$ distribution with an auxiliary variable $a$: $q(\theta) = \int q(\theta|a) q(a) \, da$, with the hope that a potentially richer mixture distribution $q(\theta)$ can achieve better approximations. As $q(\theta)$ then becomes intractable, an auxiliary variational lower bound is used to optimise $q(\theta, a)$ (see Appendix B):

$\log p(\mathcal{D}) \geq \mathcal{L}(q(\theta, a)) = \mathbb{E}_{q(\theta, a)}[\log p(\mathcal{D}|\theta)] + \mathbb{E}_{q(\theta, a)}\left[\log \frac{p(\theta)\, r(a|\theta)}{q(\theta|a)\, q(a)}\right]$.   (2)

Here $r(a|\theta)$ is an auxiliary distribution that needs to be specified, and existing approaches often use a "reverse model" for $r(a|\theta)$.
Instead, we define $r(a|\theta)$ in a generative manner: $r(a|\theta)$ is the "posterior" of the following "generative model", whose "evidence" is exactly the prior of $\theta$:

$r(a|\theta) = \tilde{p}(a|\theta) \propto \tilde{p}(a)\tilde{p}(\theta|a)$, such that $\tilde{p}(\theta) := \int \tilde{p}(a)\tilde{p}(\theta|a) \, da = p(\theta)$.   (3)

Plugging Eq. (3) into Eq. (2):

$\mathcal{L}(q(\theta, a)) = \mathbb{E}_{q(\theta)}[\log p(\mathcal{D}|\theta)] - \mathbb{E}_{q(a)}[\mathrm{KL}[q(\theta|a) \| \tilde{p}(\theta|a)]] - \mathrm{KL}[q(a) \| \tilde{p}(a)]$.   (4)

This approach yields an efficient approximate inference algorithm, translating the complexity of inference in $\theta$ to $a$, if $\dim(a) < \dim(\theta)$ and $q(\theta, a) = q(\theta|a) q(a)$ has the following properties: 1. A "pseudo prior" $\tilde{p}(a)\tilde{p}(\theta|a)$ is defined such that $\int \tilde{p}(a)\tilde{p}(\theta|a) \, da = p(\theta)$; 2. The conditionals $q(\theta|a)$ and $\tilde{p}(\theta|a)$ are in the same parametric family, so they can share parameters; 3. Both sampling $\theta \sim q(\theta)$ and computing $\mathrm{KL}[q(\theta|a) \| \tilde{p}(\theta|a)]$ can be done efficiently; 4. The designs of $q(a)$ and $\tilde{p}(a)$ can potentially provide extra advantages (in time and space complexities and/or ease of optimisation). We call $a$ the inducing variable of $\theta$, inspired by variationally sparse GPs (SVGP) with inducing points (Snelson & Ghahramani, 2006; Titsias, 2009). Indeed, SVGP is a special case (see Appendix C): $\theta = f$, $a = u$, the GP prior is $p(f|X) = \mathcal{GP}(0, K_{XX})$, $p(u) = \mathcal{GP}(0, K_{ZZ})$, $\tilde{p}(f, u) = p(u)\, p(f|X, u)$, $q(f|u) = p(f|X, u)$, $q(f, u) = p(f|X, u)\, q(u)$, and $Z$ are the optimisable inducing inputs. The variational lower bound is $\mathcal{L}(q(f, u)) = \mathbb{E}_{q(f)}[\log p(Y|f)] - \mathrm{KL}[q(u) \| p(u)]$, and the variational parameters are $\phi = \{Z, \text{distribution parameters of } q(u)\}$. SVGP satisfies the marginalisation constraint Eq. (3) by definition, and it has $\mathrm{KL}[q(f|u) \| \tilde{p}(f|u)] = 0$. Also, by using a small $M = \dim(u)$ and exploiting the design of the $q$ distribution, SVGP reduces run-time from $O(N^3)$ to $O(NM^2 + M^3)$, where $N$ is the number of inputs in $X$; meanwhile it also makes storing a full Gaussian $q(u)$ affordable. Lastly, $u$ can be whitened, leading to the "pseudo prior" $\tilde{p}(f, v) = p(f|X, u = K_{ZZ}^{1/2} v)\,\tilde{p}(v)$, $\tilde{p}(v) = \mathcal{N}(v; 0, I)$, which could bring potential benefits in optimisation. We emphasise that the introduction of the "pseudo prior" does not change the probabilistic model as long as the marginalisation constraint Eq. (3) is satisfied. In the rest of the paper we assume the constraint Eq. (3) holds and write $p(\theta, a) := \tilde{p}(\theta, a)$. It might seem unclear how to design such a $\tilde{p}(\theta, a)$ for an arbitrary probabilistic model; however, for a Gaussian prior on $\theta$, the rules for computing conditional Gaussian distributions can be used to construct $\tilde{p}$. In Section 3 we exploit these rules to develop an efficient approximate inference method for Bayesian neural networks with inducing weights.
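To illustrate that last point, here is a small numpy sketch (ours, with illustrative dimensions) of building a pseudo prior for a Gaussian prior via the conditional Gaussian rules, and checking that marginalising the inducing variable recovers the original prior covariance, i.e., the constraint in Eq. (3).

import numpy as np

d_theta, d_a = 4, 2
rng = np.random.default_rng(0)
L = rng.normal(size=(d_theta + d_a, d_theta + d_a))
S = L @ L.T + np.eye(d_theta + d_a)   # joint covariance over (theta, a)
S_tt = S[:d_theta, :d_theta]          # prior covariance of theta
S_ta = S[:d_theta, d_theta:]          # cross-covariance Cov(theta, a)
S_aa = S[d_theta:, d_theta:]          # covariance of the inducing variable a

# Pseudo prior: p~(a) = N(0, S_aa) and p~(theta|a) = N(M a, C), with
M = S_ta @ np.linalg.inv(S_aa)
C = S_tt - M @ S_ta.T

# Marginalising out a gives Cov(theta) = C + M S_aa M^T = S_tt,
# so p~(theta) = p(theta) as required.
assert np.allclose(C + M @ S_aa @ M.T, S_tt)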
This work proposes a specific parametrisation for the Gaussian prior and approximate posterior distribution in variational Bayesian neural networks in terms of inducing weights. The general idea is an instance of the sparse variational inference scheme for GPs proposed by Titsias back in 2009: for a given model with a prior p(W), perform variational inference on an extended model with a hierarchical prior p(U)p(W|U) that has the same marginal p(W) = \int p(U)p(W|U)dU as the original model. The authors then consider "U" to be auxiliary weights that are jointly Gaussian with the actual weights "W" and use the decomposition p(W|U)p(U), q(W|U)q(U) for the prior and approximate posterior (which can easily be computed via the conditional Gaussian rules). Furthermore, they "tie" (almost) all of the parameters between q(W|U) and p(W|U) (similarly to Titsias, 2009). The main benefit of these two choices is that since the mean and covariance of the Gaussian distribution over W conditioned on U can be efficiently represented as functions of U, whenever dim(U) << dim(W) we get reductions in memory for storing the distributions over the parameters of the network. The authors furthermore discuss how to efficiently parametrize the joint distribution over W and U, discuss different choices for q(U) (which can lead to either traditional VI or something like deep ensembles), and show how more efficient sampling from q(W|U) can be realised via an extension of Matheron's rule to the case of matrix random variables. Finally, they evaluate their method against traditional mean-field variational Bayesian neural networks and deep ensembles on several tasks that include regression, classification, calibration, and OOD performance.
SP:04abdf6d039513f23e00e6686832cd4b950f1d75
Hybrid-Regressive Neural Machine Translation
1 INTRODUCTION. Although autoregressive translation (AT) has become the de facto standard for Neural Machine Translation (Bahdanau et al., 2015), its nature of generating target sentences sequentially (e.g., from left to right) makes it challenging to respond quickly in a production environment. One straightforward solution is non-autoregressive translation (NAT) (Gu et al., 2017), which predicts the entire target sequence in one shot. However, such one-pass NAT models lack dependencies between target words and still struggle to produce smooth translations, despite many developed efforts (Ma et al., 2019; Guo et al., 2019a; Wang et al., 2019b; Shao et al., 2019; Sun et al., 2019). Recent studies show that extending one-pass NAT to multi-pass NAT, so-called iterative refinement (IR-NAT), is expected to break the performance bottleneck (Lee et al., 2018; Ghazvininejad et al., 2019; Gu et al., 2019; Guo et al., 2020; Kasai et al., 2020a). Unlike one-pass NAT, which outputs the prediction immediately, IR-NAT takes the translation hypothesis from the previous iteration as a reference and repeatedly polishes the new translation until it reaches the predefined iteration count I or no changes appear in the translation. Compared with AT, IR-NAT with I=10 runs 2-5 times faster with considerable translation accuracy, as reported by Guo et al. (2020). However, we highlight that the fast decoding of IR-NAT heavily relies on a small batch size and a GPU, which is rarely mentioned in prior studies. (Unfortunately, such a decoding setting is not common in practice: NMT systems deployed on GPUs tend to use larger batches to increase translation throughput, while a batch size of 1 is used more frequently in offline systems running on CPUs, e.g., smartphones.) Without loss of generality, we take Mask-Predict (MP) (Ghazvininejad et al., 2019), a typical IR-NAT paradigm based on the conditional masked language model, as an example. Figure 1 illustrates that when the batch exceeds 8, MP (I=10) already runs slower than AT, and the situation is even worse on CPU. Further analysis shows that the increase in batch size leads to degraded efficiency of parallel computing in NAT models. (An early experiment shows that when the batch size increases from 1 to 32, the latency of AT is reduced by 22 times, while that of MP (I=10) is reduced by only four times. Latency is measured as the average time to translate a sentence on a constant test set; see Appendix A for details.) To tackle this problem, we first design a synthetic experiment to understand the relationship between target context and iteration count. We mask some proportion of the tokens in the translation generated by a pretrained AT model and take the result as the decoder input of the pretrained MP. We then find, surprisingly, that even when 70% of the AT hypothesis is masked, the remaining target context can help MP (I=1) compete with the standard MP (I=10) (Figure 2). This result confirms that decoding with multiple iterations in NAT is unnecessary when a good (partial) reference hypothesis is provided. Inspired by this, we propose a two-stage translation prototype: Hybrid-Regressive Translation (HRT). After encoding, HRT first uses an autoregressive decoder (called Skip-AT) to produce a discontinuous translation hypothesis. Concretely, at decoding step $i$, the Skip-AT decoder immediately predicts the $(i+k)$-th token $y_{i+k}$ without generating $y_{i+1}, \ldots, y_{i+k-1}$, where $k$ is a hyperparameter and $k > 1$. Then, a non-autoregressive decoder like MP (called Skip-MP) predicts the previously skipped tokens with one iteration, according to the deterministic context provided by Skip-AT.
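The two-stage procedure can be summarized with the following schematic pseudocode; skip_at_step and skip_mp_fill are placeholder names for the two decoding modes of the shared model, and the token layout within each chunk follows our reading of the description above.

def hrt_decode(src, model, k=2, max_len=200):
    enc = model.encode(src)
    # Stage 1 (Skip-AT): autoregressively emit every k-th target token.
    anchors = []
    while len(anchors) * k < max_len:
        tok = model.skip_at_step(enc, anchors)  # predicts y_{(i+1)k}
        if tok == "<eos>":
            break
        anchors.append(tok)
    # Stage 2 (Skip-MP): place <mask> in the skipped positions and fill
    # them all with a single non-autoregressive pass.
    sketch = []
    for tok in anchors:
        sketch.extend(["<mask>"] * (k - 1) + [tok])
    return model.skip_mp_fill(enc, sketch)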
Since both Skip-AT and Skip-MP share the same model parameters, HRT does not significantly increase the parameter count. To train HRT effectively and efficiently, we further propose joint training guided by curriculum learning and mixed distillation. Experimental results on WMT En↔Ro and En↔De show that HRT is far superior to existing IR-NATs and achieves accuracy comparable to or even better than the original AT (thanks to the proposed training algorithm, a single HRT model supports both hybrid-regressive decoding and autoregressive decoding at inference; here, the AT model refers to the autoregressive teacher model that generates the distillation data), with a consistent 50% decoding speedup across varying batch sizes and devices (GPU, CPU). 2 BACKGROUND. Given a source sentence $x = \{x_1, x_2, \ldots, x_M\}$ and a target sentence $y = \{y_1, y_2, \ldots, y_N\}$, there are several ways to model $P(y|x)$: Autoregressive translation (AT) is the dominant approach in NMT, which decomposes $P(y|x)$ by the chain rule:

$P(y|x) = \prod_{t=1}^{N} P(y_t|x, y_{<t})$   (1)

where $y_{<t}$ denotes the generated prefix translation before time step $t$. However, the existence of $y_{<t}$ requires the model to wait for $y_{t-1}$ to be produced before predicting $y_t$, which precludes parallel computation along the time dimension. Non-autoregressive translation (NAT) was first proposed by Gu et al. (2017), allowing the model to generate all target tokens simultaneously. NAT replaces $y_{<t}$ with a target-independent input $z$ and rewrites Eq. 1 as:

$P(y|x) = P(N|x) \prod_{t=1}^{N} P(y_t|x, z)$   (2)

In Gu et al. (2017), the source embedding is monotonically copied as $z$ according to a fertility model. Subsequently, researchers developed more advanced methods to enhance $z$, such as adversarial source embedding (Guo et al., 2019a), reordered source sentences (Ran et al., 2019), and latent variables (Ma et al., 2019; Shu et al., 2019), but there is still a huge performance gap between AT and NAT. Iterative-refinement-based non-autoregressive translation (IR-NAT) extends traditional one-pass NAT by introducing a multi-pass decoding mechanism (Lee et al., 2018; Ghazvininejad et al., 2019; Gu et al., 2019; Guo et al., 2020; Kasai et al., 2020a). IR-NAT applies a conversion function $\mathcal{F}$ to the deterministic hypothesis of the previous iteration, $y'$, as the alternative to $z$. Common implementations of $\mathcal{F}$ include identity (Lee et al., 2018), random masking (Ghazvininejad et al., 2019), and random deletion (Gu et al., 2019). Thus, we can predict $y$ by:

$P(y|x) = \prod_{t=1}^{N'} P(y'_{m(t)}|x, \mathcal{F}(y'))$   (3)

where $N'$ is the number of refined tokens in $\mathcal{F}(y')$ and $m(t)$ is the real position of the $t$-th refined token in $y'$. In this way, the generation process of IR-NAT is simple: first, the NAT model produces an inaccurate translation as the initial hypothesis, and then iteratively refines it until convergence or until reaching the maximum number of iterations. Mask-Predict (MP) is a typical instance of IR-NAT, trained with a conditional masked language model objective like BERT (Devlin et al., 2019).
In this work, we use MP as the representative of IR-NAT due to its excellent performance and simplicity. In MP, $\mathcal{F}$ randomly masks some tokens over the sequence during training, but at inference selects the predicted tokens with low confidence. 3 IS ITERATIVE REFINEMENT ALL YOU NEED? As mentioned earlier, IR-NAT with multiple iterations slows down severely in some cases. It is natural to think of reducing iterations to alleviate this. This section starts from synthetic experiments on WMT'16 En→Ro and WMT'14 En→De to verify the assumption that a sufficiently good decoder input can help reduce iterations. Here we construct the "good" decoder input from the translation hypothesis produced by an AT model. Models: We use the official MP models released by Ghazvininejad et al. (2019) (https://github.com/facebookresearch/Mask-Predict). Since the authors did not publish their AT baselines, we use the same data to retrain AT models with the standard Transformer-Base configuration (Vaswani et al., 2017) and obtain performance comparable to theirs (see Appendix B for more details). Decoding: AT models decode with a beam size of 5 on both tasks. Then, we replace a certain percentage of AT translation tokens with <mask> and use the result as input to the MP model (see below for the replacement strategy). Unlike the standard MP model, which uses a large beam size (e.g., 5) and iterates several times (e.g., 10), the MP model used here iterates only once with beam size 1. We substitute all input <mask> tokens with MP's predictions to obtain the final translation. We report case-sensitive tokenized BLEU scores via multi-bleu.perl. Mask Strategy: We tested 4 strategies to mask AT translations: Head, Tail, Random, and Chunk. Given the masking rate $p_{mask}$ and the translation length $N$, the number of masked tokens is $N_{mask} = \max(1, \lfloor N \times p_{mask} \rfloor)$. Head/Tail always masks the first/last $N_{mask}$ tokens, while Random masks the translation randomly. Chunk is slightly different from the above strategies. It first divides the target sentence into $C$ chunks, where $C = \lceil N/k \rceil$ and $k$ is the chunk size. Then, in each chunk, we retain the first token but mask the other $k-1$ tokens. Thus, the actual masking rate in Chunk is $1 - 1/k$ instead of $p_{mask}$. To exclude randomness, we ran Random three times with different seeds and report the average results. 3.1 RESULTS. The experimental results are illustrated in Figure 2, where we can see the following. A balanced bidirectional context is critical. Compared with Tail and Head, it is obvious that Random and Chunk both perform better. We attribute this to the benefit of the bidirectional context in Random and Chunk (Devlin et al., 2019), because Tail and Head can only provide unidirectional context (i.e., prefix or suffix). In addition, comparing Chunk with Random, we find that Chunk is moderately but consistently superior to Random, even when more tokens are masked. For instance, on the WMT En-De task, when the chunk size is 4 (a masking rate of 75%), the BLEU score of Chunk is 27.03, which is +0.3 BLEU higher than that of Random with a masking rate of 70%. Because the difference between Chunk and Random lies only in the distribution of <mask>, this experiment indicates that distributing <mask> uniformly over the sequence is better than placing it randomly. Small beams and one iteration are sufficient.
Compared with the standard MP with a beam size of 5 and 10 iterations, it is interesting to find that even if only 30%-40% of the AT translation is exposed, our MP using greedy search and one iteration can achieve quite comparable performance.
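For reference, the four masking strategies can be implemented in a few lines; this is a sketch under our reading of the description above, with k and the seed as illustrative defaults.

import math
import random

def mask_positions(n_tokens, p_mask, strategy, k=4, seed=0):
    # returns the set of positions to replace with <mask>
    n_mask = max(1, math.floor(n_tokens * p_mask))
    if strategy == "head":
        return set(range(n_mask))                       # first N_mask tokens
    if strategy == "tail":
        return set(range(n_tokens - n_mask, n_tokens))  # last N_mask tokens
    if strategy == "random":
        return set(random.Random(seed).sample(range(n_tokens), n_mask))
    if strategy == "chunk":
        # keep the first token of each k-token chunk, mask the other k-1;
        # the effective masking rate becomes 1 - 1/k rather than p_mask
        return {i for i in range(n_tokens) if i % k != 0}
    raise ValueError(strategy)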
This paper proposes a hybrid-regressive machine translation (HRT) approach—combining autoregressive (AT) and non-autoregressive (NAT) translation paradigms: it first uses an AT model to generate a “gappy” sketch (every other token in a sentence), and then applies a NAT model to fill in the gaps with a single pass. As a result the AT part latency is roughly reduced by half compared to a full AT baseline. The AT and NAT models share a majority part of the parameters and can be trained jointly with a carefully designed curriculum learning procedure. Experiments on several MT benchmarks show that the proposed approach achieves speedup over the full AT baseline with comparable translation quality.
SP:4b4f70092c9fceabdc76c6ed5c5cf83c7791e119
D3C: Reducing the Price of Anarchy in Multi-Agent Learning
1 INTRODUCTION. We consider a setting composed of multiple interacting artificially intelligent agents. These agents will be instantiated by humans, corporations, or machines with specific individual incentives. However, it is well known that the interactions between individual agent goals can lead to inefficiencies at the group level, for example, in environments exhibiting social dilemmas (Braess, 1968; Hardin, 1968; Leibo et al., 2017). In order to resolve these inefficiencies, agents must reach a compromise. Any arbitration mechanism that leverages a central coordinator (for example, the VCG mechanism (Clarke, 1971)) faces challenges when attempting to scale to large populations. The coordinator's task becomes intractable as it must both query preferences from a larger population and make a decision accounting for the exponential growth of agent interactions. If agents or their designers are permitted to modify their incentives over time, the principal must collect all this information again, exacerbating the computational burden. A central coordinator represents a single point of failure for the system, whereas one motivation for multi-agent systems research inspired by nature (e.g., humans, ants, the body, etc.) is robustness to node failures (Edelman and Gally, 2001). Therefore, we focus on decentralized approaches. A trivial form of decentralized compromise is to require every agent to minimize the group loss (maximize welfare). Leaving the optimization problem aside, this removes inefficiency but, similar to a mechanism with a central coordinator, requires communicating all goals between all agents, an expensive step and one with real consequences for existing distributed systems like wireless sensor networks (Kulkarni et al., 2010), where transmitting a signal saps a node's energy budget. There is also the obvious issue that this compromise may not appeal to an individual agent, especially one that is expected to trade its low-loss state for a higher average group loss. One additional, more subtle consequence of optimizing the group loss is that it cannot distinguish between behaviors in environments with a group loss that is constant sum, for instance, in zero-sum games. But zero-sum games have rich structure to which we would like agents to respond. Electing a team leader (or voting on a decision) implies one candidate (decision) wins while another loses. Imagine two agents who differ on a binary preference, each trying to minimize their probability of losing. A group loss is indifferent; we prefer that the agents play the game (and, in this case, argue their points). Design Criteria: We seek an approach to compromise in multi-agent systems that applies to the setting just described. The celebrated Myerson-Satterthwaite theorem (Arrow, 1970; Satterthwaite, 1975; Green and Laffont, 1977; Myerson and Satterthwaite, 1983) states that no mechanism exists that simultaneously achieves optimal efficiency (welfare-maximizing behavior), budget balance (no taxing agents and burning side-payments), appeals to rational individuals (individuals want to opt in to the mechanism), and is incentive compatible (the resulting behavior is a Nash equilibrium). Given this impossibility result, we aim to design a mechanism that approximates weaker notions of these criteria.
In addition, the mechanism should be decentralized, extensible to large populations, and able to adapt to learning agents with evolving incentives in possibly non-stationary environments. Design: We formulate compromise as agents mixing their incentives with others. In other words, an agent may become incentivized to minimize a mixture of their loss and other agents' losses. We design a decentralized meta-algorithm to search over the space of these possible mixtures. We model the problem of efficiency using the price of anarchy. The price of anarchy, $\rho \in [1, \infty)$, is a measure of inefficiency from algorithmic game theory, with lower values indicating more efficient games (Nisan et al., 2007). Forcing agents to minimize a group (average) loss with a single local minimum results in a "game" with $\rho = 1$. Note that any optimal group loss solution is also Pareto-efficient. Computing the price of anarchy of a game is intractable in general. Instead, we derive a differentiable upper bound on the price of anarchy that agents can optimize incrementally over time. Differentiability of the bound makes it easy to pair the proposed mechanism with, for example, deep learning agents that optimize via gradient descent (Lerer and Peysakhovich, 2017; OpenAI et al., 2019). Budget balance is achieved exactly by placing constraints on the allowable mixtures of losses. We appeal to individual rationality in three ways. One, we initialize all agents to optimize only their own losses. Two, we include penalties for agents that deviate from this state and mix their losses with others. Three, we show empirically on several domains that opting in to the proposed mechanism results in better individual outcomes. We also provide specific, albeit narrow, conditions under which agents may achieve a Nash equilibrium, i.e., under which the mechanism is incentive compatible, and demonstrate the agents achieving a Nash equilibrium under our proposed mechanism in a traffic network problem. The approach we propose divides the loss mixture coefficients among the agents to be learned individually; critically, the agents do not need to observe, or directly differentiate with respect to, the other agents' strategies. In this work, we do not tackle the challenge of scaling communication of incentives to very large populations; we leave this to future work. Under our approach, scale can be achieved through randomly sharing incentives according to the learned mixture weights or through sparse optimization over the simplex (Pilanci et al., 2012; Kyrillidis et al., 2013; Li et al., 2016). Our Contribution: We propose a differentiable, local estimator of game inefficiency, as measured by the price of anarchy. We then present two instantiations of a single decentralized meta-algorithm, one 1st-order (gradient feedback) and one 0th-order (bandit feedback), that reduce this inefficiency. This meta-algorithm is general and can be applied to any group of individual agent learning algorithms. This paper focuses on how to enable a group of agents to respond to an unknown environment and minimize overall inefficiency. Agents with distinct losses may find their incentives well aligned to the given task; however, they may instead encounter a social dilemma (Sec. 3). We also show that our approach leads to interesting behavior in scenarios where agents may need to sacrifice team reward to save an individual (Sec. F.4) or need to form parties and vote on a new team direction (Sec. 3.4).
Ideally, one meta-algorithm would allow a multi-agent system to perform sufficiently well in all these scenarios. The approach we propose, D3C (Sec. 2), is not that meta-algorithm, but it represents a holistic effort to combine critical ingredients that we hope takes a step in the right direction. (D3C is agnostic to any action or strategy semantics. We are interested in rich environments where high-level actions with semantics such as "cooperation" and "defection" are not easily extracted or do not exist.) 2 DYNAMICALLY CHANGING THE GAME. In our approach, agents may consider slight redefinitions of their original losses, thereby changing the definition of the original game. Critically, this is done in a way that conserves the original sum of losses (budget-balanced), so that the original group loss can still be measured. In this section, we derive our approach to minimizing the price of anarchy in several steps. First, we formulate minimizing the price of anarchy via compromise as an optimization problem. Second, we specifically consider compromise as the linear mixing of agent incentives. Next, we define a local price of anarchy and derive an upper bound that agents can differentiate. Then, we decompose this bound into a set of differentiable objectives, one for each agent. Finally, we develop a gradient estimator to minimize the agent objectives in settings with bandit feedback (e.g., RL), which enables scalable decentralization. 2.1 NOTATION AND TRANSFORMED LOSSES. Let agent $i$'s loss be $f_i(x) : x \in \mathcal{X} \to \mathbb{R}$, where $x$ is the joint strategy of all agents. We denote the joint strategy at iteration $t$ by $x_t$ when considering discrete updates and by $x(t)$ when considering continuous-time dynamics. Let $f^A_i(x)$ denote agent $i$'s transformed loss, which mixes losses among agents. Let $f(x) = [f_1(x), \ldots, f_n(x)]^\top$ and $f^A(x) = [f^A_1(x), \ldots, f^A_n(x)]^\top$, where $n \in \mathbb{Z}$ denotes the number of agents. In general, we require $f^A_i(x) > 0$ and $\sum_i f^A_i(x) = \sum_i f_i(x)$ so that the total loss is conserved (the price of anarchy assumes positive losses; this is accounted for in §2.5 to allow for losses in $\mathbb{R}$); note that the agents are simply exploring the space of possible non-negative group loss decompositions. We consider transformations of the form $f^A(x) = A^\top f(x)$, where each agent $i$ controls row $i$ of $A$ and each row is constrained to the simplex, i.e., $A_i \in \Delta^{n-1}$. $\mathcal{X}^*$ denotes the set of Nash equilibria. $[a; b] = [a^\top, b^\top]^\top$ signifies row stacking of vectors. 2.2 PRICE OF ANARCHY. Nisan et al. (2007) define the price of anarchy as the worst value of an equilibrium divided by the best value in the game. Here, value means the sum of player losses, best means lowest, and Nash is the equilibrium. It is well known that Nash can be arbitrarily bad from both an individual agent and a group perspective; Appendix B presents a simple example and demonstrates how opponent shaping is not a balm for these issues (Foerster et al., 2018; Letcher et al., 2018). With the above notation, the price of anarchy, $\rho$, is defined as

$\rho_{\mathcal{X}}(f^A) \stackrel{\text{def}}{=} \dfrac{\max_{x^* \in \mathcal{X}^*} \sum_i f^A_i(x^*)}{\min_{x \in \mathcal{X}} \sum_i f^A_i(x)} \geq 1$.   (1)

Note that computing the price of anarchy precisely requires solving for both the optimal welfare and the worst-case Nash equilibrium. We explain how we circumvent this issue with a local approximation in §2.4. 2.3 COMPROMISE AS AN OPTIMIZATION PROBLEM.
Given a game, we want to minimize the price of anarchy by perturbing the original agent losses:

$\min_{f' = \mathcal{A}(f),\ \mathbf{1}^\top f' = \mathbf{1}^\top f} \ \rho_{\mathcal{X}}(f') + \nu D(f, f')$   (2)

where $f$ and $f' = \mathcal{A}(f)$ denote the vectors of original and perturbed losses respectively, $\mathcal{A} : \mathbb{R}^n \to \mathbb{R}^n$ is parameterized by weights $A$, $\nu$ is a regularization hyperparameter, and $D$ penalizes deviation of the perturbed losses from the originals or represents constraints through an indicator function. To ensure that minimizing the price of anarchy of the perturbed game improves on the original, we incorporate the constraint that the sum of perturbed losses equals the sum of original losses. We refer to this approach as $\rho$-minimization. Our agents reconstruct their losses using the losses of all agents as a basis. For simplicity, we consider linear transformations of their loss functions, although the theoretical bounds hereafter are independent of this simplification. We also restrict ourselves to convex combinations so that agents do not learn incentives that are directly adverse to other agents. The problem can now be reformulated. Let $\mathcal{A}(f) = A^\top f$ and $D(f, f') = \sum_i D_{KL}(e_i \,\|\, A_i)$, where $A \in \mathbb{R}^{n \times n}$ is a right stochastic matrix (rows are non-negative and sum to 1), $e_i \in \mathbb{R}^n$ is a unit vector with a 1 at index $i$, and $D_{KL}$ denotes the Kullback-Leibler divergence. Note that OpenAI Five (OpenAI et al., 2019) also used a linear mixing approach, where the "team spirit" mixture parameter ($\tau$) is manually annealed throughout training from 0.3 to 1.0 (i.e., $A_{ii} = 1 - 0.8\tau$, $A_{ij} = 0.2\tau$, $j \neq i$). The $A$ matrix is interpretable and reveals the structure of "teams" that evolve and develop over training. In experiments, we measure the relative reward attention for each agent $i$ as $\ln((n-1) A_{ii}) - \ln\left(\sum_{j \neq i} A_{ji}\right)$ to reveal how much agent $i$ attends to its own loss versus the other agents on average (e.g., Figure 4b). This number is 0 when $A_{ij} = \frac{1}{n}$ for all $i, j$. Positive values indicate agent $i$ mostly attends to its own loss. Negative values indicate agent $i$ attends to others' losses more than its own. We also discuss the final $A$ in the election example in §3.4.
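Under our reading of this notation, the mixing step and the relative-reward-attention diagnostic reduce to a few lines of numpy; this is an illustrative sketch, not the authors' code.

import numpy as np

def transformed_losses(A, f):
    # f^A = A^T f, with row i of A owned by agent i and constrained to the simplex
    assert np.allclose(A.sum(axis=1), 1.0) and (A >= 0).all()
    fA = A.T @ f
    assert np.isclose(fA.sum(), f.sum())  # budget balance: total loss conserved
    return fA

def relative_reward_attention(A):
    # ln((n-1) A_ii) - ln(sum_{j != i} A_ji); zero under uniform mixing
    own = np.log((A.shape[0] - 1) * np.diag(A))
    others = np.log(A.sum(axis=0) - np.diag(A))  # column sums minus diagonal
    return own - others

n = 4
A_uniform = np.full((n, n), 1.0 / n)
print(relative_reward_attention(A_uniform))  # ~[0. 0. 0. 0.]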
This paper proposes a (decentralized) method for online adjustment of agent incentives in multi-agent learning scenarios, as a means to obtain higher outcomes for each agent and for the group as a whole. The paper uses the “price of anarchy” (the worst value of an equilibrium divided by the best value in the game) as a proxy for the efficiency of the game outcome, and derive an upper bound on a local price of anarchy that agents can differentiate. In several experiments (a traffic network, the coin game, Cleanup), their method leads to improved individual agent and group outcomes relative to baselines, while avoiding cases of stark division of labor that sometimes emerges when agents directly optimize the sum of all agent rewards.
SP:41b23082a1439aa8601439e27c9abaa33e06959c
Representation and Bias in Multilingual NLP: Insights from Controlled Experiments on Conditional Language Modeling
Inspired by the phenomenon of performance disparity between languages in machine translation, we investigate whether and to what extent languages are equally hard to "conditional-language-model". Our goal is to improve our understanding and expectation of the relationship between language, data representation, size, and performance. We study one-to-one, bilingual conditional language modeling through a series of systematically controlled experiments with the Transformer and the 6 languages from the United Nations Parallel Corpus. We examine character, byte, and word models in 30 language directions and 5 data sizes, and observe indications suggesting a script bias on the character level, a length bias on the byte level, and a word bias that gives rise to a hierarchy in performance across languages. We also identify two types of sample-wise non-monotonicity: while word-based representations are prone to exhibit Double Descent, length can induce unstable performance across the size range studied, in a novel meta-phenomenon which we term erraticity. By eliminating statistically significant performance disparity on the character and byte levels through normalizing length and vocabulary in the data, we show that, in the context of computing with the Transformer, there is no complexity intrinsic to languages other than that related to their statistical attributes, and that performance disparity is not a necessary condition but a byproduct of word segmentation. Our application of statistical comparisons as a fairness measure also serves as a novel rigorous method for the intrinsic evaluation of languages, resolving a decades-long debate on language complexity. While all these quantitative biases leading to disparity are mitigable through a shallower network, we find room for a human bias to be reflected upon. We hope our work helps open up new directions in the area of language and computing that would be fairer and more flexible, and fosters a new transdisciplinary perspective for DL-inspired scientific progress. 1 INTRODUCTION. With a transdisciplinary approach to explore a space at the intersection of Deep Learning (DL) / Neural Networks (NNs), language sciences, and language engineering, we report our undertaking in use-inspired basic research: with an application-related phenomenon as inspiration, we seek fundamental scientific understanding through empirical experimentation. This is not an application or machine translation (MT) paper, but one that strives to evaluate and seek new insights on language in the context of DL, with a consideration to contribute to our evaluation, segmentation, and model interpretation practice in multilingual Natural Language Processing (NLP). Our inspiration: performance disparity in MT. The use case that inspired our investigation is the disparity of MT results reported in Junczys-Dowmunt et al. (2016). Of the 6 official languages of the United Nations (UN), Arabic (AR), English (EN), Spanish (ES), French (FR), Russian (RU), and Chinese (ZH), results with target languages AR, RU, and ZH seem to be worse than those with EN/ES/FR, regardless of the algorithm, be it phrase-based Statistical MT (SMT/Moses (Koehn et al., 2007)) or Neural MT (NMT). (We provide a re-visualization of these results grouped in 6 facets by target language in Figure 4 in Appendix A.) The languages have the same amount of line-aligned, high-quality parallel data available for training, evaluation, and testing. This prompts the question: are some languages indeed harder to translate from or to?
Problem statement: are all languages equally hard to Conditional-Language-Model (CLM)? A similar question concerning (monolingual) language modeling (LMing) was posed in Cotterell et al. (2018) and Mielke et al. (2019), along with the introduction of a method to evaluate LMs with multiway parallel corpora (multitexts) in information-theoretic terms. To explicitly focus on modeling the complexities that may or may not be intrinsic to the languages, we study the more fundamental process of CLMing without performing any translation. This allows us to eliminate confounds associated with generation and other evaluation metrics. One could think of our effort as estimating conditional probabilities with the Transformer, with a bilingual setup where the perplexity of one target language (l_trg) is estimated given the parallel data in one source language (l_src), where l_src ≠ l_trg. We focus on the very basics and examine the first step in our pipeline, input representation, holding everything else constant. Instead of measuring absolute cross-entropy scores, we evaluate the relative differences between languages from across 5 magnitudes of data sizes in 3 different representation types/levels. We consider bias to be present when performance disparity in our Transformer models is statistically significant.

1.1 SUMMARY OF FINDINGS AND CONTRIBUTIONS. In investigating performance disparity as a function of size and data with respect to language and representation on the Transformer in the context of CLMing, we find: 1. In a bilingual (one-to-one) CLMing setup, there is neutralization of source language instances, i.e., there are no statistically significant differences between source language pairs. Only pairs of target languages differ significantly (see Table 1). 2. We identify 2 types of sample-wise non-monotonicity on each of the primary representation levels we studied: (a) Double Descent (Belkin et al., 2019; Nakkiran et al., 2020): on the word level, for all languages, performance at 10^2 lines is typically better than at 10^3 before it improves again at 10^4 and beyond. This phenomenon can also be observed in character models with ZH as a target language, as well as on the word level with non-neural n-gram LMs; (b) erraticity: performance is irregular and exhibits great variance across runs. We find sequence length to be predictive of this phenomenon. We show that this can be rectified by data transformation or hyperparameter tuning. In our study, erraticity affects AR and RU on the byte level, where the sequences are too long with UTF-8 encoding, and ZH when decomposed into strokes on the character level. 3. In eliminating performance disparity through lossless data transformation on the character and byte levels, we resolve language complexity (§ 4 and App. J). We show that, in the context of computing with the Transformer, unless word-based methods are used, there is no linguistic/morphological complexity applicable or necessary. There is no complexity that is intrinsic to a language aside from its statistical properties. Hardness in modeling is relative to and bounded by its representation level (representation relativity). On the character and byte levels, hardness is correlated with statistical properties concerning the sequence length and vocabulary of a language, irrespective of its linguistic typological, phylogenetic, historical, or geographical profile, and can be eliminated.
On the word level, hardness is correlated with vocabulary, and a complexity hierarchy arises through the manual preprocessing step of word tokenization. This complexity/disparity effected by word segmentation cannot be eliminated, due to fundamental qualitative differences in the definition of a "word": a definition that neither holds universally nor is suitable/consistent for fair crosslinguistic comparisons. We find clarification of this expectation of disparity necessary, because more diligent error analyses need to be afforded instead of simply accepting massively disparate results or inappropriately attributing under-performance to linguistic reasons. 4. Representational units of finer granularity can help close the gap in performance disparity. 5. Bigger/overparameterized models can magnify/exacerbate the effects of differences in data statistics. Quantitative biases that lead to disparity are mitigable through numerical methods.

[1] We provide a re-visualization of these results grouped in 6 facets by target language in Figure 4 in Appendix A.

Outline of the paper. In § 2, we define our method and experimental setup. We present our results and analyses on the primary representations in § 3 and those from the secondary set of controls in § 4, in a progressive manner to ease understanding. Meta analyses on fairness evaluation and non-monotonic behavior, along with a discussion on biases, are in § 5. Additional related work is in § 6. We refer our readers to the Appendices for more detailed descriptions/discussions and reports on supplementary experiments.

2 METHOD AND DEFINITIONS. Controlled experiments as basic research for scientific understanding. Using the United Nations Parallel Corpus (Ziemski et al., 2016), the data from which the MT results in Junczys-Dowmunt et al. (2016) stem, we perform a series of controlled experiments on the Transformer, holding the hyperparameter settings for all 30 one-to-one language directions from the 6 languages constant. We control for size (from 10^2 to 10^6 lines) and language with respect to representational granularity. We examine 3 primary representation types (character, byte (UTF-8), and word) and, upon encountering some unusual phenomena, we perform a secondary set of controls with 5 alternate representations: on the character level, Pinyin and Wubi (ASCII representations for ZH phones and character strokes, respectively); on the byte level, code page 1256 (for AR) and code page 1251 (for RU); and on the word level, Byte Pair Encoding (BPE) (Sennrich et al., 2016), an adapted compression algorithm from Gage (1994). These symbolic variants allow us to manipulate the statistical properties of the representations while staying as "faithful" to the language as possible. We adopt this symbolic, data-centric approach because we would like to more directly interpret the confounds, if any, that make language data different from other data types. We operate on a smaller data size range, as this is more common in traditional domain sciences and one of our higher goals is to bridge an understanding between language sciences and engineering (the latter being the dominant focus in NLP). We run statistical tests to identify the strongest correlates of performance and to assess whether the differences between the mean performance of different groups are indeed significant.
We are concerned not with the absolute scores, but with the relations between scores from different languages and the generalizations derived therefrom.

Information-theoretic, fair evaluation with multitexts. Most sequence-to-sequence models are optimized using a cross-entropy loss (see Appendix B for the definition). Cotterell et al. (2018) propose to use "renormalized" perplexity (PP) to evaluate LMs fairly, using the total number of bits divided by some constant. In our case, we choose instead the simpler method of using an "unnormalized" PP, directly using the total number of bits needed to encode the development (dev) set, which has a constant size of 3,077 lines per language.

Disparity/Inequality. In the context of our CLMing experiments, we consider there to be "disparity" or "inequality" between languages l1 and l2 if there are significant differences between the performance distributions of these two languages with respect to each representation. Here, by performance we mean the number of bits required to encode the held-out data using a trained CLM. With 30 directions, there are 15 pairs of source languages (l_src1, l_src2) and 15 pairs of target languages (l_trg1, l_trg2) possible. To assess whether the differences are significant, we perform unpaired two-sided significance tests with the null hypothesis that the score distributions for the two languages are not different. Upon testing for normality with the Shapiro-Wilk test (Shapiro & Wilk, 1965; Royston, 1995), we use the parametric unpaired two-sample Welch's t-test (Welch, 1947) (when normal) or the non-parametric unpaired Wilcoxon test (Wilcoxon, 1945) (when not normal) for the comparisons. We use the implementations in R (R Core Team, 2014) for these 3 tests. To account for the multiple comparisons we are performing, we correct all p-values using Bonferroni's correction (Benjamini & Heller, 2008; Dror et al., 2017) and follow Holm's procedure[2] (Holm, 1979; Dror et al., 2017) to identify the pairs of l1 and l2 with significant differences after correction. We report all 3 levels of significance (α ≤ 0.05, 0.01, 0.001) for a more comprehensive evaluation.

[2] Using the implementation from https://github.com/rtmdrr/replicability-analysis-NLP
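To make the comparison procedure concrete, here is a small Python sketch of the same pipeline (normality check, then Welch's t-test or the unpaired Wilcoxon rank-sum test, then Holm's step-down correction). It is a minimal sketch assuming SciPy instead of R, and the per-language bit scores are synthetic placeholders rather than the paper's data.

import numpy as np
from scipy import stats

def compare_pair(a, b, alpha=0.05):
    # Welch's t-test when both samples pass Shapiro-Wilk, else Wilcoxon rank-sum.
    if stats.shapiro(a).pvalue > alpha and stats.shapiro(b).pvalue > alpha:
        return stats.ttest_ind(a, b, equal_var=False).pvalue
    return stats.ranksums(a, b).pvalue

def holm_correct(pvals):
    # Holm's step-down procedure on Bonferroni-style corrected p-values.
    m, order = len(pvals), np.argsort(pvals)
    adjusted, running_max = np.empty(m), 0.0
    for rank, idx in enumerate(order):
        running_max = max(running_max, (m - rank) * pvals[idx])
        adjusted[idx] = min(1.0, running_max)
    return adjusted

rng = np.random.default_rng(0)
# bits[l]: per-run bits needed to encode the dev set with target language l (synthetic)
bits = {l: rng.normal(mu, 1.0, size=12)
        for l, mu in zip(["AR", "EN", "ES", "FR", "RU", "ZH"], [30, 25, 26, 26, 29, 31])}
langs = sorted(bits)
pairs = [(a, b) for i, a in enumerate(langs) for b in langs[i + 1:]]
raw = np.array([compare_pair(bits[a], bits[b]) for a, b in pairs])
for (a, b), p in zip(pairs, holm_correct(raw)):
    print(a, b, "adjusted p = %.4f" % p)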
Experimental setup. The systematic, identical treatment we give to our data is described as follows, with further preprocessing and hyperparameter details in Appendices B and C, respectively. The distinctive point of our experiment is that the training regime is the same for all (intuition in App. O.1). After filtering length to 300 characters maximum per line in parallel for the 6 languages, we made 3 subsets of the data with 1 million lines each: one having lines in the order of the original corpus (dataset A) and two others randomly sampled (without replacement) from the full corpus (datasets B & C). Lines in all datasets are extracted in parallel and remain fully aligned for the 6 languages. For each run and each representation, there are 30 pairwise directions (i.e., one l_src to one l_trg) that result from the 6 languages. We trained all 150 (for 5 sizes) 6-layer Transformer models for each run using the SOCKEYE Toolkit (Hieber et al., 2018). We optimize using PP and use early stopping if no PP improvement occurs after 3 checkpoints, up to 50 epochs maximum, taking the best checkpoint. Characters and bytes are supposed to mitigate the out-of-vocabulary (OOV) problem on the word level. In order to assess the effect of modeling with finer granularity more precisely, all vocabulary items, even those appearing only once in the train set, are accounted for (i.e., full vocabulary on train, as in Gerz et al. (2018a;b)). But we allow our system to categorize all unknown items in the dev set as unknown (UNK) so as to measure OOVs (open vocabulary on dev (Jurafsky & Martin, 2009)). To identify correlates of performance, we compute Spearman's correlation (Spearman, 1904) with some basic statistical properties of the data (e.g., length, vocabulary size (|V|), type-token ratio, OOV rate) as metrics; a complete list thereof is provided in Appendix F. For each of the 3 primary representations (character, byte, and word), we performed 5 runs total in 5 sizes (10^2-10^6 lines) (runs A0, B0, C0, A1, & A2) and 7 more runs in 4 sizes (10^2-10^5 lines) (A3-7, B1, & C1), also controlling for seeds. For the alternate/secondary representations, we ran 3 runs each in 5 sizes (10^2-10^6 lines) (A0, B0, & C0).
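The correlation analysis described above is essentially a one-liner per metric; the following sketch shows its shape with invented data-statistics vectors (mean line length, |V|, OOV rate) standing in for the real corpus measurements.

import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
bits = rng.normal(28.0, 2.0, size=30)  # dev-set bits for the 30 directions (synthetic)
metrics = {
    "mean_length": rng.normal(120, 15, size=30),
    "vocab_size": rng.integers(60, 12000, size=30),
    "oov_rate": rng.uniform(0.0, 0.2, size=30),
}
for name, values in metrics.items():
    rho, p = spearmanr(values, bits)
    print("%s: rho=%.3f, p=%.3g" % (name, rho, p))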
The paper investigates whether languages are equally hard to Conditional-Language-Model (CLM). To do this, the authors perform controlled experiments by modeling text from parallel data from 6 typologically diverse languages. They pair the languages and perform experiments in 30 directions with Transformers, and compare 3 different unit representations: characters, bytes, and word-level (BPE).
SP:87bda29654ffe25cda14e3b27a6e4b53e2a40164
Bractivate: Dendritic Branching in Medical Image Segmentation Neural Architecture Search
1 INTRODUCTION Researchers manually composing neural networks must juggle multiple goals for their architectures. Architectures must make good decisions; they must be fast; and they should work even with limited computational resources. These goals are challenging to achieve manually, and researchers often spend months attempting to discover the perfect architecture. To overcome these challenges, we turn to the human brain's efficient neural wiring for automated architecture discovery. Neuroscience already underlies core neural network concepts: the perceptron (Rosenblatt, 1958) is directly analogous to a human neuron. One of the brain's fundamental learning mechanisms is dendritic branching (Greenough & Volkmar, 1973), whereby active neurons send out signals for other neurons to form connections, strengthening signals through that neural pathway. This neuroscience concept inspires us to devise Bractivate, a Neural Architecture Search (NAS) algorithm for learning new, efficient UNet architectures, capable of being trained twice as fast as the traditional UNet and often one to two orders of magnitude lighter in terms of trainable parameters. We apply Bractivate to three medical imaging segmentation problems: cell nuclei, electron microscopy, and chest X-ray lung segmentation. Medical image segmentation is a growing field in Deep Learning Computer Assisted Detection (CAD): it is a powerful component in clinical decision support tools and has applications in retinal fundus image, lung scan, and mammography analysis. Most papers now approach medical image segmentation with the UNet (Ronneberger et al., 2015); the model architecture is straightforward: it has symmetric, hierarchical convolutional blocks, which are components of an initial contracting path and a final expanding path, with an apex bottleneck layer. Between parallel contracting and expanding blocks, the traditional UNet contains skip connections that pass information through concatenation (Ronneberger et al., 2015). Traditional UNet skip connections involve feature map aggregation with same-scale convolutional blocks, but recent advances have yielded more complex connections, ranging from the UNet++ (Zhou et al., 2018) to the NasUNet (Weng et al., 2019). While the UNet is a powerful tool, it does have many limitations: 1. The depth necessary for many segmentation tasks is initially unknown, and traditional neural architecture search (NAS) struggles to identify the optimal UNet depth. 2. Researchers often manually choose skip connection locations, leading to potentially missed optimal connections. 3. Scientists need a NAS algorithm addressing many implementation objectives, including computational time, number of model parameters, and robust segmentation performance. On a broader level, discovering efficient UNet architectures is crucial because it can generate simpler models for applications on mobile devices, which need low latency for online learning. In the Telemedicine age, many medical applications rely on mobile Deep Learning to segment medical images and process raw patient data (Xu et al., 2017; Vaze et al., 2020). We address the Medical and Engineering fields' need for efficiency with Bractivate, a NAS algorithm to discover lightweight UNet architectures for medical image segmentation tasks. We present the following three primary contributions: 1.
An evolutionary algorithm that non-randomly samples from a distribution of various UNet model depths and skip connection configurations, with both tensor concatenation and addition operators. 2. "Dendritic Branching"-inspired mutations that, just as in the brain, cause salient UNet blocks to branch to other blocks in the network through dendritic skip connections, creating efficient networks that preserve information signals through the network. 3. Bractivate generates high-performing models with lower space complexity than the current state of the art. The remainder of the paper is structured as follows: In Section 2, we discuss prior works and what gaps in the literature inspire us to propose Bractivate. Then, in Section 3, we discuss the search algorithm and the dendritic branching mutation. Later, in Section 4, we implement our algorithm with various experiments, ranging from changing the search space depth to an ablation study. We report our quantitative and qualitative results, along with baseline comparisons, in Section 5 before concluding in Section 6.

2 RELATED WORKS. Deep learning algorithms are often restricted to manual model design (Simonyan & Zisserman, 2014; He et al., 2016; Oktay et al., 2018; Ronneberger et al., 2015). To automate model schemes, NAS is the process of selecting candidate architectures through various search strategies to achieve optimal performance (Elsken et al., 2019). Advances in NAS have branched into different areas, including evolutionary algorithms (Miller et al., 1989; de Garis, 1990; Yao, 1993; Fogel et al., 1990; Angeline et al., 1994; Real et al., 2018; Yao, 1999) and automatic pattern recognition (Cai et al., 2018; Radosavovic et al., 2020). While both approaches have merit, these works address image classification problems, and although some focus on skip connections, they lack a deeper investigation of their optimal configurations. Recent advances in the UNet have led to alternative skip connection implementations, including addition (Ghamdi et al., 2020), maxout operations (Estrada et al., 2019; Goodfellow et al., 2013), and multiplication by a gating function (Oktay et al., 2018). Ghamdi et al. (2020) report these connections' improved efficacy over traditional concatenation, as they overcome vanishing gradients and preserve salient features. Auto-DeepLab, which Liu et al. (2019) present for semantic segmentation, is a graph-based NAS algorithm that addresses changing model depth and connection locations in hierarchical models. Building off this work, Zhou et al. (2020) propose a similar graph-search algorithm, termed UNet++, for improved NAS; the final model incorporates dense skip connections to achieve multi-scale feature aggregation. Although UNet++ successfully addresses the model depth problem, it ignores the choice of skip connection operator and relies on pretraining and pruning to generate skip connection configurations. The Differentiable Architecture Search (DARTS) algorithm by Liu et al. (2018) continuously relaxes the architecture representation to enable gradient-based optimization. Advancing this algorithm, Chen et al. (2019) propose the Progressive Differentiable Architecture Search algorithm (PDARTS) to allow the searched model's depth to grow during the search; when applied to ImageNet (Deng et al., 2009), CIFAR-10 (Krizhevsky et al., 2009), or CIFAR-100 (Krizhevsky et al., 2009), the total training time is approximately seven hours.
Because the DARTS and PDARTS algorithms are specific to image classification and sequential model architectures, they lack applications for segmentation models. Weng et al. (2019) suggest a NASUNet method with a modified DARTS search for medical imaging segmentation; their approach addresses searching for model parameters in the convolutional blocks to reduce the space complexity found in attention-based (Oktay et al., 2018; Hy, 2018) and recurrent (Alom et al., 2018; Hy, 2018) UNets, yet NASUNet still preserves same-scale concatenation skip connections, overlooking alternative skip connection possibilities across network blocks. Many existing NAS algorithms use modified objective functions for evaluating the searched model performance, e.g., NAS-Bench-101 (Ying et al., 2019) uses the cross-entropy loss, Stochastic Neural Architecture Search (SNAS) (Xie et al., 2019) devises a cost function deemed Memory Access Cost (MAC) that incorporates the floating-point operations (FLOPs) and number of parameters, and PDARTS (Chen et al., 2019) employs an auxiliary loss (Szegedy et al., 2014). To target gaps in the literature related to skip connection search for efficient models, we propose Bractivate, a NAS algorithm inspired by the brain's dendritic branching to facilitate optimal architecture discovery.

3 THE BRACTIVATE NAS ALGORITHM

3.1 DENDRITIC ARBORIZATION Table 1 translates neuroscience into the computational terms we use throughout the paper. In the neuroscience field, dendritic branching occurs when stimulating environments cause neurons to form new connections (Greenough & Volkmar, 1973; Greenough et al., 1985). These neural connections are associated with learning, and even learning-impaired children with fetal alcohol syndrome display lower dendritic branching levels (Hamilton et al., 2010) compared to their healthy peers. This branching phenomenon parallels deep neural networks: in the brain, dendrites form new connections to the hyperactive soma; the perceptron's activation function is to the biological soma as the incoming connections are to dendrites. Perceptrons can be stacked together to form multi-layer perceptrons (Rumelhart et al., 1986), with a parallel architecture similar to the brain's, and this structure underlies convolutional neural networks (LeCun et al., 1995). For the UNet, if we consider each layer in the network's blocks to be a neural soma, then we can think about a block's "activity" as the mean absolute value of its layers' activations, as shown by Equation 1:

A_b = (1/L) Σ_{l=1}^{L} |A_l|    (1)

where A_b represents the block activation, b ∈ B, A_l is the activation of layer l in the block, and L is the total number of layers in the block. Knowing the location b of the block with max(A_b), surrounding blocks then form skip connections around this active node, a process analogous to dendritic branching. We apply this method to target conv and deconv layers, excluding batch normalization layers, as they contain static weights and high values that overwhelm the mutation's layer selection. When new connections are formed across blocks with various tensor dimensions, we overcome spatial dimension mismatch by resizing the incoming connection tensors to the receiving tensor by bilinear interpolation.
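Equation 1 and the selection of the branching target reduce to a mean of absolute activations per block followed by an argmax. The numpy sketch below illustrates this with invented activation tensors; it is not the authors' implementation, and in practice the activations would come from the conv/deconv layers only, with batch-normalization layers excluded as described above.

import numpy as np

rng = np.random.default_rng(0)
# activations[b][l]: activation tensor of layer l in block b (synthetic stand-ins)
activations = [[rng.normal(size=(8, 32, 32)) for _ in range(3)] for _ in range(5)]

def block_activity(block_layers):
    # Equation 1: A_b = (1/L) * sum_l |A_l|, taking |A_l| as a mean magnitude
    return np.mean([np.abs(layer).mean() for layer in block_layers])

scores = np.array([block_activity(b) for b in activations])
target = int(np.argmax(scores))  # the most active block: branching destination
print("branch around block %d (activity %.4f)" % (target, scores[target]))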
3.2 NAS WITH DENDRITIC BRANCHING MUTATIONS. [Figure: the Bractivate search loop: sample a model from the randomly initialized domain, evaluate genome efficiency, apply the dendritic branching mutation, and select the best model.] The genotype encodes the configuration for each block and the skip connection operator type (concatenation or addition). A detailed discussion on the genotype initialization and its micro-architecture is found in Appendix A.1. When initializing the genotype, we constrain the number of feature maps to grow by a factor of 1.5 in the encoder and decrease by the same factor for each block in the decoder.

3.3 EFFICIENT LOSS. NAS methods often focus on accuracy as the main performance metric (Real et al., 2018; Radosavovic et al., 2020), but often lack consideration for the discovered model's space and time complexity. To address this, we propose an efficient loss function for the NAS evaluation step. Traditional binary cross-entropy is given by Equation 2. During the NAS mutations and selections, the search process accelerates, as "better" models have faster training steps.

BCL = −(1/m) Σ_{i=1}^{m} ( y_i × log(ŷ_i) + (1 − y_i) × log(1 − ŷ_i) )    (2)

where m is the number of samples, y_i is the sample image's true segmentation mask tensor, and ŷ_i is the model's prediction tensor. We propose an alternative efficient loss function, Efficient Loss Scaling (ELS). It uses the number of model parameters, P, and the training time per epoch, T.

EFFICIENCY LOSS SCALING We also explore efficiency penalty scaling, where log(P) and log(T) scale the overall loss function through multiplication, hence:

ELS = γ × log(P) × log(T) × BCL    (3)

In our experiments we set γ = 0.01. We use Equation 3 in Section 4.4 during model search. A detailed ablation study on how this equation favors efficient networks and outperforms standard BCL can be found in Appendix A.3.
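Since Equation 3 only rescales the cross-entropy by the two log factors, it fits in a few lines. The sketch below follows the definitions of P, T, and γ given in the text; the mask and prediction arrays are placeholders, and the clipping constant is an added numerical-stability assumption, not part of the paper's equations.

import numpy as np

def bcl(y_true, y_pred, eps=1e-7):
    # Equation 2: binary cross-entropy, clipped for numerical stability
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

def els(y_true, y_pred, num_params, secs_per_epoch, gamma=0.01):
    # Equation 3: ELS = gamma * log(P) * log(T) * BCL
    return gamma * np.log(num_params) * np.log(secs_per_epoch) * bcl(y_true, y_pred)

y = np.random.default_rng(0).integers(0, 2, size=(4, 64, 64)).astype(float)
p = np.full_like(y, 0.5)
print(els(y, p, num_params=2.1e5, secs_per_epoch=14.0))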
The authors propose a neural architecture search (NAS) algorithm inspired by brain physiology. In particular, they propose a NAS algorithm based on neural dendritic branching and apply it to three different segmentation tasks (namely cell nuclei, electron microscopy, and chest X-ray lung segmentation). The authors share their code with the scientific community, which is highly appreciated.
SP:0cab715d71a765b97066673f3a2d0e00d22ffa3c
What to Prune and What Not to Prune at Initialization
1 INTRODUCTION. Computational complexity and overfitting in neural networks are well-established problems (Frankle & Carbin, 2018; Han et al., 2015; LeCun et al., 1990; Denil et al., 2013). We utilize pruning approaches for the following two reasons: 1) to reduce the computational cost of a fully connected neural network, and 2) to reduce overfitting in the network. Given the large number of post-training pruning approaches (Srivastava et al., 2014; Geman et al., 1992; Pan et al., 2016), this paper proposes two pre-training pruning approaches: kstarts and dissipating gradients. Moreover, it appears to be the case that, when isolated from other factors, sparse networks outperform fully connected networks. When not isolated, they perform at least as well up to a percentage of sparsity that depends on the number of parameters in the network. kstarts and dissipating gradients are simple yet effective methods for quickly finding good sparse networks. The approaches exploit the knowledge that a network has multiple underlying p-sparse networks that perform just as well, and in some cases even better, when contrasted with their fully connected counterparts (Frankle & Carbin, 2018). What percentage of sparsity is realized depends largely on the number of parameters originally present in the network. Such sparse networks are potent in preventing overfitting and reducing computational cost. Post-training pruning has several approaches in place, such as adding various regularization schemes to prune the network (Louizos et al., 2017; Pan et al., 2016) or using the second derivative or Hessian of the weights for dropout (LeCun et al., 1990; Hassibi & Stork, 1993). Han et al. (2015), Alford et al. (2019), and Zhu & Gupta (2017) use an efficient iterative pruning method to iteratively increase sparsity. Srivastava et al. (2014) drop out random hidden units with probability p, instead of weights, to avoid overfitting in general. Each of these approaches is effective and achieves good sparsity post-training. We use simple, intuitive models that achieve good results and exploit the fact that a number of subnetworks in a neural network have the potential to individually learn the input (Srivastava et al., 2014). We decide on a sparse network early on, based on the dropout method, and use only that network for training. This provides an edge for faster computation, quicker elimination of excess weights, and reduced generalization error. The sparsity achieved is superior to random dropout. Section II gives a general introduction to all the methods, Section III defines p-sparsity, Section IV provides the algorithms for both approaches, Section V describes the experimental setup and results, Section VI discusses various design choices, Section VII gives a general discussion of results, Section VIII discusses limitations of the approach, and Section IX provides conclusions and final remarks.

2 PRUNING METHODS.

2.1 KSTARTS.

2.1.1 KSTARTS AND EVOLUTIONARY ALGORITHMS. We take the concept of k random starts from Evolutionary Algorithms (Vikhar, 2016), which use a fitness function or heuristic to perform "natural selection" in optimization and search-based problems (Goldberg & Holland, 1988). It is relatively simple to fit genetic algorithms to the problem at hand. Other methods that would be equally effective with a little modification are Hunting Search (Oftadeh et al., 2010),
Natural Evolution Strategies (Wierstra et al., 2008), the firefly algorithm (Yang, 2010), etc. The basic components of the algorithm are: (1) Population: a product of network weights and sparse matrices. (2) Individual: an instance of the population. (3) Fitness function: the heuristic chosen for evaluation of the population.

2.1.2 POPULATION. We first initialize K sparse binary matrices. In every iteration, we multiply the model weights W of the network layer in question with every instance of the K sparse matrices. The resulting set of matrices is our population for that iteration. Each iteration is referred to as a new generation:

population = W ∗ K-SparseMatrices    (1)

2.1.3 INDIVIDUAL. Each individual, in a population of K instances, is a sparse matrix of size equal to the size of the network weights W. The number of 0's and 1's in the sparse matrix is determined by the connectivity factor p, which is further described in Section 3. A sparse matrix with p ≈ 0.5 will have ≈ 50% 0's and ≈ 50% 1's.

2.1.4 EVALUATION/FITNESS FUNCTION. The fitness of an individual is ranked by determining the sum of the entries of each individual in the population as given in Equation 1, such that the fittest individual in a generation is given by Equation 2:

fittest = argmax_ind Σ_{j=1}^{i·c} ind[j]    (2)

where i·c is the size of each individual and ind refers to an individual in the population.

2.1.5 NEXT GENERATION SELECTION. Assuming each iteration is the next generation, in each generation the fit individual is favoured so: • The fittest individual is passed on as the weight to the next generation. • Every 5 generations, or as per the decided elimination frequency, the individual with the lowest fitness is discarded from the population.

2.2 DISSIPATING GRADIENTS. Magnitude- and gradient-based pruning approaches are popular in post-training pruning (Srivastava et al., 2014; LeCun et al., 1990). It does not make much sense to employ them pre-training because of the randomly generated weights. But in order to reduce error, any network aims to update the weights that influence the results most. Based upon that hypothesis, in each epoch we sum the gradients over iterations and eliminate the weights that are not getting updated. In Equation 3, N is the total number of iterations in an epoch. In Equation 4, ε is 1e-6 for all experiments.

Accumulated_dW = Σ_{i=1}^{N} dW    (3)

W[Accumulated_dW < ε] = 0    (4)

One consideration in this approach is not to run it for too many epochs, which can be only 2 if the image is very monochrome and more than 2 if the gradients are dissipating more slowly. Moreover, once specific weights have reached their optimal learning, their gradients will dissipate, and we don't want to eliminate them.

2.3 COMBINATION DROPOUT. Combination dropout is merely combining kstarts with dissipating gradients. The weights eliminated use both approaches: we fix p for kstarts to a certain value of minimum sparsity and further eliminate the weights that the dissipating gradients method would eliminate as well. The approach achieves better performance than either method alone.

3 DEFINING P-SPARSITY. In a p-sparse network layer, approximately p percent of the connections between two layers are eliminated. Figure 1 shows a fully connected conventional neural network (Figure 1a) and three sparsely connected networks with different values of p (Figures 1b, 1c, 1d).
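Putting Sections 2.1.2-2.1.5 and the connectivity factor together, one generation of kstarts can be sketched in numpy as below. This is a schematic rendering under the definitions above (binary masks with connectivity factor p, fitness as the element sum, elimination every fifth generation); the gradient update on W between generations is omitted, and all names are illustrative.

import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(784, 128))  # weights of the layer in question
K, p = 8, 0.5
# each individual: a binary mask with roughly 1-p of the connections active
masks = [(rng.random(W.shape) > p).astype(float) for _ in range(K)]

for generation in range(20):
    population = [W * m for m in masks]          # Equation 1
    fitness = [ind.sum() for ind in population]  # Equation 2: sum over entries
    W = population[int(np.argmax(fitness))]      # fittest individual becomes W
    if generation % 5 == 4 and len(masks) > 1:   # elimination frequency of 5
        masks.pop(int(np.argmin(fitness)))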
3.1 CONNECTIVITY FACTOR. The connectivity factor, p, determines the percentage of connections to be removed between any two layers of the network. For instance, if p = 0.0 then the network is fully connected, as shown in Figure 1a; if, at the opposite extreme, p = 1.0, then there are no connections between the two layers of neurons. If p = 0.5 (Figure 1b), only approximately 50% of the connections in the network remain. If p = 0.3 (Figure 1c), approximately 70% of the connections still exist, and if p = 0.7 (Figure 1d), a mere 30% of the connections are active between the two network layers. In short, p determines the percentage of 0's in an individual sparse matrix.

4 ALGORITHM. The autoencoder and the two- and three-layered neural networks are trained in a standard manner with the Adam optimizer and batch updates.

4.1 KSTARTS ALGORITHM. Algorithm 1 gives the kstarts method: 1. We have K sparse matrices generated, and we call them KI in Algorithm 1. 2. Every time the weight W needs to be updated, instead of the usual gradient update we add one further step and, using the fitness function, pass the fittest individual as the new W, which biases the network towards that sparse matrix. 3. Every 5 iterations, the individual with the lowest fitness is dropped.

Algorithm 1: k random starts
    input: Data, params, K, p
    output: W, b
    initialize W, b
    KI ← K individuals with approximately 1−p percent active connections
    for maxiterations do
        run neural network
        update weights
        if one individual in KI is left then
            W ← that individual
        else
            W ← the individual with maximum sum (of weights)
            every 5 iterations: pop the individual with minimum sum (of weights) from KI
        end
    end

4.2 DISSIPATING GRADIENTS ALGORITHM. Algorithm 2 is simpler and just eliminates the weights whose sum of gradients stays near zero in the first 1-4 epochs, depending on the desired sparsity.

Algorithm 2: dissipating gradients
    input: Data, params
    output: W, b
    initialize W, b
    for maxepochs do
        for maxiterations do
            run neural network
            accumulated_dW ← accumulated_dW + dW
            update weights
        end
        if accumulated_dW < 0.0001 then accumulated_dW ← 0 else accumulated_dW ← 1
        W ← W ∗ accumulated_dW
    end

5 EXPERIMENTS AND RESULTS. The experiments were performed on two datasets: MNIST (Deng, 2012) and Fashion MNIST (Xiao et al., 2017). The network architectures are a two-layered autoencoder (784-128-64) and a three-layered NN (784-128-100-10), both with sigmoid activations and Adam optimization. The architecture used for the learning curves is a single-layered NN (784-10).

5.1 EFFECT OF INCREASING SPARSITY. As sparsity increases, overall performance decreases. Figure 2 shows the behaviour of the various dropout methods presented in this paper. In the case of random dropout, it is indeed a random shot: either no useful weight is eliminated or multiple crucial weights are eliminated, which decides how well random dropout performs. Kstarts performs slightly better on average with multiple start choices. Depending on how many independent p-sparse networks that can learn well are present in the network, one of them can be identified, given that k is large enough and the fitness function is smartly decided by first examining the weights and gradients. Dissipating gradients works well as long as the network is not learning very fast, i.e., some weights are still being updated in subsequent epochs. It is also the most reliable.
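Algorithm 2 amounts to accumulating gradients over an epoch and zeroing the weights whose accumulated gradient stays below ε. A minimal numpy rendering with a stand-in gradient oracle could read as follows; taking the absolute value of the accumulator is an added assumption to guard against sign cancellation, and the real gradients would of course come from backpropagation.

import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(784, 10))

def grad_step(W):
    # stand-in for the backprop gradient of one mini-batch
    return rng.normal(scale=np.abs(W) * 0.01 + 1e-4)

eps, lr, iters_per_epoch = 1e-6, 0.01, 100
for epoch in range(2):                 # typically only the first 1-4 epochs
    acc = np.zeros_like(W)
    for _ in range(iters_per_epoch):
        dW = grad_step(W)
        acc += dW                      # Equation 3
        W -= lr * dW
    W[np.abs(acc) < eps] = 0.0         # Equation 4: prune stale weights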
The combination works by far the best, because it not only relies on eliminating weights that are not being updated but also uses kstarts. It seems to achieve superior performance as long as the chosen p value is one that kstarts performs well on.
The authors propose two approaches for pruning: (a) "Evolution-style": start with K random masks associated with the weights, update the weights via gradient descent corresponding to those active in the "fittest" mask, and over time throw away the less fit masks until only one remains. (b) "Dissipating-gradients": here those weights are removed which are not being updated much, as measured by the sum of their gradients over a number of iterations. This is shown for elementary networks on MNIST datasets without any serious experiments or comparisons or even presentation.
SP:232edf223e799126992acd9ee04d88c22ff57110
Sparse Linear Networks with a Fixed Butterfly Structure: Theory and Practice
1 INTRODUCTION. A butterfly network (see Figure 6 in Appendix A) is a layered graph connecting a layer of n inputs to a layer of n outputs with O(log n) layers, where each layer contains 2n edges. The edges connecting adjacent layers are organized in disjoint gadgets, each gadget connecting a pair of nodes in one layer with a corresponding pair in the next layer by a complete graph. The distance between pairs doubles from layer to layer. This network structure represents the execution graph of the Fast Fourier Transform (FFT) (Cooley and Tukey, 1965), the Walsh-Hadamard transform, and many important transforms in signal processing that are known to have fast algorithms to compute matrix-vector products. Ailon and Chazelle (2009) showed how to use the Fourier (or Hadamard) transform to perform fast Euclidean dimensionality reduction with Johnson and Lindenstrauss (1984) guarantees. The resulting transformation, called the Fast Johnson Lindenstrauss Transform (FJLT), was improved in subsequent works (Ailon and Liberty, 2009; Krahmer and Ward, 2011). The common theme in this line of work is to define a fast randomized linear transformation that is composed of a random diagonal matrix, followed by a dense orthogonal transformation which can be represented via a butterfly network, followed by a random projection onto a subset of the coordinates (this research is still active, see e.g. Jain et al. (2020)). In particular, an FJLT matrix can be represented (explicitly) by a butterfly network followed by projection onto a random subset of coordinates (a truncation operator). We refer to such a representation as a truncated butterfly network (see Section 4). Simple Johnson-Lindenstrauss-like arguments show that with high probability, for any W ∈ R^{n2×n1} and any x ∈ R^{n1}, Wx is close to (J2^T J2) W (J1^T J1) x, where J1 ∈ R^{k1×n1} and J2 ∈ R^{k2×n2} are both FJLTs, with k1 = log n1 and k2 = log n2 (see Section 4.2 for details). Motivated by this, we propose to replace a dense (fully-connected) linear layer of size n2 × n1 in any neural network by the following architecture: J2^T W′ J1, where J1, J2 can be represented by truncated butterfly networks and W′ is a k2 × k1 dense linear layer. The clear advantages of such a strategy are: (1) almost all choices of the weights from a specific distribution, namely the one mimicking FJLT, preserve accuracy while reducing the number of parameters, and (2) the number of weights is nearly linear in the layer width of W (the original matrix). Our empirical results demonstrate that this offers faster training and prediction in deployment while producing results that match and often outperform existing known architectures. Compressing neural networks by replacing linear layers with structured linear transforms that are expressed by fewer parameters has been studied extensively in the recent past. We compare our approach with these related works in Section 3.
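Before turning to the optimization questions this structure raises, the truncated butterfly network itself is easy to make concrete. The numpy sketch below builds log2(n) butterfly factors whose 2×2 gadgets carry trainable weights (pair distance doubling per layer, 2n nonzeros per factor) and then truncates to a coordinate subset. Dense matrices are used only for readability; a practical implementation would exploit the sparsity for O(n log n) products, and all names here are illustrative.

import numpy as np

def butterfly_factors(n, rng):
    # n is a power of two; one factor per level, each row holding a 2-entry gadget
    factors = []
    for level in range(int(np.log2(n))):
        stride = 2 ** level            # pair distance doubles from layer to layer
        B = np.zeros((n, n))
        for i in range(n):
            j = i ^ stride             # partner index at this level
            B[i, i], B[i, j] = rng.normal(size=2)  # trainable gadget weights
        factors.append(B)
    return factors

def truncated_butterfly(x, factors, keep):
    for B in factors:
        x = B @ x
    return x[keep]                     # projection onto a subset of coordinates

rng = np.random.default_rng(0)
n, k = 16, 4
factors = butterfly_factors(n, rng)
keep = rng.choice(n, size=k, replace=False)
print(truncated_butterfly(rng.normal(size=n), factors, keep))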
Since the butterfly structure adds logarithmic depth to the architecture, it might pose optimization-related issues. Moreover, the sparse structure of the matrices connecting the layers in a butterfly network defies the general theoretical analysis of convergence of deep linear networks. We take a small step towards understanding these issues by studying the optimization landscape of an encoder-decoder network (a two-layer linear neural network) where the encoder layer is replaced by a truncated butterfly network followed by a dense linear layer in fewer parameters. This replacement is motivated by the result of Sarlós (2006) related to fast randomized low-rank approximation of matrices using FJLT (see Section 4.2 for details). We consider this replacement, instead of the architecture consisting of two butterfly networks and a dense linear layer as proposed earlier, because it is easier to analyze theoretically. We also empirically demonstrate that our new network with fewer parameters performs as well as an encoder-decoder network. The encoder-decoder network computes the best low-rank approximation of the input matrix. It is well known that with high probability a close-to-optimal low-rank approximation of a matrix is obtained by either pre-processing the matrix with an FJLT (Sarlós, 2006) or a random sparse matrix structured as given in Clarkson and Woodruff (2009), and then computing the best low-rank approximation from the rows of the resulting matrix.[1] A recent work by Indyk et al. (2019) studies this problem in the supervised setting, where they find the best pre-processing matrix structured as given in Clarkson and Woodruff (2009) from a sample of matrices (instead of using a random sparse matrix). Since an FJLT can be represented by a truncated butterfly network, we emulate the setting of Indyk et al. (2019) but learn the pre-processing matrix structured as a truncated butterfly network.

2 OUR CONTRIBUTION AND POTENTIAL IMPACT. We provide an empirical report, together with a theoretical analysis, to justify our main idea of using sparse linear layers with a fixed butterfly structure in deep learning. Our findings indicate that this approach, which is well rooted in the theory of matrix approximation and optimization, can offer significant speedup and energy saving in deep learning applications. Additionally, we believe that this work will encourage more experiments and theoretical analysis to better understand the optimization and generalization of our proposed architecture (see the Future Work section). On the empirical side: the outcomes of the following experiments are reported. (1) In Section 6.1, we replace a dense linear layer in standard state-of-the-art networks, for both image and language data, with an architecture that constitutes the composition of (a) a truncated butterfly network, (b) a dense linear layer in smaller dimension, and (c) a transposed truncated butterfly network (see Section 4.2). The structure parameters are chosen so as to keep the number of weights near linear (instead of quadratic). (2) In Sections 6.2 and 6.3, we train a linear encoder-decoder network in which the encoder is replaced by a truncated butterfly network followed by a dense linear layer in smaller dimension. These experiments support our theoretical result. The network structure parameters are chosen so as to keep the number of weights in the (replaced) encoder near linear in the input dimension. Our results (also theoretically) demonstrate that this has little to no effect on the performance compared to the standard encoder-decoder network.
(3) In Section 7, we learn the best pre-processing matrix structured as a truncated butterfly network to perform low-rank matrix approximation from a given sample of matrices. We compare our results to those of Indyk et al. (2019), who learn the pre-processing matrix structured as given in Clarkson and Woodruff (2009).

[1] The pre-processing matrix is multiplied from the left.

On the theoretical side: the optimization landscape of linear neural networks with dense matrices has been studied by Baldi and Hornik (1989) and Kawaguchi (2016). The theoretical part of this work studies the optimization landscape of the linear encoder-decoder network in which the encoder is replaced by a truncated butterfly network followed by a dense linear layer in smaller dimension. We call such a network an encoder-decoder butterfly network. We give an overview of our main result, Theorem 1, here. Let X ∈ R^{n×d} and Y ∈ R^{m×d} be the data and output matrices, respectively. Then the encoder-decoder butterfly network is given as Ŷ = DEBX, where D ∈ R^{m×k} and E ∈ R^{k×ℓ} are dense layers, B is an ℓ × n truncated butterfly network (a product of log n sparse matrices), and k ≤ ℓ ≤ m ≤ n (see Section 5). The objective is to learn D, E, and B that minimize ||Y − Ŷ||_F^2. Theorem 1 shows how the loss at the critical points of such a network depends on the eigenvalues of the matrix Σ = Y X^T B^T (B X X^T B^T)^{−1} B X Y^T.[2] In comparison, the loss at the critical points of the encoder-decoder network (without the butterfly network) depends on the eigenvalues of the matrix Σ′ = Y X^T (X X^T)^{−1} X Y^T (Baldi and Hornik, 1989). In particular, the loss depends on how the learned matrix B changes the eigenvalues of Σ′. If we learn only an optimal D and E, keeping B fixed (as done in the experiment in Section 6.3), then it follows from Theorem 1 that every local minimum is a global minimum and that the loss at the local/global minima depends on how B changes the top k eigenvalues of Σ′. This inference, together with a result by Sarlós (2006), is used to give a worst-case guarantee in the special case when Y = X (auto-encoders, which capture PCA; see below Theorem 1).
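The quantity governing Theorem 1 can be computed directly, which is useful for sanity checks. The sketch below evaluates Σ for random data and a random stand-in B (a real experiment would use a trained truncated butterfly network); since Σ is symmetric positive semi-definite, its eigenvalues are obtained with a symmetric eigensolver.

import numpy as np

rng = np.random.default_rng(0)
n, d, m, ell, k = 16, 40, 8, 6, 4
X = rng.normal(size=(n, d))
Y = rng.normal(size=(m, d))
B = rng.normal(size=(ell, n))   # stand-in for a truncated butterfly network

BX = B @ X
Sigma = Y @ BX.T @ np.linalg.inv(BX @ BX.T) @ BX @ Y.T
top_k = np.sort(np.linalg.eigvalsh(Sigma))[::-1][:k]
# With B fixed, the loss at the local/global minima is governed by how B
# shifts these top-k eigenvalues relative to those of Sigma' (B = identity).
print(top_k)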
3 RELATED WORK. Important transforms like the discrete Fourier, discrete cosine, Hadamard, and many more satisfy a property called the complementary low-rank property, recently defined by Li et al. (2015). For an n×n matrix satisfying this property, which concerns the approximation of specific sub-matrices by low-rank matrices, Michielssen and Boag (1996) and O'Neil et al. (2010) developed the butterfly algorithm to compute the product of such a matrix with a vector in O(n log n) time. The butterfly algorithm factorizes such a matrix into O(log n) matrices, each with O(n) sparsity. In general, the butterfly algorithm has a pre-computation stage which requires O(n^2) time (O'Neil et al., 2010; Seljebotn, 2012). With the objective of reducing the pre-computation cost, Li et al. (2015); Li and Yang (2017) compute the butterfly factorization for an n × n matrix satisfying the complementary low-rank property in O(n^{3/2}) time. This line of work does not learn butterfly representations for matrices or apply them in neural networks, and is incomparable to our work. A few works in the past have used deep learning models with structured matrices (as hidden layers). Such structured matrices can be described using fewer parameters compared to a dense matrix, and hence a representation can be learned by optimizing over a smaller number of parameters. Examples of structured matrices used include low-rank matrices (Denil et al., 2013; Sainath et al., 2013), circulant matrices (Cheng et al., 2015; Ding et al., 2017), low-distortion projections (Yang et al., 2015), Toeplitz-like matrices (Sindhwani et al., 2015; Lu et al., 2016; Ye et al., 2018), Fourier-related transforms (Moczulski et al., 2016), and matrices with low displacement rank (Thomas et al., 2018). Recently, Alizadeh et al. (2020) demonstrated the benefits of replacing the pointwise convolutional layer in CNNs by a butterfly network. Other works by Mocanu et al. (2018); Lee et al. (2019); Wang et al. (2020); Verdenius et al. (2020) consider a different approach to sparsify neural networks. The works closest to ours are by Yang et al. (2015), Moczulski et al. (2016), and Dao et al. (2020), and we make a comparison below. Yang et al. (2015) and Moczulski et al. (2016) attempt to replace dense linear layers with a stack of structured matrices, including a butterfly structure (the Hadamard or the Cosine transform), but they do not place trainable weights on the edges of the butterfly structure as we do. Note that adding these trainable weights does not compromise the run-time benefits in prediction, while adding to the expressiveness of the network in our case. Dao et al. (2020) replace handcrafted structured subnetworks in machine learning models by a kaleidoscope layer, which consists of compositions of butterfly matrices. This is motivated by the fact that the kaleidoscope hierarchy captures a structured matrix exactly and optimally in terms of the multiplication operations required to perform the matrix-vector product operation. Their work differs from ours, as we propose to replace any dense linear layer in a neural network (instead of a structured sub-network) by the architecture proposed in Section 4.2. Our approach is motivated by theoretical results which establish that this can be done with almost no loss in representation. Finally, Dao et al. (2019) show that butterfly representations of the standard transformations mentioned above, like the discrete Fourier, discrete cosine, and Hadamard transforms, can be learnt efficiently. They additionally show the following: a) for the benchmark task of compressing a single-hidden-layer model, they compare a network consisting of a composition of butterfly networks with the classification accuracy of a fully-connected linear layer, and b) in ResNet, a butterfly sub-network is added to get an improved result. In comparison, our approach of replacing a dense linear layer by the architecture proposed in Section 4.2 is motivated by well-known theoretical results as mentioned previously, and the results of the comprehensive list of experiments in Section 6.1 support our proposed method.

[2] At a critical point, the gradient of the loss function with respect to the parameters in the network is zero.
The paper studies "butterfly networks", where a logarithmic number of linear layers with sparse connections resembling the butterfly structure of the FFT algorithm, together with linear layers in smaller dimensions, are used to approximate linear layers in larger dimensions. In general, the paper follows the idea of sketching to design new architectures that can reduce the number of trainable parameters. In that regard, the paper is very appealing, as it shows that replacing linear layers with butterfly networks does not result in any loss in performance.
SP:bb0b99194e5d102320ca4cc7c89c4ae6ee514d83
Provable Rich Observation Reinforcement Learning with Combinatorial Latent States
1 INTRODUCTION. Most reinforcement learning (RL) algorithms scale polynomially with the size of the state space, which is inadequate for many real-world applications. Consider, for example, a simple navigation task in a room with furniture, where the set of furniture pieces and their locations change from episode to episode. If we crudely approximate the room as a 10 × 10 grid and consider each element in the grid to contain a single bit of information about the presence of furniture, then we end up with a state space of size 2^100, as each element of the grid can be filled independently of the others. This is intractable for RL algorithms that depend polynomially on the size of the state space. The notion of factorization allows tractable solutions to be developed. For the above example, the room can be considered a state with 100 factors, where the next value of each factor depends on just a few other parent factors and the action taken by the agent. Learning in factored Markov Decision Processes (MDPs) has been studied extensively (Kearns & Koller, 1999; Guestrin et al., 2003; Osband & Van Roy, 2014), with tractable solutions scaling linearly in the number of factors and exponentially in the number of parent factors whenever planning can be done efficiently. However, factorization alone is inadequate, since the agent may not have access to the underlying factored state space, instead only receiving a rich observation of the world. In our room example, the agent may have access to an image of the room taken from a megapixel camera instead of the grid representation. Naively treating each pixel of the image as a factor suggests there are over a million factors and a prohibitively large number of parent factors for each pixel. Counterintuitively, thinking of the observation as the state in this way leads to the conclusion that problems become harder as the camera resolution increases or other sensors are added. It is entirely possible that these pixels (or, more generally, observation atoms) are generated by a small number of latent factors with a small number of parent factors. This motivates us to ask: can we achieve PAC RL guarantees that depend polynomially on the number of latent factors and very weakly (e.g., logarithmically) on the size of the observation space? Recent work has addressed this for a rich-observation setting with a non-factored latent state space when certain supervised learning problems are tractable (Du et al., 2019; Misra et al., 2020; Agarwal et al., 2020). However, addressing the rich-observation setting with a latent factored state space has remained elusive. Specifically, ignoring the factored structure in the latent space or treating observation atoms as factors yields intractable solutions.

Contributions. We combine two threads of research on rich-observation RL and factored MDPs by proposing a new problem setup called Factored Block MDP (Section 2). In this setup, observations are emitted by latent states that obey the dynamics of a factored MDP. We assume observations to be composed of atoms (which can be pixels for an image) that are emitted by the latent factors. A single factor can emit a large number of atoms, but no two factors can control the same atom. Following the existing rich-observation RL literature, we assume observations are rich enough to decode the current latent state.
We introduce an algorithm FactoRL that achieves the desired guarantees for a large class of Factored Block MDPs under certain computational and realizability assumptions (Section 4). The main challenge that FactoRL handles is to map atoms to the parent factor that emits them. We achieve this by reducing the identification problem to solving a set of independence test problems with distributions satisfying certain properties. We perform independence tests in a domain-agnostic setting using noise-contrastive learning (Section 3). Once we have mapped atoms to their parent factors, FactoRL then decodes the factors, estimates the model, recovers the latent structure in the transition dynamics, and learns a set of exploration policies. Figure 1 shows the different steps of FactoRL. This provides us with enough tools to visualize the latent dynamics and plan for any given reward function. Due to the space limit, we defer the discussion of related work to Appendix B. To the best of our knowledge, our work represents the first provable solution to rich-observation RL with a combinatorially large latent state space.

2 THE FACTORED BLOCK MDP SETTING. There are many possible ways to add rich observations to a factored MDP that result in inapplicability or intractability. Our goal here is to define a problem setting that is tractable to solve and covers potential real-world problems. We start with the definition of a Factored MDP (Kearns & Koller, 1999), but first review some useful notation.

Notation: For any n ∈ N, we use [n] to denote the set {1, 2, · · · , n}. For any ordered set (or vector) U of size n and an ordered index set I ⊆ [n] of length k, we use the notation U[I] to denote the ordered set (U[I[1]], U[I[2]], · · · , U[I[k]]).

Definition 1. A Factored MDP (S, A, T, R, H) consists of a d-dimensional discrete state space S ⊆ {0, 1}^d, a finite action space A, an unknown transition function T : S × A → Δ(S), an unknown reward function R : S × A → [0, 1], and a time horizon H. Each state s ∈ S consists of d factors, with the i-th factor denoted s[i]. The transition function satisfies T(s′ | s, a) = Π_{i=1}^{d} T_i(s′[i] | s[pt(i)], a) for every s, s′ ∈ S and a ∈ A, where T_i : {0, 1}^{|pt(i)|} × A → Δ({0, 1}) defines a factored transition distribution and a parent function pt : [d] → 2^{[d]} defines the set of parent factors that can influence a factor at the next timestep. We assume a deterministic start state.

We also assume, without loss of generality, that each state and observation is reachable at exactly one time step. This can easily be accomplished by concatenating the time step information to states and observations. This allows us to write the state space as S = (S_1, S_2, · · · , S_H), where S_h is the set of states reachable at time step h.
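To make the factorization in Definition 1 concrete, the toy sketch below samples one transition factor-by-factor from randomly generated parent sets and conditional tables; the encoding of parent configurations and all parameter choices are illustrative, not part of the paper's construction.

import numpy as np

rng = np.random.default_rng(0)
d, num_actions, kappa = 6, 3, 2
parents = [rng.choice(d, size=kappa, replace=False) for _ in range(d)]
# tables[i][c, a]: probability that factor i becomes 1 given parent config c and action a
tables = [rng.random((2 ** kappa, num_actions)) for _ in range(d)]

def step(s, a):
    s_next = np.zeros(d, dtype=int)
    for i in range(d):
        c = int("".join(str(b) for b in s[parents[i]]), 2)  # encode s[pt(i)] as an index
        s_next[i] = int(rng.random() < tables[i][c, a])     # draw s'[i] ~ T_i
    return s_next

s = np.zeros(d, dtype=int)  # deterministic start state
for a in rng.integers(0, num_actions, size=5):
    s = step(s, a)
print(s)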
As noted above, the dependence on κ in the exponent is unavoidable, as we have to find the parent factors from all (d choose κ) possible combinations, as well as learn the model for all possible values of the parent factors. However, for real-world problems we expect κ to be a small constant such as 2. This yields a significant improvement: for example, if κ = 2 and d = 100, then d^κ = 10^4 while 2^d ≈ 10^30. Based on the definition of a Factored MDP, we define the main problem setup of this paper, called Factored Block MDP, where the agent does not observe the state but instead receives an observation containing enough information to decode the latent state. Definition 2. A Factored Block MDP consists of an observation space X = X̌^m and a latent state space S ⊆ {0,1}^d. A single observation x ∈ X is made of m atoms, with the k-th denoted by x[k] ∈ X̌. Observations are generated stochastically given a latent state s ∈ S according to a factored emission function q(x | s) = ∏_{i=1}^{d} q_i(x[ch(i)] | s[i]), where q_i : {0,1} → ∆(X̌^{|ch(i)|}) and ch : [d] → 2^[m] is a child function satisfying ch(i) ∩ ch(j) = ∅ whenever i ≠ j. The emission function satisfies the disjointness property: for every i ∈ [d], we have supp(q_i(· | 0)) ∩ supp(q_i(· | 1)) = ∅. (The notation supp(p) denotes the support of the distribution p; formally, supp(p) = {z | p(z) > 0}.) The dynamics of the latent state space follow a Factored MDP (S, A, T, R, H) with parent function pt and a deterministic start state. The notion of atoms generalizes commonly used abstractions. For example, if the observation is an image, then atoms can be individual pixels or superpixels, and if the observation is a natural language text, then atoms can be individual letters or words. We make no assumption about the structure of the atom space X̌ or its size, which can be infinite. An agent is responsible for mapping each observation x ∈ X to individual atoms (x[1], ..., x[m]) ∈ X̌^m. For the two examples above, this mapping is routinely performed in practice. If the observation is a text presented to the agent as a string, then it can use an off-the-shelf tokenizer to map it to a sequence of tokens (atoms). Similar to states, we assume the sets of observations reachable at different time steps are disjoint. Additionally, we also allow the parent (pt) and child (ch) functions to change across time steps. We denote these functions at time step h by pt_h and ch_h. The disjointness property was introduced in Du et al. (2019) for Block MDPs, a class of rich-observation non-factorized MDPs. This property removes partial observability concerns and enables tractable learning. We expect this property to hold in real-world problems whenever sufficient sensor data is available to decode the state from the observation. For example, disjointness holds true for the navigation task with an overhead camera in Figure 1. In this case, the image provides us with enough information to locate all objects in the room, which describes the agent's state. Disjointness allows us to define a decoder φ*_i : X̌^{|ch(i)|} → {0,1} for every factor i ∈ [d], such that φ*_i(x[ch(i)]) = s[i] if x[ch(i)] ∈ supp(q_i(· | s[i])). We define the shorthand φ*_i(x) = φ*_i(x[ch(i)]) whenever ch is clear from the context. Lastly, we define the state decoder φ* : X → {0,1}^d, where φ*(x)[i] = φ*_i(x). The agent interacts with the environment by taking actions according to a policy π : X → ∆(A).
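The disjointness property is what makes the per-factor decoders well defined. A minimal sketch of the resulting decoding structure follows; the function signatures are our own illustrative choices.

    import numpy as np

    def decode_state(x_atoms, ch, factor_decoders):
        """Apply phi*: map an observation's atoms to the latent state.

        x_atoms         : list of m atoms
        ch              : list of index lists, ch[i] = atoms emitted by factor i
        factor_decoders : list of callables, factor_decoders[i](atoms) -> {0, 1}
        """
        d = len(ch)
        s = np.zeros(d, dtype=int)
        for i in range(d):
            # Disjointness: ch partitions the atoms, so phi*_i only ever
            # sees atoms emitted by factor i.
            s[i] = factor_decoders[i]([x_atoms[k] for k in ch[i]])
        return s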
The agent's interactions with the environment consist of episodes {s_1, x_1, a_1, r_1, s_2, x_2, a_2, r_2, ..., a_H, s_H}, with s_1 = 0 (the all-zeros start state), x_h ∼ q(· | s_h), r_h = R(x_h, a_h), and s_{h+1} ∼ T(· | s_h, a_h). The agent never observes {s_1, ..., s_H}. Technical Assumptions. We make two assumptions that are specific to the FactoRL algorithm. The first is a margin assumption on the transition dynamics that enables us to identify different values of a factor. This assumption was introduced by Du et al. (2019), and we adapt it to our setting. Assumption 1 (Margin Assumption). For every h ∈ {2, 3, ..., H} and i ∈ [d], let u_i be the uniform distribution jointly over actions and all possible reachable values of s_{h−1}[pt(i)]. Then we assume: ‖P_{u_i}(·, · | s_h[i] = 1) − P_{u_i}(·, · | s_h[i] = 0)‖_TV ≥ σ, where P_{u_i}(s_{h−1}[pt(i)], a | s_h[i]) is the backward dynamics denoting the probability over parent values and the last action given s_h[i] and roll-in distribution u_i, and σ > 0 is the margin. Assumption 1 captures a large set of problems, including all deterministic problems, for which the value of σ is 1. Assumption 1 helps us identify the different values of a factor, but it does not help with mapping atoms to the factors from which they are emitted. In order to identify whether two atoms come from the same factor, we make the following additional assumption to measure their dependence. Assumption 2 (Atom Dependency Bound). For any h ∈ [H] and u, v ∈ [m] with u ≠ v, if ch^{−1}(u) = ch^{−1}(v), i.e., atoms x_h[u] and x_h[v] are emitted by the same factor, then under any distribution D ∈ ∆(S_h) we have ‖P_D(x_h[u], x_h[v]) − P_D(x_h[u]) P_D(x_h[v])‖_TV ≥ β_min. The dependency assumption states that atoms emitted from the same factor will be correlated. This is true for many real-world problems. For example, consider a toy grid-based navigation task. Each state factor s[i] represents a cell in the grid, which can be empty (s[i] = 0) or occupied (s[i] = 1). In the latter case, a box randomly sampled from the set {red box, yellow box, black box} occupies its place. We expect Assumption 2 to hold in this case, as pixels emitted from the same factor come from the same object and hence will be correlated. More specifically, if one pixel is red in color, then another pixel from the same cell will also be red, as the object occupying the cell is a red box. This assumption does not remove the key challenge in identifying factors, as atoms from different factors can still be dependent due to actions and state distributions from previous time steps. Model Class. We use two regressor classes, F and G. The first regressor class F : X̌ × X̌ → [0,1] takes a pair of atoms and outputs a scalar in [0,1]. To define the second class, we first define a decoder class Φ : X̌* → {0,1}. We allow this class to be defined on any set of atoms. This is motivated by empirical research where commonly used neural network models operate on inputs of arbitrary lengths. For example, the LSTM model can operate on a text of arbitrary length (Sundermeyer et al., 2012). However, this is without loss of generality, as we can define a different model class for different numbers of atoms. We also define a model class U : X × A × {0,1} → [0,1].
Finally, we define the regressor class G : X × A × X̌* → [0,1] as {(x, a, x̌) ↦ u(x, a, φ(x̌)) | u ∈ U, φ ∈ Φ}. We assume F and G are finite classes and derive sample complexity guarantees that scale as log |F| and log |G|. However, since we only use uniform convergence arguments, extending the guarantees to other statistical complexity measures such as Rademacher complexity is straightforward. Let Π_all denote the set of all non-stationary policies ϕ : S → A. We then define the class of policies Π : X → A by {x ↦ ϕ(φ*(x)) | ϕ ∈ Π_all}, which we use later to define our task. We use P_π[E] to denote the probability of an event E under the distribution over episodes induced by policy π. Computational Oracle. We assume access to two regression oracles, REG, for the model classes F and G. Let D_1 be a dataset of triplets (x[u], x[v], y), where u, v denote two different atoms and y ∈ {0,1}. Similarly, let D_2 be a dataset of quads (x, a, x̌, y), where x ∈ X, a ∈ A, x̌ ∈ X̌*, and y ∈ {0,1}. Lastly, let Ê_D[·] denote the empirical mean over dataset D. The two computational oracles compute: REG(D_1, F) = arg min_{f∈F} Ê_{D_1}[(f(x[u], x[v]) − y)^2], REG(D_2, G) = arg min_{g∈G} Ê_{D_2}[(g(x, a, x̌) − y)^2]. We also assume access to a ∆_pl-optimal planning oracle, planner. Let Ŝ = (Ŝ_1, ..., Ŝ_H) be a learned state space, let T̂ = (T̂_1, ..., T̂_H) with T̂_h : Ŝ_{h−1} × A → ∆(Ŝ_h) be the learned dynamics, and let R̂ : Ŝ × A → [0,1] be a given reward function. Let ϕ : Ŝ → A be a policy and V(ϕ; T̂, R̂) be the policy value. Then for any ∆_pl > 0 the output of the planner, ϕ̂ = planner(T̂, R̂, ∆_pl), satisfies V(ϕ̂; T̂, R̂) ≥ sup_ϕ V(ϕ; T̂, R̂) − ∆_pl, where the supremum is taken over policies of type Ŝ → A. Task Definition. We focus on a reward-free setting with the goal of learning a state decoder and estimating the latent dynamics T. Since the state space is exponentially large, we cannot visit every state. However, the factorization property allows us to estimate the model by reaching factor values. In fact, we show that controlling the values of at most 2κ factors is sufficient for learning the model. Let C_{≤k}(U) denote the space of all sets containing at most k different elements selected from the set U, including ∅. We define the reachability probability η_h(K, Z) for a given h ∈ [H], K ⊆ [d], and Z ∈ {0,1}^{|K|}, and the reachability parameter η_min, as: η_h(K, Z) := sup_{π∈Π_all} P_π(s_h[K] = Z), η_min := inf_{h∈[H]} inf_{s∈S_h} inf_{K∈C_{≤2κ}([d])} η_h(K, s[K]). Our sample complexity scales polynomially with η_min^{−1}. Note that we only require that if s_h[K] = Z is reachable, then it is reachable with probability at least η_min, i.e., either η_h(K, Z) = 0 or it is at least η_min. These requirements are similar to those made by earlier work for non-factored state spaces (Du et al., 2019; Misra et al., 2020). The key difference is that instead of requiring every state to be reachable with probability η_min, we only require a small set of factor values to be reachable. For reference, if every policy induces a uniform distribution over S = {0,1}^d, then the probability of visiting any given state is 2^{−d}, but the probability of two factors taking certain values is 0.25. This gives us a more practical value for η_min.
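The role of the first oracle, REG(D_1, F), is to power the independence tests behind Assumption 2. The paper performs these tests in a domain-agnostic way with noise-contrastive learning; for intuition only, here is a simplified stand-in for scalar atoms that directly estimates the total-variation gap between the joint distribution and the product of marginals. The binning scheme and bin count are our own choices.

    import numpy as np

    def empirical_tv_dependence(u_vals, v_vals, n_bins=8):
        """Plug-in estimate of ||P(x[u], x[v]) - P(x[u]) P(x[v])||_TV.

        Assumption 2 says this quantity is at least beta_min whenever the
        two atoms are emitted by the same factor.
        """
        edges_u = np.histogram_bin_edges(u_vals, bins=n_bins)
        edges_v = np.histogram_bin_edges(v_vals, bins=n_bins)
        u_idx = np.digitize(u_vals, edges_u)      # bin indices in 0..n_bins+1
        v_idx = np.digitize(v_vals, edges_v)
        joint = np.zeros((n_bins + 2, n_bins + 2))
        np.add.at(joint, (u_idx, v_idx), 1.0)     # empirical joint counts
        joint /= joint.sum()
        product = np.outer(joint.sum(axis=1), joint.sum(axis=0))
        return 0.5 * np.abs(joint - product).sum()  # TV = 0.5 * L1 distance

Thresholding this statistic at, say, beta_min / 2 gives a rough rule for grouping atoms by factor; the learned, noise-contrastive version replaces the histogram with a regressor from F.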
Besides estimating the dynamics and learning a decoder, we also learn an α-policy cover to enable exploration of the different reachable values of factors. We define this below: Definition 3 (Policy Cover). A set of policies Ψ is an α-policy cover of S_h, for any α > 0 and h, if: ∀s ∈ S_h, K ∈ C_{≤2κ}([d]): sup_{π∈Ψ} P_π(s_h[K] = s[K]) ≥ α · η_h(K, s[K]).
The paper considers the problem of partitioning the atoms (e.g., pixels of an image) of a reinforcement learning task into latent state factors (e.g., a grid that determines whether there is furniture in each cell). The number of states grows exponentially with the number of cells of the grid, so algorithms that are polynomial in the number of states are not efficient. The paper considers the factored block Markov decision process (MDP) model and adds a few more assumptions. Generally, this model and these assumptions guarantee that the cells of the grid partition the atoms (i.e., each atom depends on only one cell), that the atoms in a cell are dependent (in the probabilistic sense), that the conditional distributions of the parent values and the action, given that the next-state factor is 0 versus 1, are well separated (i.e., their total-variation distance is bounded away from zero), and that the regressor classes that are used are realizable. The paper shows that this is enough to give an algorithm that partitions the atoms in each step with high probability and whose time complexity is polynomial in the number of cells and logarithmic in the number of atoms.
SP:fb0eda1f20d9b0a63164e96a2bf9ab4bee365eea
A Large-scale Study on Training Sample Memorization in Generative Modeling
Many recent developments on generative models for natural images have relied on heuristically-motivated metrics that can be easily gamed by memorizing a small sample from the true distribution or training a model directly to improve the metric . In this work , we critically evaluate the gameability of such metrics by running a competition that ultimately resulted in participants attempting to cheat . Our competition received over 11000 submitted models and allowed us to investigate both intentional and unintentional memorization . To stop intentional memorization , we propose the “ Memorization-Informed Fréchet Inception Distance ” ( MiFID ) as a new memorization-aware metric and design benchmark procedures to ensure that winning submissions made genuine improvements in perceptual quality . Furthermore , we manually inspect the code for the 1000 top-performing models to understand and label different forms of memorization . The inspection reveals that unintentional memorization is a serious and common issue in popular generative models . The generated images and our memorization labels of those models as well as code to compute MiFID are released to facilitate future studies on benchmarking generative models . 1 INTRODUCTION . Recent work on generative models for natural images has produced huge improvements in image quality , with some models producing samples that can be indistinguishable from real images ( Karras et al. , 2017 ; 2019a ; b ; Brock et al. , 2018 ; Kingma & Dhariwal , 2018 ; Maaløe et al. , 2019 ; Menick & Kalchbrenner , 2018 ; Razavi et al. , 2019 ) . Improved sample quality is important for tasks like super-resolution ( Ledig et al. , 2017 ) and inpainting ( Yu et al. , 2019 ) , as well as creative applications ( Park et al. , 2019 ; Isola et al. , 2017 ; Zhu et al. , 2017a ; b ) . These developments have also led to useful algorithmic advances on other downstream tasks such as semi-supervised learning ( Kingma et al. , 2014 ; Odena , 2016 ; Salimans et al. , 2016 ; Izmailov et al. , 2019 ) or representation learning ( Dumoulin et al. , 2016 ; Donahue et al. , 2016 ; Donahue & Simonyan , 2019 ) . Modern generative models utilize a variety of underlying frameworks , including autoregressive models ( Oord et al. , 2016 ) , Generative Adversarial Networks ( GANs ; Goodfellow et al. , 2014 ) , flow-based models ( Dinh et al. , 2014 ; Rezende & Mohamed , 2015 ) , and Variational Autoencoders ( VAEs ; Kingma & Welling , 2013 ; Rezende et al. , 2014 ) . This diversity of approaches , combined with the philosophical nature of evaluating generative performance , has prompted the development of heuristically-motivated metrics designed to measure the perceptual quality of generated samples such as the Inception Score ( IS ; Salimans et al. , 2016 ) or the Fréchet Inception Distance ( FID ; Heusel et al. , 2017 ) . These metrics are used in a benchmarking procedure where “ state-of-the-art ” results are claimed based on a better score on standard datasets . Indeed , much recent progress in the field of machine learning as a whole has relied on useful benchmarks on which researchers can compare results . Specifically , improvements on the benchmark metric should reflect improvements towards a useful and nontrivial goal . Evaluation of the metric should be a straightforward and well-defined procedure so that results can be reliably compared . For example , the ImageNet Large-Scale Visual Recognition Challenge ( Deng et al. , 2009 ; Russakovsky et al. 
, 2015) has a useful goal (classify objects in natural images) and a well-defined evaluation procedure (top-1 and top-5 accuracy of the model's predictions). Sure enough, the ImageNet benchmark has facilitated the development of dramatically better image classification models, which have proven to be extremely impactful across a wide variety of applications. Unfortunately, some of the commonly-used benchmark metrics for generative models of natural images do not satisfy the aforementioned properties. For instance, although the IS has been demonstrated to correlate well with human-perceived image quality (Salimans et al., 2016), Barratt & Sharma (2018) point out several flaws of the IS when used as a single metric for evaluating generative modeling performance, including its sensitivity to pretrained model weights, which undermines generalization capability. Separately, directly optimizing a model to improve the IS can result in extremely unrealistic-looking images (Barratt & Sharma, 2018) despite yielding a better score. It is also well known that if a generative model memorizes images from the training set (i.e., produces non-novel images), it will achieve a good IS (Gulrajani et al., 2018). On the other hand, the FID is widely accepted as an improvement over the IS due to its better consistency under perturbation (Heusel et al., 2017). However, there is no clear evidence of the FID resolving any of the flaws of the IS. A large-scale empirical study is necessary to provide robust support for understanding quantitatively how flawed the FID is. Motivated by these issues, we want to benchmark generative models in the "real world", i.e., outside of the research community, by holding a public machine learning competition. To the best of our knowledge, no large-scale generative modeling competition has ever been held, possibly due to the immense difficulty of identifying training sample memorization in an efficient and scalable manner. We designed a more rigorous procedure for evaluating competition submissions, including a memorization-aware variant of FID for autonomously detecting cheating via intentional memorization. We also manually inspected the code of the top 1000 submissions to reveal different forms of intentional or unintentional cheating, to ensure that the winning submissions reflect meaningful improvements, and to confirm the efficacy of our proposed metric. We hope that the success of the first-ever generative modeling competition can serve as a future reference and stimulate more research in developing better generative modeling benchmarks. Our main goal in this paper is to conduct an empirical study on the issues of relying on the FID as a benchmark metric to guide the progression of generative modeling. In Section 2, we briefly review the metrics and challenges of evaluating generative models. In Section 3, we explain in detail the competition design choices and propose a novel benchmarking metric, the Memorization-Informed Fréchet Inception Distance (MiFID). We show that MiFID enables fast profiling of participants that intentionally memorize the training dataset. In Section 4, we introduce a dataset released along with this paper that includes over one hundred million generated images and manual labels obtained by painstaking code review.
In Section 5, we connect phenomena observed in large-scale benchmarking of generative models in the real world back to the research community and point out crucial but neglected flaws in the FID. 2 BACKGROUND. In generative modeling, our goal is to produce a model p_θ(x) (parameterized by θ) of some true distribution p(x). We are not given direct access to p(x); instead, we are provided only with samples drawn from it, x ∼ p(x). In this paper, we will assume that samples x from p(x) are 64-by-64 pixel natural images, i.e., x ∈ R^{64×64×3}. A common approach is to optimize θ so that p_θ(x) assigns high likelihood to samples from p(x). This provides a natural evaluation procedure, which measures the likelihood assigned by p_θ(x) to samples from p(x) that were held out during the optimization of θ. However, not all models facilitate exact computation of likelihoods. Notably, Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) learn an "implicit" model of p(x) from which we can draw samples, but which does not provide an exact value (or even an estimate) of the likelihood for a given sample. The GAN framework has proven particularly successful at learning models which can generate extremely realistic and high-resolution images, which leads to a natural question: How should we evaluate the quality of a generative model if we cannot compute the likelihood assigned to held-out samples? This question has led to the development of many alternative ways to evaluate generative models (Borji, 2019). A historically popular metric, proposed in (Salimans et al., 2016), is the Inception Score (IS), which computes IS(p_θ) = E_{x∼p_θ(x)}[D_KL(IN(y|x) ‖ IN(y))], where IN(y|x) is the conditional probability of a class label y assigned to a datapoint x by a pretrained Inception Network (Szegedy et al., 2015). More recently, (Heusel et al., 2017) proposed the Fréchet Inception Distance (FID), which better correlates with perceptual quality. The FID uses the estimated mean and covariance of the Inception Network feature-space distribution to calculate the distance between the real and fake distributions up to second order. The FID between the real images r and generated images g is computed as: FID(r, g) = ‖µ_r − µ_g‖₂² + Tr(Σ_r + Σ_g − 2(Σ_r Σ_g)^{1/2}), where µ_r and µ_g are the means of the real and generated images in the latent feature space, and Σ_r and Σ_g are the covariance matrices of the real and generated feature vectors. A drawback of both the IS and FID is that they assign a very good score to a model which simply memorizes a small and finite sample from p(x) (Gulrajani et al., 2018), an issue we address in Section 3.1. 3 GENERATIVE MODELING COMPETITION DESIGN. We designed the first generative modeling competition where participants were invited to generate realistic dog images given 20,579 images of dogs from ImageNet (Russakovsky et al., 2015). Participants were required to implement their generative model in a constrained computation environment to prevent them from obtaining unfair advantages. The computation environment was designed with: • limited computation resources (9 hours on an NVIDIA P100 GPU for each submission), since generative model performance is known to be highly related to the amount of computational resources used (Brock et al., 2018); • isolated containerization, to avoid continuous training by reloading model checkpoints from previous sessions; • no access to external resources (i.e.
, the internet), to avoid usage of pre-trained models or additional data. Each submission is required to provide 10,000 generated images of dimension 64 × 64 × 3 and receives a public score in return. Participants are allowed to submit any number of submissions during the two-month competition. Before the end of the competition, each team is required to choose two submissions, and the final ranking is determined by the better private score (described below) of the two selected submissions. In the following sections, we discuss how the final decisions were made regarding pretrained model selection (for FID feature projection) and how we enforced penalties to ensure the fairness of the competition. 3.1 MEMORIZATION-INFORMED FRÉCHET INCEPTION DISTANCE (MIFID). The most crucial part of the competition is the performance evaluation metric used to score the submissions. To assess the quality of generated images, we adopted the Fréchet Inception Distance (Heusel et al., 2017), which is a widely used metric for benchmarking generative tasks. Compared to the Inception Score (Salimans et al., 2016), the FID has the benefits of better robustness against noise and distortion and more efficient computation (Borji, 2019). For a generative modeling competition, a good metric not only needs to reflect the quality of generated samples but must also allow easy identification of cheating with as little manual intervention as possible. Many forms of cheating were prevented by setting up the aforementioned computation environment, but even with these safeguards it would be possible to "game" the FID score. Specifically, we predicted that memorization of training data would be a major issue, since current generative model evaluation metrics such as the IS or FID are prone to rewarding memorized instances with high scores (Gulrajani et al., 2018). This motivated the addition of a "memorization-aware" metric that penalizes models producing images too similar to the training set. Combining the memorization-aware and generation-quality components, we introduce the Memorization-Informed Fréchet Inception Distance (MiFID) as the metric used for the competition: MiFID(S_g, S_t) = m_τ(S_g, S_t) · FID(S_g, S_t), where S_g is the generated set, S_t is the original training set, FID is the Fréchet Inception Distance, and m_τ is the memorization penalty, which we discuss in the following section.
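To make the scoring concrete, here is a sketch of the FID computation from Section 2 together with the MiFID wrapper. The memorization penalty m_τ is only defined later in the paper, so it enters here as an opaque penalty argument; everything else follows the formulas above, assuming the Inception-feature means and covariances have already been estimated.

    import numpy as np
    from scipy import linalg

    def fid(mu_r, sigma_r, mu_g, sigma_g):
        """FID from feature statistics (Heusel et al., 2017)."""
        covmean = linalg.sqrtm(sigma_r @ sigma_g)   # matrix square root
        if np.iscomplexobj(covmean):
            covmean = covmean.real                  # drop numerical imaginary noise
        diff = mu_r - mu_g
        return float(diff @ diff + np.trace(sigma_r + sigma_g - 2.0 * covmean))

    def mifid(mu_r, sigma_r, mu_g, sigma_g, penalty):
        """MiFID = m_tau * FID; `penalty` stands in for m_tau(S_g, S_t)."""
        return penalty * fid(mu_r, sigma_r, mu_g, sigma_g)

A memorizing submission drives the FID term toward zero, so the multiplicative penalty is what keeps the combined score from rewarding copies of the training set.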
Motivated by the observation that prevalent metrics (Inception Score, Frechet Inception Distance) used to assess the quality of samples obtained from generative models are gameable (due to either the metric not correlating well with visually assessed sample quality or the metric being susceptible to training sample memorization), the authors conduct a large scale “controlled” study to assess the gameability of said metrics. The authors conducted a competition and subsequently analyzed how approaches tend to cheat so as to obtain higher FID scores. Furthermore, to assess the extent of memorization w.r.t. the FID score, the authors propose a new metric — Memorization-Informed Frechet Inception Distance (MiFID) — which takes into account sample memorization w.r.t. a reference set. The authors conclude on a few notable observations — (1) unintentional memorization in generative models is a serious and prevalent issue; (2) the choice of latent space used to compute FID based scores can make a significant difference.
SP:5908636440ae0162f1bf98b6e7b8969cc163f9a6
Learning to Sample with Local and Global Contexts in Experience Replay Buffer
1 INTRODUCTION. Experience replay (Mnih et al., 2015), a memory that stores past experiences so they can be reused, has become a popular mechanism for reinforcement learning (RL), since it stabilizes training and improves sample efficiency. The success of various off-policy RL algorithms is largely attributable to the use of experience replay (Fujimoto et al., 2018; Haarnoja et al., 2018a;b; Lillicrap et al., 2016; Mnih et al., 2015). (Code is available at https://github.com/youngmin0oh/NERS.) However, most off-policy RL algorithms adopt uniform random sampling (Fujimoto et al., 2018; Haarnoja et al., 2018a; Mnih et al., 2015), which treats all past experiences equally, so it is questionable whether this simple strategy always samples the most effective experiences for the agents to learn from. Several sampling policies have been proposed to address this issue. One popular direction is to develop rule-based methods, which prioritize experiences using pre-defined metrics (Isele & Cosgun, 2018; Jaderberg et al., 2016; Novati & Koumoutsakos, 2019; Schaul et al., 2016). Notably, since TD-error-based sampling has improved the performance of various off-policy RL algorithms (Hessel et al., 2018; Schaul et al., 2016) by prioritizing more meaningful samples, i.e., those with high TD-error, it is one of the most frequently used rule-based methods. Here, the TD-error measures how unexpected the returns are relative to the current value estimates (Schaul et al., 2016). However, such rule-based sampling strategies can lead to sampling highly biased experiences. For instance, Figure 1 shows 10 randomly selected transitions among 64 transitions sampled using certain metrics/rules under a policy-based learning algorithm, soft actor-critic (SAC) (Haarnoja et al., 2018a), on Pendulum-v0 after 30,000 timesteps, where the goal is to balance the pendulum so that it stays in the upright position. We observe that sampling by TD-error alone mostly selects initial transitions (see Figure 1(a)), where the rods are in the downward position, since it is difficult to estimate the Q-value for them. Conversely, the transitions sampled by Q-value depict rods in the upright position (see Figure 1(b)), which will provide high returns to agents. Both can contribute substantially to the update of the actor and critic, since the advantage term and the mean square of the TD-errors are large. Yet, due to the bias, an agent trained in such a manner will mostly learn what to do in a specific state, but will not learn about the other states that must be experienced for proper learning. Therefore, such biased (and redundant) transitions may not lead to increased sample efficiency, even though each sampled transition may be individually meaningful. On the other hand, focusing only on the diversity of samples also has an issue. For instance, sampling uniformly at random is able to select diverse transitions, including intermediate states such as those in the red boxes of Figure 1(c), where the rods are in horizontal positions; these are necessary for training the agents, as they provide the trajectory between the two types of states. However, such transitions are occasionally irrelevant for training both the policy and the Q networks. Indeed, the states in the red boxes of Figure 1(c) possess both low Q-values and low TD-errors. Their low TD-errors suggest that they are not meaningful for the update of the Q networks.
Similarly , low Q-values can not be used to train the policy what good actions are . Motivated by the aforementioned observations , we aim to develop a method to sample both diverse and meaningful transitions . To cache both of them , it is crucial to measure the relative importance among sampled transitions since the diversity should be considered in them , not all in the buffer . To this end , we propose a novel neural sampling policy , which we refer to Neural Experience Replay Sampler ( NERS ) . Our method learns to measure the relative importance among sampled transitions by extracting local and global contexts from each of them and all sampled ones , respectively . In particular , NERS is designed to take a set of each experience ’ s features as input and compute its outputs in an equivariant manner with respect to the permutation of the set . Here , we consider various features of transition such as TD-error , Q-value and the raw transition , e.g. , expecting to sample intermediate transitions as those in blue boxes of Figure 1 ( c ) ) efficiently . To verify the effectiveness of NERS , we validate the experience replay with various off-policy RL algorithms such as soft actor-critic ( SAC ) ( Haarnoja et al. , 2018a ) and twin delayed deep deterministic ( TD3 ) ( Fujimoto et al. , 2018 ) for continuous control tasks ( Brockman et al. , 2016 ; Todorov et al. , 2012 ) , and Rainbow ( Hessel et al. , 2018 ) for discontinuous control tasks ( Bellemare et al. , 2013 ) . Our experimental results show that NERS consistently ( and often significantly for complex tasks having high-dimensional state and action spaces ) outperforms both the existing the rule-based ( Schaul et al. , 2016 ) and learning-based ( Zha et al. , 2019 ) sampling methods for experience replay . In summary , our contribution is threefold : • To the best of our knowledge , we first investigate the relative importance of sampled transitions for the efficient design of experience replays . • For the purpose , we design a novel permutation-equivariant neural sampling architecture that utilizes contexts from the individual ( local ) and the collective ( global ) transitions with various features to sample not only meaningful but also diverse experiences . • We validate the effectiveness of our neural experience replay on diverse continuous and discrete control tasks with various off-policy RL algorithms , on which it consistently outperforms both existing rule-based and learning-based sampling methods . 2 NEURAL EXPERIENCE REPLAY SAMPLER . We consider a standard reinforcement learning ( RL ) framework , where an agent interacts with an environment over discrete timesteps . Formally , at each timestep t , the agent receives a state st from the environment and selects an action at based on its policy π . Then , the environment returns a reward rt , and the agent transitions to the next state st+1 . The goal of the agent is to learn the policy π that maximizes the return Rt = ∑∞ k=0 γ krt+k , which is the discounted cumulative reward from the timestep t with a discount factor γ ∈ [ 0 , 1 ) , at each state st . Throughout this section , we focus on off-policy actor-critic RL algorithms with a buffer B , which consist of the policy πψ ( a|s ) ( i.e. , actor ) and Q-function Qθ ( s , a ) ( i.e. , critic ) with parameters ψ and θ , respectively . 2.1 OVERVIEW OF NERS . We propose a novel neural sampling policy f with parameter φ , called Neural Experience Replay Sampler ( NERS ) . 
It is trained to select important transitions from the experience replay buffer so as to maximize the actual cumulative rewards. Specifically, at each timestep, NERS receives a set of off-policy transitions' features, which are sampled from the buffer B proportionally to priorities evaluated at previous timesteps. It then outputs a set of new scores for this set, with which the priorities are updated. Further, both the sampled transitions and the scores are used to optimize the off-policy policy π_ψ(a|s) and action-value function Q_θ(s, a). Note that the output of NERS should be equivariant under permutations of the set, so we design its neural architecture to satisfy this property. Next, we define the reward r^re as the actual performance gain, i.e., the difference between the expected sums of rewards under the current and previous evaluation policies. Figure 2 shows an overview of the proposed framework, which learns to sample from the experience replay. In the following section, we describe our method for learning the sampling policy and the proposed network architecture in detail. 2.2 DETAILED COMPONENTS OF NERS. Input observations. Throughout this paper, we denote the set {1, ..., n} by [n] for a positive integer n. Without loss of generality, suppose that the replay buffer B stores the following information as its i-th transition: B_i = (s_κ(i), a_κ(i), r_κ(i), s_κ(i)+1), where κ(i) is a function from the index of B to the corresponding timestep. We use a set of priorities P_B = {σ_1, ..., σ_|B|} that is updated whenever sampling transitions for training the actor and critic. One can sample an index set I in [|B|] with the probability p_i of the i-th transition given by:

p_i = σ_i^α / Σ_{k∈[|B|]} σ_k^α,   (1)

with a hyper-parameter α > 0.

Algorithm 1 Training NERS: batch size m and sample size n
  Initialize NERS parameters φ, a replay buffer B ← ∅, a priority set P_B ← ∅, and an index set Ī ← ∅
  for each timestep t do
    Choose a_t from the actor and collect a sample (s_t, a_t, r_t, s_t+1) from the environment
    Update the replay buffer B ← B ∪ {(s_t, a_t, r_t, s_t+1)} and the priority set P_B ← P_B ∪ {1.0}
    for each gradient step do
      Sample an index set I using P_B and Eq. (1), with |I| = m
      Calculate a score set {σ_k}_{k∈I} and weights {w_i}_{i∈I} by Eq. (4) and Eq. (5), respectively
      Train the actor and critic using the batch {B_i}_{i∈I} ⊂ B and the corresponding weights {w_i}_{i∈I}
      Collect Ī ← Ī ∪ I and update P_B(I) with the score set {σ_k}_{k∈I}
    end for
    at the end of an episode do
      Choose a subset I_train of Ī uniformly at random such that |I_train| = n
      Calculate r^re as in Eq. (6)
      Update the sampling policy φ using the gradient (7) with respect to I_train
      Empty Ī, i.e., Ī ← ∅
    end
  end for

Then, we define the following sequence of features for {B_i}_{i∈I}:

D(B, I) = {s_κ(i), a_κ(i), r_κ(i), s_κ(i)+1, κ(i), δ_κ(i), r_κ(i) + γ max_a Q_θ̂(s_κ(i)+1, a)}_{i∈I},   (2)

where γ is a discount factor, θ̂ is the target network parameter, and δ_κ(i) is the TD-error, defined as: δ_κ(i) = r_κ(i) + γ max_a Q_θ̂(s_κ(i)+1, a) − Q_θ(s_κ(i), a_κ(i)). The TD-error indicates how 'surprising' or 'unexpected' the transition is (Schaul et al., 2016).
Note that the input D(B, I) contains various features, including both exact values (i.e., states, actions, rewards, next states, and timesteps) and predicted values from a long-term perspective (i.e., TD-errors and Q-values). We abbreviate D(B, I) as D(I) for simplicity. Utilizing various information is crucial for selecting diverse and important transitions (see Section 3). Architecture and action spaces. We now explain the neural network structure of NERS, f. Basically, f takes D(I) as input and generates scores, which are used to sample transitions proportionally. Specifically, f consists of f_l, f_g, and f_s, called the learnable local, global, and score networks, with output dimensions d_l, d_g, and 1, respectively. The local network captures the attributes of each transition: f_l(D(I)) = {f_{l,1}(D(I)), ..., f_{l,|I|}(D(I))} ∈ R^{|I|×d_l}, where f_{l,k}(D(I)) ∈ R^{d_l} for k ∈ [|I|]. The global network aggregates the collective information of the transitions by taking f_g^avg(D(I)) = Σ f_g(D(I)) / |I| ∈ R^{1×d_g}, where f_g(D(I)) ∈ R^{|I|×d_g}. Concatenating the two yields the input of the score network f_s:

D^cat(I) := {f_{l,1}(D(I)) ⊕ f_g^avg(D(I)), ..., f_{l,|I|}(D(I)) ⊕ f_g^avg(D(I))} ∈ R^{|I|×(d_l+d_g)},   (3)

where ⊕ denotes concatenation. Finally, the score network generates a score set:

f_s(D^cat(I)) = {σ_i}_{i∈I} ∈ R^{|I|}.   (4)

One can easily observe that f_s is permutation-equivariant with respect to the input D(I). The set {σ_i}_{i∈I} is used to update the priorities in P_B for the transitions corresponding to I via Eq. (1), and to compute importance-sampling weights for updating the critic, compensating for the bias of the sampling probabilities (Schaul et al., 2016):

w_i = (1 / (|B| p(i)))^β,   (5)

where β > 0 is a hyper-parameter. The actor and critic then receive the training batch D(I) and the corresponding weights {w_i}_{i∈I} for training, i.e., the learning rate for training sample B_i is set proportionally to w_i. Owing to this structure, which satisfies the permutation-equivariance property, one can evaluate the relative importance of each transition by observing not only the transition itself but also the other sampled transitions. Reward function and optimizing the sampling policy. We update NERS at each evaluation step. To optimize our sampling policy, we define the replay reward r^re of the current evaluation as follows: for the policies π and π′ used in the current and previous evaluations, respectively (as in Zha et al. (2019)),

r^re := E_π[ Σ_{t∈{timesteps in an episode}} r_t ] − E_{π′}[ Σ_{t∈{timesteps in an episode}} r_t ].   (6)

The replay reward can be interpreted as measuring how much the actions of the sampling policy help the learning of the agent in each episode. Notice that r^re only observes the difference of the mean cumulative rewards between the current and previous evaluation policies, since NERS needs to choose transitions without knowing which samples will be added and how well the agents will be trained in the future. To maximize the sample efficiency of learning the agent's policy, we propose to train the sampling policy to select past transitions so as to maximize r^re. To train NERS, one can choose I_train as a subset of the index set Ī of all transitions sampled in the current episode. We then use the following formula, obtained via REINFORCE (Williams, 1992):

∇_φ E_{I_train}[r^re] = E_{I_train}[ r^re Σ_{i∈I_train} ∇_φ log p_i(D(I_train)) ],   (7)

where p_i is defined in Eq. (1). A detailed description is provided in Algorithm 1. While ERO (Zha et al.
( 2019 ) uses a similar replay-reward ( Eq . 6 ) , there are a number of fundamental differences between it and our method . First of all , ERO does not consider the relative importance between the transitions as NERS does , but rather learns an individual sampling rate for each transition . Moreover , they consider only three types of features , namely TD-error , reward , and the timestep , while NERS considers a larger set of features by considering more informative features that are not used by ERO , such as raw features , Q-values , and actions . However , the most important difference between the two is that ERO performs two-stage sampling , where they first sample with the individually learned Bernoulli sampling probability for each transition , and further perform random sampling from the subset of sampled transitions . However , with such a strategy , the first-stage sampling is highly inefficient even with moderate size experience replays , since it should compute the sampling rate for each individual instance . Accordingly , its time complexity of the first-stage sampling depends finally on the capacity of the buffer B , i.e. , O ( |B| ) . On the contrary , NERS uses a sum-tree structure as in ( Schaul et al. , 2016 ) to sample transitions with priorities , so that its time complexity for sampling depends highly on O ( log |B| ) . Secondly , since the number of experiences selected from the first stage sampling is large , it may have little or no effect , making it to behave similarly to random sampling . Moreover , ERO updates its network with the replay reward and experiences that are not sampled from two-stage samplings but sampled by the uniform sampling at random ( see Algorithm 2 in Zha et al . ( 2019 ) ) . In other words , samples that are never selected affect the training of ERO , while NERS updates its network solely based on the transitions that are actually selected by itself .
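To make Eqs. (3)-(5) concrete, below is a minimal PyTorch sketch of the local/global/score decomposition. The hidden sizes and the Softplus used to keep scores positive are our own illustrative choices, not the paper's.

    import torch
    import torch.nn as nn

    class NERSSketch(nn.Module):
        def __init__(self, feat_dim, d_l=64, d_g=64):
            super().__init__()
            self.local_net = nn.Sequential(nn.Linear(feat_dim, d_l), nn.ReLU())
            self.global_net = nn.Sequential(nn.Linear(feat_dim, d_g), nn.ReLU())
            self.score_net = nn.Sequential(
                nn.Linear(d_l + d_g, 64), nn.ReLU(),
                nn.Linear(64, 1), nn.Softplus())   # priorities must be positive

        def forward(self, D):                       # D: (|I|, feat_dim)
            local = self.local_net(D)               # per-transition context
            glob = self.global_net(D).mean(dim=0, keepdim=True)  # f_g^avg
            glob = glob.expand(D.shape[0], -1)      # broadcast global context
            scores = self.score_net(torch.cat([local, glob], dim=1))
            return scores.squeeze(-1)               # one score per transition

Because the global context enters only through a mean over the set dimension, permuting the input rows permutes the output scores in exactly the same way, which is the permutation-equivariance property required of f_s.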
Observing that the existed ER-based sampling method may introduce bias or redundancy in sampled transitions, the paper proposes a new sampling method in the ER learning setting. The idea is to take into consideration the context, i.e. many visited transitions, rather than a single one, based on which one can measure the relative importance of each transition. Specifically, the weights of transitions are also learned through a Reinforce agent and hence the sampling distribution is learned to directly improve sample efficiency.
SP:9ce7a60c5f2e40f7d59e98c90171a7b49621c67c
Anytime Sampling for Autoregressive Models via Ordered Autoencoding
1 INTRODUCTION . Autoregressive models are a prominent approach to data generation , and have been widely used to produce high quality samples of images ( Oord et al. , 2016b ; Salimans et al. , 2017 ; Menick & Kalchbrenner , 2018 ) , audio ( Oord et al. , 2016a ) , video ( Kalchbrenner et al. , 2017 ) and text ( Kalchbrenner et al. , 2016 ; Radford et al. , 2019 ) . These models represent a joint distribution as a product of ( simpler ) conditionals , and sampling requires iterating over all these conditional distributions in a certain order . Due to the sequential nature of this process , the computational cost will grow at least linearly with respect to the number of conditional distributions , which is typically equal to the data dimension . As a result , the sampling process of autoregressive models can be slow and does not allow interruptions . Although caching techniques have been developed to speed up generation ( Ramachandran et al. , 2017 ; Guo et al. , 2017 ) , the high cost of sampling limits their applicability in many scenarios . For example , when running on multiple devices with different computational resources , we may wish to trade off sample quality for faster generation based on the computing power available on each device . Currently , a separate model must be trained for each device ( i.e. , computational budget ) in order to trade off sample quality for faster generation , and there is no way to control this trade-off on the fly to accommodate instantaneous resource availability at time-of-deployment . To address this difficulty , we consider the novel task of adaptive autoregressive generation under computational constraints . We seek to build a single model that can automatically trade-off sample quality versus computational cost via anytime sampling , i.e. , where the sampling process may be interrupted anytime ( e.g. , because of exhausted computational budget ) to yield a complete sample whose sample quality decays with the earliness of termination . In particular , we take advantage of a generalization of Principal Components Analysis ( PCA ) proposed by Rippel et al . ( 2014 ) , which learns an ordered representations induced by a structured application of dropout to the representations learned by an autoencoder . Such a representation encodes raw data into a latent space where dimensions are sorted based on their importance for reconstruction . Autoregressive modeling is then applied in the ordered representation space instead . This approach enables a natural trade-off between quality and computation by truncating the length of the representations : When running on devices with high computational capacity , we can afford to generate the full representation and decode it to obtain a high quality sample ; when on a tighter computational budget , we can generate only the first few dimensions of the representation and decode it to a sample whose quality degrades smoothly with truncation . Because decoding is usually fast and the main computation bottleneck lies on the autoregressive part , the run-time grows proportionally relative to the number of sampled latent dimensions . Through experiments , we show that our autoregressive models are capable of trading off sample quality and inference speed . When training autoregressive models on the latent space given by our encoder , we witness little degradation of image sample quality using only around 60 % to 80 % of all latent codes , as measured by Fréchet Inception Distance ( Heusel et al. 
, 2017 ) on CIFAR-10 and CelebA . Compared to standard autoregressive models , our approach allows the sample quality to degrade gracefully as we reduce the computational budget for sampling . We also observe that on the VCTK audio dataset ( Veaux et al. , 2017 ) , our autoregressive model is able to generate the low frequency features first , then gradually refine the waveforms with higher frequency components as we increase the number of sampled latent dimensions . 2 BACKGROUND . Autoregressive Models Autoregressive models define a probability distribution over data points x ∈ RD by factorizing the joint probability distribution as a product of univariate conditional distributions with the chain rule . Using pθ to denote the distribution of the model , we have : pθ ( x ) = D∏ i=1 pθ ( xi | x1 , · · · , xi−1 ) ( 1 ) The model is trained by maximizing the likelihood : L = Epd ( x ) [ log pθ ( x ) ] , ( 2 ) where pd ( x ) represents the data distribution . Different autoregressive models adopt different orderings of input dimensions and parameterize the conditional probability pθ ( xi | x1 , · · · , xi−1 ) , i = 1 , · · · , D in different ways . Most architectures over images order the variables x1 , · · · , xD of image x in raster scan order ( i.e. , left-toright then top-to-bottom ) . Popular autoregressive architectures include MADE ( Germain et al. , 2015 ) , PixelCNN ( Oord et al. , 2016b ; van den Oord et al. , 2016 ; Salimans et al. , 2017 ) and Transformer ( Vaswani et al. , 2017 ) , where they respectively use masked linear layers , convolutional layers and self-attention blocks to ensure that the output corresponding to pθ ( xi | x1 , · · · , xi−1 ) is oblivious of xi , xi+1 , · · · , xD . Cost of Sampling During training , we can evaluate autoregressive models efficiently because x1 , · · · , xD are provided by data and all conditionals p ( xi | x1 , · · · , xi−1 ) can be computed in parallel . In contrast , sampling from autoregressive models is an inherently sequential process and can not be easily accelerated by parallel computing : we first need to sample x1 , after which we sample x2 from pθ ( x2 | x1 ) and so on—the i-th variable xi can only be obtained after we have already computed x1 , · · · , xi−1 . Thus , the run-time of autoregressive generation grows at least linearly with respect to the length of a sample . In practice , the sample length D can be more than hundreds of thousands for real-world image and audio data . This poses a major challenge to fast autoregressive generation on a small computing budget . 3 ANYTIME SAMPLING WITH ORDERED AUTOENCODERS . Our goal is to circumvent the non-interruption and linear time complexity of autoregressive models by pushing the task of autoregressive modeling from the original data space ( e.g. , pixel space ) into an ordered representation space . In doing so , we develop a new class of autoregressive models where premature truncation of the autoregressive sampling process leads to the generation of a lower quality sample instead of an incomplete sample . In this section , we shall first describe the learning of the ordered representation space via the use of an ordered autoencoder . We then describe how to achieve anytime sampling with ordered autoencoders . 3.1 ORDERED AUTOENCODERS . Consider an autoencoder that encodes an input x ∈ RD to a code z ∈ RK . Let z = eθ ( x ) : RD → RK be the encoder parameterized by θ and x′ = dφ ( z ) : RK → RD be the decoder parameterized by φ . 
We define eθ ( · ) ≤i : x ∈ RD 7→ ( z1 , z2 , · · · , zi , 0 , · · · , 0 ) T ∈ RK , which truncates the representation to the first i dimensions of the encoding z = eθ ( x ) , masking out the remainder of the dimensions with a zero value . We define the ordered autoencoder objective as 1 N N∑ i=1 1 K K∑ j=1 ‖xi − dφ ( eθ ( xi ) ≤j ) ‖22 . ( 3 ) We note that Eq . ( 3 ) is equivalent to Rippel et al . ( 2014 ) ’ s nested dropout formulation using a uniform sampling of possible truncations . Moreover , when the encoder/decoder pair is constrained to be a pair of orthogonal matrices up to a transpose , then the optimal solution in Eq . ( 3 ) recovers PCA . 3.1.1 THEORETICAL ANALYSIS . Rippel et al . ( 2014 ) ’ s analysis of the ordered autoencoder is limited to linear/sigmoid encoder and a linear decoder . In this section , we extend the analysis to general autoencoder architectures by employing an information-theoretic framework to analyze the importance of the i-th latent code to reconstruction for ordered autoencoders . We first reframe our problem from a probabilistic perspective . In lieu of using deterministic autoencoders , we assume that both the encoder and decoder are stochastic functions . In particular , we let qeθ ( z | x ) be a probability distribution over z ∈ RK conditioned on input x , and similarly let pdφ ( x | z ) be the stochastic counterpart to dφ ( z ) . We then use qeθ ( z | x ) ≤i to denote the distribution of ( z1 , z2 , · · · , zi , 0 , · · · , 0 ) T ∈ RK , where z ∼ qeθ ( z | x ) , and let pdφ ( x | z ) ≤i represent the distribution of pdφ ( x | ( z1 , z2 , · · · , zi , 0 , · · · , 0 ) T ∈ RK ) . We can modify Eq . ( 3 ) to have the following form : Ex∼pd ( x ) , i∼U { 1 , K } Ez∼qeθ ( z|x ) ≤i [ − log pdφ ( x|z ) ≤i ] , ( 4 ) where U { 1 , K } denotes a uniform distribution over { 1 , 2 , · · · , K } , and pd ( x ) represents the data distribution . We can choose both the encoder and decoder to be fully factorized Gaussian distributions with a fixed variance σ2 , then Eq . ( 13 ) can be simplified to Epd ( x ) [ 1 K K∑ i=1 Ez∼N ( eθ ( x ) ≤i ; σ2 ) [ 1 2σ2 ‖x− dφ ( z ) ≤i‖22 ] ] . The stochastic encoder and decoder in this case will become deterministic when σ → 0 , and the above equation will yield the same encoder/decoder pair as Eq . ( 3 ) when σ → 0 and N →∞ . The optimal encoders and decoders that minimize Eq . ( 13 ) satisfy the following property . Theorem 1 . Let x denote the input random variable . Assuming both the encoder and decoder are optimal in terms of minimizing Eq . ( 13 ) , and ∀i ∈ 3 , · · · , K , zi−1 ⊥ zi | x , z≤i−2 , we have ∀i ∈ { 3 , · · · , K } : I ( zi ; x|z≤i−1 ) ≤ I ( zi−1 ; x|z≤i−2 ) , where z≤i denotes ( z1 , z2 , · · · , zi ) . We defer the proof to Appendix A.1 . The assumption zi−1 ⊥ zi | x , z≤i−2 holds whenever the encoder qθ ( z | x ) is a factorized distribution , which is a common choice in variational autoencoders ( Kingma & Welling , 2013 ) , and we use I ( a ; b | c ) to denote the mutual information between random variables a and b conditioned on c. Intuitively , the above theorem states that for optimal encoders and decoders that minimize Eq . ( 13 ) , one can extract less additional information about the raw input as the code gets longer . Therefore , there exists a natural ordering among different dimensions of the code based on the additional information they can provide for reconstructing the inputs .
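To make the truncation objective concrete, the following sketch (ours; encode and decode stand for any autoencoder pair) computes a single-sample estimate of Eq. (3) by drawing the truncation point uniformly, which is exactly the nested-dropout view discussed above.

    import numpy as np

    def ordered_ae_loss(x, encode, decode, rng):
        """One-sample estimate of the ordered autoencoder objective."""
        z = encode(x)                       # (K,) latent code
        K = z.shape[0]
        j = rng.integers(1, K + 1)          # truncation point j ~ U{1, ..., K}
        z_trunc = z.copy()
        z_trunc[j:] = 0.0                   # keep the first j dims, zero the rest
        x_hat = decode(z_trunc)
        return np.mean((x - x_hat) ** 2)    # squared reconstruction error

Averaging this estimate over a minibatch, with rng = np.random.default_rng(0), recovers the full objective; the random truncation is what forces earlier latent dimensions to carry the information most important for reconstruction.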
The paper considers the problem of slow sampling in autoregressive generative models. Sampling in such models is sequential, so its computational cost scales with the data dimensionality. Existing work speeds up autoregressive sampling by caching activations or distilling into normalizing flows with fast sampling. Authors of this work instead propose a method that returns (approximate) samples given an arbitrary computational budget, a behaviour referred to as *anytime sampling*. The proposed model is based on VQ-VAE by van den Oord et al. (2017), where an autoregressive model is fit to a latent space of a trained discrete autoencoder, rather than to raw pixels. Authors adapt the *nested dropout* idea by Rippel et al. (2014) to encourage the discrete autoencoder to order latent dimensions by their "importance" for reconstruction. Experiments demonstrate that the ordered latent space allows to stop the autoregressive sampling process at an arbitrary latent dimension and still obtain "complete" samples. The quality of samples increases as more latent dimensions are sampled, which allows to trade sample quality for reduced computational cost.
SP:ca6ab92369346b3d457f575fc652333255f2dfec
Explainable Deep One-Class Classification
1 INTRODUCTION. Anomaly detection (AD) is the task of identifying anomalies in a corpus of data (Edgeworth, 1887; Barnett and Lewis, 1994; Chandola et al., 2009; Ruff et al., 2021). Powerful new anomaly detectors based on deep learning have made AD more effective and scalable to large, complex datasets such as high-resolution images (Ruff et al., 2018; Bergmann et al., 2019). While there exists much recent work on deep AD, there is limited work on making such techniques explainable. Explanations are needed in industrial applications to meet safety and security requirements (Berkenkamp et al., 2017; Katz et al., 2017; Samek et al., 2020), to avoid unfair social biases (Gupta et al., 2018), and to support human experts in decision making (Jarrahi, 2018; Montavon et al., 2018; Samek et al., 2020). One typically makes anomaly detection explainable by annotating pixels with an anomaly score, and in some applications, such as finding tumors in cancer detection (Quellec et al., 2016), these annotations are the primary goal of the detector. One approach to deep AD, known as Deep Support Vector Data Description (DSVDD) (Ruff et al., 2018), is based on finding a neural network that transforms data such that nominal data is concentrated at a predetermined center and anomalous data lies elsewhere. In this paper we present Fully Convolutional Data Description (FCDD), a modification of DSVDD such that the transformed samples are themselves an image corresponding to a downsampled anomaly heatmap. The pixels in this heatmap that are far from the center correspond to anomalous regions in the input image. FCDD achieves this by using only convolutional and pooling layers, thereby limiting the receptive field of each output pixel. Our method is based on the one-class classification paradigm (Moya et al., 1993; Tax, 2001; Tax and Duin, 2004; Ruff et al., 2018), which is able to naturally incorporate known anomalies (Ruff et al., 2021), but is also effective when simply using synthetic anomalies. (Our code is available at https://github.com/liznerski/fcdd.) We show that FCDD's anomaly detection performance is close to the state of the art on the standard AD benchmarks with CIFAR-10 and ImageNet while providing transparent explanations. On MVTecAD, an AD dataset containing ground-truth anomaly maps, we demonstrate the accuracy of FCDD's explanations (see Figure 1), where FCDD sets a new state of the art. In further experiments we find that deep one-class classification models (e.g., DSVDD) are prone to the "Clever Hans" effect (Lapuschkin et al., 2019), where a detector fixates on spurious features such as image watermarks. In general, we find that the generated anomaly heatmaps are less noisy and provide more structure than the baselines, including gradient-based methods (Simonyan et al., 2013; Sundararajan et al., 2017) and autoencoders (Sakurada and Yairi, 2014; Bergmann et al., 2019). 2 RELATED WORK. Here we outline related work on deep AD, focusing on explanation approaches. Classically, deep AD used autoencoders (Hawkins et al., 2002; Sakurada and Yairi, 2014; Zhou and Paffenroth, 2017; Zhao et al., 2017). Trained on a nominal dataset, autoencoders are assumed to reconstruct anomalous samples poorly. Thus, the reconstruction error can be used as an anomaly score and the pixel-wise difference as an explanation (Bergmann et al.
Recent works have incorporated attention into reconstruction models that can be used as explanations (Venkataramanan et al., 2019; Liu et al., 2020). In the domain of videos, Sabokrou et al. (2018) used a pre-trained fully convolutional architecture in combination with a sparse autoencoder to extract 2D features and provide bounding boxes for anomaly localization. One drawback of reconstruction methods is that they offer no natural way to incorporate known anomalies during training. More recently, one-class classification methods for deep AD have been proposed. These methods attempt to separate nominal samples from anomalies in an unsupervised manner by concentrating nominal data in feature space while mapping anomalies to distant locations (Ruff et al., 2018; Chalapathy et al., 2018; Goyal et al., 2020). In the domain of NLP, DSVDD has been successfully applied to text, which yields a form of interpretation using attention mechanisms (Ruff et al., 2019). For images, Kauffmann et al. (2020) have used a deep Taylor decomposition (Montavon et al., 2017) to derive relevance scores. Some of the best performing deep AD methods are based on self-supervision. These methods transform nominal samples, train a network to predict which transformation was used on the input, and provide an anomaly score via the confidence of the prediction (Golan and El-Yaniv, 2018; Hendrycks et al., 2019b). Hendrycks et al. (2019a) have extended this to incorporate known anomalies as well. No explanation approaches have been considered for these methods so far. Finally, there exists a great variety of explanation methods in general, for example model-agnostic methods (e.g. LIME (Ribeiro et al., 2016)) or gradient-based techniques (Simonyan et al., 2013; Sundararajan et al., 2017). Relating to our work, we note that fully convolutional architectures have been used for supervised segmentation tasks where target segmentation maps are required during training (Long et al., 2015; Noh et al., 2015). 3 EXPLAINING DEEP ONE-CLASS CLASSIFICATION . We review one-class classification and fully convolutional architectures before presenting our method. Deep One-Class Classification. Deep one-class classification (Ruff et al., 2018; 2020b) performs anomaly detection by learning a neural network to map nominal samples near a center c in output space, causing anomalies to be mapped away. For our method we use a Hypersphere Classifier (HSC) (Ruff et al., 2020a), a recently proposed modification of Deep SAD (Ruff et al., 2020b), a semi-supervised version of DSVDD (Ruff et al., 2018). Let $X_1, \ldots, X_n$ denote a collection of samples and $y_1, \ldots, y_n$ be labels, where $y_i = 1$ denotes an anomaly and $y_i = 0$ denotes a nominal sample. Then the HSC objective is

$$\min_{W, c} \; \frac{1}{n} \sum_{i=1}^{n} (1 - y_i)\, h\big(\phi(X_i; W) - c\big) - y_i \log\Big(1 - \exp\big(-h(\phi(X_i; W) - c)\big)\Big), \qquad (1)$$

where $c \in \mathbb{R}^d$ is the center, and $\phi : \mathbb{R}^{c \times h \times w} \to \mathbb{R}^d$ is a neural network with weights $W$. Here $h$ is the pseudo-Huber loss (Huber et al., 1964), $h(a) = \sqrt{\|a\|_2^2 + 1} - 1$, which is a robust loss that interpolates from quadratic to linear penalization. The HSC loss encourages $\phi$ to map nominal samples near $c$ and anomalous samples away from the center $c$. In our implementation, the center $c$ corresponds to the bias term in the last layer of our networks, i.e. it
is included in the network $\phi$, which is why we omit $c$ in the FCDD objective below. Fully Convolutional Architecture. Our method uses a fully convolutional network (FCN) (Long et al., 2015; Noh et al., 2015) that maps an image to a matrix of features, i.e. $\phi : \mathbb{R}^{c \times h \times w} \to \mathbb{R}^{1 \times u \times v}$, by using alternating convolutional and pooling layers only; it does not contain any fully connected layers. In this context, pooling can be seen as a special kind of convolution with fixed parameters. A core property of a convolutional layer is that each pixel of its output only depends on a small region of its input, known as the output pixel's receptive field. Since the output of a convolution is produced by moving a filter over the input image, each output pixel has the same relative position as its associated receptive field in the input. For instance, the lower-left corner of the output representation has a corresponding receptive field in the lower-left corner of the input image, etc. (see Figure 2, left side). The outcome of several stacked convolutions also has receptive fields of limited size and consistent relative position, though their size grows with the number of layers. Because of this, an FCN preserves spatial information. Fully Convolutional Data Description. Here we introduce our novel explainable AD method, Fully Convolutional Data Description (FCDD). By taking advantage of FCNs along with the HSC above, we propose a deep one-class method where the output features preserve spatial information and also serve as a downsampled anomaly heatmap. For situations where one would like to have a full-resolution heatmap, we include a methodology for upsampling the low-resolution heatmap based on properties of receptive fields. FCDD is trained using samples that are labeled as nominal or anomalous. As before, let $X_1, \ldots, X_n$ denote a collection of samples with labels $y_1, \ldots, y_n$, where $y_i = 1$ denotes an anomaly and $y_i = 0$ denotes a nominal sample. Anomalous samples can simply be a collection of random images which are not from the nominal collection, e.g. one of the many large collections of images which are freely available, like 80 Million Tiny Images (Torralba et al., 2008) or ImageNet (Deng et al., 2009). The use of such an auxiliary corpus has been recommended in recent works on deep AD, where it is termed Outlier Exposure (OE) (Hendrycks et al., 2019a;b). When one has access to "true" examples of the anomalous dataset, i.e. something that is likely to be representative of what will be seen at test time, we find that even using a few examples as the corpus of labeled anomalies performs exceptionally well. Furthermore, in the absence of any sort of known anomalies, one can generate synthetic anomalies, which we find is also very effective. With an FCN $\phi : \mathbb{R}^{c \times h \times w} \to \mathbb{R}^{u \times v}$, the FCDD objective utilizes a pseudo-Huber loss on the FCN output matrix

$$A(X) = \sqrt{\phi(X; W)^2 + 1} - 1,$$

where all operations are applied element-wise. The FCDD objective is then defined as (cf. (1)):

$$\min_{W} \; \frac{1}{n} \sum_{i=1}^{n} (1 - y_i) \frac{1}{u \cdot v} \|A(X_i)\|_1 - y_i \log\Big(1 - \exp\Big(-\frac{1}{u \cdot v} \|A(X_i)\|_1\Big)\Big). \qquad (2)$$

Here $\|A(X)\|_1$ is the sum of all entries in $A(X)$, which are all positive. FCDD is the utilization of an FCN in conjunction with the novel adaptation of the HSC loss we propose in (2). The objective maximizes $\|A(X)\|_1$ for anomalies and minimizes it for nominal samples; thus we use $\|A(X)\|_1$ as the anomaly score.
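As a concrete illustration, here is a minimal PyTorch sketch of the FCDD objective in equation (2). The function and variable names, the clamping constant, and the assumption that the FCN output is already computed are ours, not the authors' reference implementation:

```python
import torch

def fcdd_loss(phi_out, y):
    """FCDD objective (eq. 2): phi_out is the FCN output of shape
    (batch, u, v); y holds labels (0.0 = nominal, 1.0 = anomalous)."""
    # Pseudo-Huber transform, element-wise: A(X) = sqrt(phi^2 + 1) - 1
    A = torch.sqrt(phi_out ** 2 + 1) - 1
    # ||A(X)||_1 / (u * v): mean over all (non-negative) heatmap entries
    score = A.flatten(1).mean(dim=1)
    nominal_term = (1 - y) * score
    # -log(1 - exp(-s)); small constant guards against log(0) at s = 0
    anomalous_term = -y * torch.log(1 - torch.exp(-score) + 1e-9)
    # A doubles as the low-resolution anomaly heatmap
    return (nominal_term + anomalous_term).mean(), A
```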
Entries of $A(X)$ that contribute to $\|A(X)\|_1$ correspond to regions of the input image that add to the anomaly score. The shape of these regions depends on the receptive field of the FCN. We include a sensitivity analysis on the size of the receptive field in Appendix A, where we find that performance is not strongly affected by the receptive field size. Note that $A(X)$ has spatial dimensions $u \times v$ and is smaller than the original image dimensions $h \times w$. One could use $A(X)$ directly as a low-resolution heatmap of the image; however, it is often desirable to have full-resolution heatmaps. Because we generally lack ground-truth anomaly maps in an AD setting during training, it is not possible to train an FCN in a supervised way to upsample the low-resolution heatmap $A(X)$ (e.g. as in (Noh et al., 2015)). For this reason we introduce an upsampling scheme based on the properties of receptive fields.

Algorithm 1: Receptive Field Upsampling
Input: $A \in \mathbb{R}^{u \times v}$ (low-res anomaly heatmap)
Output: $A' \in \mathbb{R}^{h \times w}$ (full-res anomaly heatmap)
Define: $[G_2(\mu, \sigma)]_{x,y} \triangleq \frac{1}{2\pi\sigma^2} \exp\left(-\frac{(x - \mu_1)^2 + (y - \mu_2)^2}{2\sigma^2}\right)$
$A' \leftarrow 0$
for all output pixels $a$ in $A$ do
    $f \leftarrow$ receptive field of $a$
    $c \leftarrow$ center of field $f$
    $A' \leftarrow A' + a \cdot G_2(c, \sigma)$
end for
return $A'$

Heatmap Upsampling. Since we generally do not have access to ground-truth pixel annotations in anomaly detection during training, we cannot learn how to upsample using a deconvolutional type of structure. Instead, we derive a principled way to upsample our lower-resolution anomaly heatmap. For every output pixel in $A(X)$ there is a unique input pixel which lies at the center of its receptive field. It has been observed before that the effect of the receptive field for an output pixel decays in a Gaussian manner as one moves away from the center of the receptive field (Luo et al., 2016). We use this fact to upsample $A(X)$ with a strided transposed convolution with a fixed Gaussian kernel (see Figure 2, right side). We describe this procedure in Algorithm 1, which simply corresponds to a strided transposed convolution. The kernel size is set to the receptive field range of FCDD and the stride to the cumulative stride of FCDD. The variance of the distribution can be picked empirically (see Appendix B for details). Figure 3 shows a complete overview of our FCDD method and the process of generating full-resolution anomaly heatmaps.
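Because Algorithm 1 reduces to a strided transposed convolution with a fixed Gaussian kernel, it can be written in a few lines of PyTorch. In this sketch the kernel size, stride, and σ are placeholder values, not the paper's exact configuration, and the output may need cropping or padding to match h × w exactly:

```python
import torch
import torch.nn.functional as F

def gaussian_kernel(size, sigma):
    # 2-D Gaussian G2 from Algorithm 1, evaluated on a size x size grid
    ax = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    xx, yy = torch.meshgrid(ax, ax, indexing="ij")
    k = torch.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / (2 * torch.pi * sigma ** 2)

def upsample_heatmap(A, rf_size=28, stride=8, sigma=8.0):
    """Receptive-field upsampling: place a Gaussian, scaled by each
    low-res score, at the centre of that pixel's receptive field.
    A has shape (batch, 1, u, v)."""
    kernel = gaussian_kernel(rf_size, sigma).view(1, 1, rf_size, rf_size)
    return F.conv_transpose2d(A, kernel, stride=stride)
```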
This paper presents a one-class classification method using a fully convolutional model and directly using the output map as an explanation map. The method is dubbed FCDD, for Fully Convolutional Data Description. FCDD uses a hypersphere classifier combined with a pseudo-Huber loss. FCDD is trained using Outlier Exposure (OE) from a different but related dataset. The empirical study consists of 3 parts: detection performance on standard benchmarks (CIFAR-10, ImageNet), explanation accuracy on MVTec-AD, and an analysis of the "Clever Hans" effect.
SP:a4cda983cb5a670c3ad7054b9cd7797107af64b1
not-MIWAE: Deep Generative Modelling with Missing not at Random Data
1 INTRODUCTION . [Figure 1: (a) Graphical model of the not-MIWAE. (b) Gaussian data with MNAR values. Dots are fully observed; partially observed data are displayed as black crosses. A contour of the true distribution is shown together with directions found by PPCA and not-MIWAE with a PPCA decoder.] Missing data often constitute systemic issues in real-world data analysis, and can be an integral part of some fields, e.g. recommender systems. This requires the analyst to take action, either by using methods and models that are applicable to incomplete data or by performing imputations of the missing data before applying models requiring complete data. The expected model performance (often measured in terms of imputation error or innocuity of missingness on the inference results) depends on the assumptions made about the missing mechanism and how well those assumptions match the true missing mechanism. In a seminal paper, Rubin (1976) introduced a formal probabilistic framework to assess missing-mechanism assumptions and their consequences. The most commonly used assumption, either implicitly or explicitly, is that a part of the data is missing at random (MAR). Essentially, the MAR assumption means that the missing pattern does not depend on the missing values. This makes it possible to ignore the missing data mechanism in likelihood-based inference by marginalizing over the missing data. The often implicit assumption made in nonprobabilistic models and ad-hoc methods is that the data are missing completely at random (MCAR). MCAR is a stronger assumption than MAR; informally, it means that both observed and missing data do not depend on the missing pattern. More details on these assumptions can be found in the monograph of Little & Rubin (2002); of particular interest are also the recent revisits of Seaman et al. (2013) and Doretti et al. (2018). In this paper, our goal is to posit statistical models that leverage deep learning in order to break away from these assumptions. Specifically, we propose a general recipe for dealing with cases where there is prior information about the distribution of the missing pattern given the data (e.g. self-censoring). The MAR and MCAR assumptions are violated when the missing data mechanism is dependent on the missing data themselves. This setting is called missing not at random (MNAR). Here the missing mechanism cannot be ignored; doing so will lead to biased parameter estimates. This setting generally requires a joint model for the data and the missing mechanism. Deep latent variable models (DLVMs, Kingma & Welling, 2013; Rezende et al., 2014) have recently been used for inference and imputation in missing data problems (Nazabal et al., 2020; Ma et al., 2018; 2019; Ivanov et al., 2019; Mattei & Frellsen, 2019). This has led to impressive empirical results in the MAR and MCAR case, in particular for high-dimensional data. 1.1 CONTRIBUTIONS . We introduce the not-missing-at-random importance-weighted autoencoder (not-MIWAE), which allows for the application of DLVMs to missing data problems where the missing mechanism is MNAR.
This is inspired by the missing data importance-weighted autoencoder (MIWAE, Mattei & Frellsen, 2019), a framework to train DLVMs in MAR scenarios, based itself on the importance-weighted autoencoder (IWAE) of Burda et al. (2016). The general graphical model for the not-MIWAE is shown in figure 1a. The first part of the model is simply a latent variable model: there is a stochastic mapping parameterized by $\theta$ from a latent variable $z \sim p(z)$ to the data $x \sim p_\theta(x|z)$, and the data may be partially observed. The second part of the model, which we call the missing model, is a stochastic mapping from the data to the missing mask $s \sim p_\phi(s|x)$. Explicit specification of the missing model $p_\phi(s|x)$ makes it possible to address MNAR issues. The model can be trained efficiently by maximising a lower bound of the joint likelihood (of the observed features and missing pattern) obtained via importance-weighted variational inference (Burda et al., 2016). A key difference with the MIWAE is that we use the reparameterization trick in the data space, as well as in the code space, in order to get stochastic gradients of the lower bound. Missing processes affect data analysis in a wide range of domains, and often the MAR assumption does not hold. We apply our method to censoring in datasets from the UCI database, clipping in images, and the issue of selection bias in recommender systems. 2 BACKGROUND . Assume that the complete data are stored within a data matrix $X = (x_1, \ldots, x_n)^\top \in \mathcal{X}^n$ that contains $n$ i.i.d. copies of the random variable $x \in \mathcal{X}$, where $\mathcal{X} = \mathcal{X}_1 \times \cdots \times \mathcal{X}_p$ is a $p$-dimensional feature space. For simplicity, $x_{ij}$ refers to the $j$'th feature of $x_i$, and $x_i$ refers to the $i$'th sample in the data matrix. Throughout the text, we will make statements about the random variable $x$, and only consider samples $x_i$ when necessary. In a missing data context, each sample can be split into an observed part and a missing part, $x_i = (x_i^o, x_i^m)$. The pattern of missingness is individual to each copy of $x$ and described by a corresponding mask random variable $s \in \{0, 1\}^p$. This leads to a mask matrix $S = (s_1, \ldots, s_n)^\top \in \{0, 1\}^{n \times p}$ verifying $s_{ij} = 1$ if $x_{ij}$ is observed and $s_{ij} = 0$ if $x_{ij}$ is missing. We wish to construct a parametric model $p_{\theta,\phi}(x, s)$ for the joint distribution of a single data point $x$ and its mask $s$, which can be factored as

$$p_{\theta,\phi}(x, s) = p_\theta(x)\, p_\phi(s|x). \qquad (1)$$

Here $p_\phi(s|x) = p_\phi(s|x^o, x^m)$ is the conditional distribution of the mask, which may depend on both the observed and missing data, through its own parameters $\phi$. The three assumptions from the framework of Little & Rubin (2002) (see also Ghahramani & Jordan, 1995) pertain to the specific form of this conditional distribution:
• MCAR: $p_\phi(s|x) = p_\phi(s)$,
• MAR: $p_\phi(s|x) = p_\phi(s|x^o)$,
• MNAR: $p_\phi(s|x)$ may depend on both $x^o$ and $x^m$.
To maximize the likelihood of the parameters $(\theta, \phi)$ based only on observed quantities, the missing data is integrated out from the joint distribution:

$$p_{\theta,\phi}(x^o, s) = \int p_\theta(x^o, x^m)\, p_\phi(s|x^o, x^m)\, dx^m. \qquad (2)$$

In both the MCAR and MAR cases, inference for $\theta$ using the full likelihood becomes proportional to $p_{\theta,\phi}(x^o, s) \propto p_\theta(x^o)$, and the missing mechanism can be ignored while focusing only on $p_\theta(x^o)$. In the MNAR case, the missing mechanism can depend on both observed and missing data, offering no factorization of the likelihood in equation (2).
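To make the three mechanisms concrete, here is a small NumPy sketch that generates masks under each assumption for 2-D Gaussian data; the self-censoring rule mirrors the PPCA example in section 2.1 below, while the thresholds and missingness probability are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.8], [0.8, 1.0]], size=1000)

# MCAR: the mask ignores the data entirely.
s_mcar = rng.random(X.shape) > 0.3

# MAR: feature 0 is always observed; feature 1 goes missing
# depending only on the *observed* feature 0.
s_mar = np.ones_like(X, dtype=bool)
s_mar[:, 1] = X[:, 0] < 0.5

# MNAR (self-censoring): feature 0 is missing whenever the value it
# would have had exceeds its mean -- the mask depends on the missing
# value itself, so the mechanism cannot be ignored.
s_mnar = np.ones_like(X, dtype=bool)
s_mnar[:, 0] = X[:, 0] <= X[:, 0].mean()
```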
The parameters of the data generating process and the parameters of the missing data mechanism are tied together by the missing data. 2.1 PPCA EXAMPLE . A linear DLVM with isotropic noise variance can be used to recover a model similar to probabilistic principal component analysis (PPCA, Roweis, 1998; Tipping & Bishop, 1999). In figure 1b, a dataset affected by an MNAR missing process is shown together with two fitted PPCA models: regular PPCA, and the not-MIWAE formulated as a PPCA-like model. Data is generated from a multivariate normal distribution, and an MNAR missing process is imposed by setting the horizontal coordinate to missing when it is larger than its mean, i.e. it becomes missing because of the value it would have had, had it been observed. Regular PPCA for missing data assumes that the missing mechanism is MAR, so that the missing process is ignorable. This introduces a bias, both in the estimated mean and in the estimated principal signal direction of the data. The not-MIWAE PPCA assumes the missing mechanism is MNAR, so the data generating process and missing data mechanism are modelled jointly, as described in equation (2). 2.2 PREVIOUS WORK . Rubin (1976) introduced and formalized the conditions under which the missing process can appropriately be ignored when doing likelihood-based or Bayesian inference. The introduction of the EM algorithm (Dempster et al., 1977) made it feasible to obtain maximum likelihood estimates in many missing data settings, see e.g. Ghahramani & Jordan (1994; 1995); Little & Rubin (2002). Sampling methods such as Markov chain Monte Carlo have made it possible to sample a target posterior in Bayesian models, including the missing data, so that parameter marginal distributions and missing data marginal distributions are available directly (Gelman et al., 2013). This is also the starting point of the multiple imputations framework of Rubin (1977; 1996). Here the samples of the missing data are used to provide several realisations of complete datasets, where complete-data methods can be applied to get combined mean and variability estimates. The framework of Little & Rubin (2002) is instructive in how to handle MNAR problems, and a recent review of MNAR methods can be found in (Tang & Ju, 2018). Low-rank models were used for estimation and imputation in MNAR settings by Sportisse et al. (2020a). Two approaches were taken to fitting models: 1) maximising the joint distribution of data and missing mask using an EM algorithm, and 2) implicitly modelling the joint distribution by concatenating the data matrix and the missing mask and working with this new matrix. This implies a latent representation giving rise to both the data and the mask. An overview of estimation methods for PCA and PPCA with missing data was given by Ilin & Raiko (2010), while PPCA in the presence of an MNAR missing mechanism has been addressed by Sportisse et al. (2020b). There has been some focus on MNAR issues in the form of selection bias within the recommender system community (Marlin et al., 2007; Marlin & Zemel, 2009; Steck, 2013; Hernández-Lobato et al., 2014; Schnabel et al., 2016; Wang et al., 2019), where methods applied range from joint modelling of data and missing model using multinomial mixtures and matrix factorization to debiasing existing methods using propensity-based techniques from causality.
Deep latent variable models are intuitively appealing in a missing-data context: the generative part of the model can be used to sample the missing part of an observation. This was already utilized by Rezende et al. (2014) to do imputation and denoising by sampling from a Markov chain whose stationary distribution is approximately the conditional distribution of the missing data given the observed. This procedure has been enhanced by Mattei & Frellsen (2018a) using Metropolis-within-Gibbs. In both cases the experiments assumed MAR, and a fitted model, based on complete data, was already available. Approaches to fitting DLVMs in the presence of missing data have recently been suggested, such as the HI-VAE by Nazabal et al. (2020), using an extension of the variational autoencoder (VAE) lower bound; the p-VAE by Ma et al. (2018; 2019), using the VAE lower bound and a permutation-invariant encoder; the MIWAE by Mattei & Frellsen (2019), extending the IWAE lower bound (Burda et al., 2016); and GAIN (Yoon et al., 2018), using GANs for missing data imputation. All these approaches assume that the missing process is MAR or MCAR. In (Gong et al., 2020), the data and missing mask are modelled together, as both being generated by a mapping from the same latent space, thereby tying the data model and missing process together. This gives more flexibility in terms of missing-process assumptions, akin to the matrix factorization approach by Sportisse et al. (2020a). In concurrent work, Collier et al. (2020) have developed a deep generative model of the observed data conditioned on the mask random variable, and Lim et al. (2021) apply a model similar to the not-MIWAE to electronic health records data. In forthcoming work, Ghalebikesabi et al. (2021) propose a deep generative model for non-ignorable missingness building on ideas from VAEs and pattern-set mixture models.
This paper proposes an approach to training deep latent variable models on data that is missing not at random. To learn the parameters of deep latent variable models, the paper adopts importance-weighted variational inference techniques. Experiments on a variety of datasets show that explicitly modeling the missing-not-at-random mechanism makes the proposed approach effective.
SP:1d4d75e1bbb4e58273bc027f004aa986a587a6dd
Sparse Gaussian Process Variational Autoencoders
1 INTRODUCTION . Increasing amounts of large, multi-dimensional datasets that exhibit strong spatio-temporal dependencies are arising from a wealth of domains, including earth, social and environmental sciences (Atluri et al., 2018). For example, consider modelling daily atmospheric measurements taken by weather stations situated across the globe. Such data are (1) large in number; (2) subject to strong spatio-temporal dependencies; (3) multi-dimensional; and (4) non-Gaussian, with complex dependencies across outputs. There exist two venerable approaches for handling these characteristics: Gaussian process (GP) regression and deep generative models (DGMs). GPs provide a framework for encoding high-level assumptions about latent processes, such as smoothness or periodicity, making them effective in handling spatio-temporal dependencies. Yet existing approaches do not support the use of flexible likelihoods necessary for modelling complex multi-dimensional outputs. In contrast, DGMs support the use of flexible likelihoods; however, they do not provide a natural route through which spatio-temporal dependencies can be encoded. The amalgamation of GPs and DGMs, GP-DGMs, uses latent functions drawn independently from GPs, which are then passed through a DGM at each input location. GP-DGMs combine the complementary strengths of both approaches, making them naturally suited for modelling spatio-temporal datasets. Intrinsic to the application of many spatio-temporal datasets is the notion of tasks. For instance: medicine has individual patients; each trial in a scientific experiment produces an individual dataset; and, in the case of a single large dataset, it is often convenient to split it into separate tasks to improve computational efficiency. GP-DGMs support the presence of multiple tasks in a memory-efficient way through the use of amortisation, giving rise to the Gaussian process variational autoencoder (GP-VAE), a model that has recently gained considerable attention from the research community (Pearce, 2020; Fortuin et al., 2020; Casale et al., 2018; Campbell & Liò, 2020; Ramchandran et al., 2020). However, previous work does not support sparse GP approximations based on inducing points, a necessity for modelling even moderately sized datasets. Furthermore, many spatio-temporal datasets contain an abundance of missing data: weather measurements are often absent due to sensor failure, and in medicine only single measurements are taken at any instance. Handling partial observations in a principled manner is essential for modelling spatio-temporal data, but is yet to be considered. Our key technical contributions are as follows: i) We develop the sparse GP-VAE (SGP-VAE), which uses inference networks to parameterise multi-output sparse GP approximations. ii) We employ a suite of partial inference networks for handling missing data in the SGP-VAE. iii) We conduct a rigorous evaluation of the SGP-VAE in a variety of experiments, demonstrating excellent performance relative to existing multi-output GPs and structured VAEs. 2 A FAMILY OF SPATIO-TEMPORAL VARIATIONAL AUTOENCODERS . Consider the multi-task regression problem in which we wish to model $T$ datasets $\mathcal{D} = \{\mathcal{D}^{(t)}\}_{t=1}^{T}$, each of which comprises input/output pairs $\mathcal{D}^{(t)} = \{x_n^{(t)}, y_n^{(t)}\}_{n=1}^{N_t}$, $x_n^{(t)} \in \mathbb{R}^D$ and $y_n^{(t)} \in \mathbb{R}^P$.
Further, let any possible permutation of observed values be potentially missing, such that each observation $y_n^{(t)} = y_n^{o(t)} \cup y_n^{u(t)}$ contains a set of observed values $y_n^{o(t)}$ and unobserved values $y_n^{u(t)}$, with $O_n^{(t)}$ denoting the index set of observed values. For each task, we model the distribution of each observation $y_n^{(t)}$, conditioned on a corresponding latent variable $f_n^{(t)} \in \mathbb{R}^K$, as a fully-factorised Gaussian distribution parameterised by passing $f_n^{(t)}$ through a decoder deep neural network (DNN) with parameters $\theta_2$. The elements of $f_n^{(t)}$ correspond to the evaluation of a $K$-dimensional latent function $f^{(t)} = (f_1^{(t)}, f_2^{(t)}, \ldots, f_K^{(t)})$ at input $x_n^{(t)}$. That is, $f_n^{(t)} = f^{(t)}(x_n^{(t)})$. Each latent function $f^{(t)}$ is modelled as being drawn from $K$ independent GP priors with hyper-parameters $\theta_1 = \{\theta_{1,k}\}_{k=1}^{K}$, giving rise to the complete probabilistic model:

$$f^{(t)} \sim \prod_{k=1}^{K} \mathcal{GP}\big(0,\, k_{\theta_{1,k}}(x, x')\big), \qquad y^{(t)} \,|\, f^{(t)} \sim \prod_{n=1}^{N_t} \mathcal{N}\big(\mu_{\theta_2}^{o}(f_n^{(t)}),\; \mathrm{diag}\, \sigma_{\theta_2}^{o\,2}(f_n^{(t)})\big) \qquad (1)$$

where $\mu_{\theta_2}^{o}(f_n^{(t)})$ and $\sigma_{\theta_2}^{o\,2}(f_n^{(t)})$ are the outputs of the decoder indexed by $O_n^{(t)}$. We shall refer to the set $\theta = \{\theta_1, \theta_2\}$ as the model parameters, which are shared across tasks. The probabilistic model in equation 1 explicitly accounts for dependencies between latent variables through the GP prior. The motive of the latent structure is twofold: to discover a simpler representation of each observation, and to capture the dependencies between observations at different input locations. 2.1 MOTIVATION FOR SPARSE APPROXIMATIONS AND AMORTISED INFERENCE . The use of amortised inference in DGMs and sparse approximations in GPs enables inference in these respective models to scale to large quantities of data. To ensure the same for the GP-DGM described in equation 1, we require the use of both techniques. In particular, amortised inference is necessary to prevent the number of variational parameters scaling with $\mathcal{O}(\sum_{t=1}^{T} N^{(t)})$. Further, the inference network can be used to condition on previously unobserved data without needing to learn new variational parameters. Similarly, sparse approximations are necessary to prevent the computational complexity increasing cubically with the size of each task, $\mathcal{O}(\sum_{t=1}^{T} N^{(t)\,3})$. Unfortunately, it is far from straightforward to combine sparse approximations and amortised inference in a computationally efficient way. To see this, consider the standard form for the sparse GP approximate posterior, $q(f) = p_{\theta_1}(f_{\setminus u} \,|\, u)\, q(u)$ where $q(u) = \mathcal{N}(u; m, S)$, with $m$, $S$ and $Z$, the inducing point locations, being the variational parameters. $q(u)$ does not decompose into a product over $N^{(t)}$ factors and is therefore not amenable to per-datapoint amortisation. That is, $m$ and $S$ must be optimised as free-form variational parameters. A naïve approach to achieving per-datapoint amortisation is to decompose $q(u)$ into the prior $p_{\theta_1}(u)$ multiplied by the product of approximate likelihoods, one for each inducing point. Each approximate likelihood is itself equal to the product of per-datapoint approximate likelihoods, which depend on both the observation $y_n^o$ and the distance of the input $x_n$ to that of the inducing point.
An inference network which takes these two values as inputs can be used to obtain the parameters of the approximate likelihood factors. Whilst we found that this approach worked, it is somewhat unprincipled. Moreover, it requires passing each datapoint/inducing-point pair through an inference network, which scales very poorly. In the following section, we introduce a theoretically principled decomposition of $q(f)$, which we term the sparse structured approximate posterior, that enables efficient amortisation. 2.2 THE SPARSE STRUCTURED APPROXIMATE POSTERIOR . By simultaneously leveraging amortised inference and sparse GP approximations, we can perform efficient and scalable approximate inference. We specify the sparse structured approximate posterior, $q(f^{(t)})$, which approximates the intractable true posterior for task $t$:

$$p_\theta(f^{(t)} \,|\, y^{(t)}, X^{(t)}) = \frac{1}{Z_p}\, p_{\theta_1}(f^{(t)}) \prod_{n=1}^{N_t} p_{\theta_2}(y_n^{o(t)} \,|\, f^{(t)}, x_n^{(t)}, O_n^{(t)}) \;\approx\; \frac{1}{Z_q}\, p_{\theta_1}(f^{(t)}) \prod_{n=1}^{N_t} l_{\phi_l}(u; y_n^{o(t)}, x_n^{(t)}, Z) = q(f^{(t)}). \qquad (2)$$

Analogous to its presence in the true posterior, the approximate posterior retains the GP prior, yet replaces each non-conjugate likelihood factor with an approximate likelihood, $l_{\phi_l}(u; y_n^{o(t)}, x_n^{(t)}, Z)$, over a set of $KM$ 'inducing points', $u = \cup_{k=1}^{K} \cup_{m=1}^{M} u_{mk}$, at 'inducing locations', $Z = \cup_{k=1}^{K} \cup_{m=1}^{M} z_{mk}$. For tractability, we restrict the approximate likelihoods to be Gaussians factorised across each latent dimension, parameterised by passing each observation through a partial inference network:

$$l_{\phi_l}(u_k; y_n^{o(t)}, x_n^{(t)}, Z_k) = \mathcal{N}\big(\mu_{\phi_l,k}(y_n^{o(t)});\; k_{f_{nk}^{(t)} u_k} K_{u_k u_k}^{-1} u_k,\; \sigma_{\phi_l,k}^2(y_n^{o(t)})\big) \qquad (3)$$

where $\phi_l$ denotes the weights and biases of the partial inference network, whose outputs are the mean and variance parameters $\mu_{\phi_l,k}(\cdot)$ and $\sigma_{\phi_l,k}^2(\cdot)$. This form is motivated by the work of Bui et al. (2017), who demonstrate the optimality of approximate likelihoods of the form $\mathcal{N}(g_n;\, k_{f_{nk}^{(t)} u_k} K_{u_k u_k}^{-1} u_k,\, v_n)$, a result we prove in Appendix A.1. Whilst, in general, the optimal free-form values of $g_n$ and $v_n$ depend on all of the data points, we make the simplifying assumption that they depend only on $y_n^{o(t)}$. For GP regression with Gaussian noise, this assumption holds true, as $g_n = y_n$ and $v_n = \sigma_y^2$ (Bui et al., 2017). The resulting approximate posterior can be interpreted as the exact posterior induced by a surrogate regression problem, in which 'pseudo-observations' $g_n$ are produced from a linear transformation of inducing points with additive 'pseudo-noise' $v_n$: $g_n = k_{f_{nk}^{(t)} u_k} K_{u_k u_k}^{-1} u_k + \sqrt{v_n}\,\epsilon_n$, with $\epsilon_n$ standard Gaussian. The inference network learns to construct this surrogate regression problem such that it results in a posterior that is close to our target posterior. By sharing variational parameters $\phi = \{\phi_l, Z\}$ across tasks, inference is amortised across both datapoints and tasks. The approximate posterior for a single task corresponds to the product of $K$ independent GPs, with mean and covariance functions

$$\hat{m}_k^{(t)}(x) = k_{f_k^{(t)} u_k}\, \Phi_k^{(t)}\, K_{u_k f_k^{(t)}}\, \Sigma_{\phi_l,k}^{(t)\,-1}\, \mu_{\phi_l,k}^{(t)}, \qquad \hat{k}_k^{(t)}(x, x') = k_{f_k^{(t)} f_k'^{(t)}} - k_{f_k^{(t)} u_k} K_{u_k u_k}^{-1} k_{u_k f_k'^{(t)}} + k_{f_k^{(t)} u_k}\, \Phi_k^{(t)}\, k_{u_k f_k'^{(t)}} \qquad (4)$$

where $\Phi_k^{(t)\,-1} = K_{u_k u_k} + K_{u_k f_k^{(t)}} \Sigma_{\phi_l,k}^{(t)\,-1} K_{f_k^{(t)} u_k}$, $[\mu_{\phi_l,k}^{(t)}]_i = \mu_{\phi_l,k}(y_i^{o(t)})$ and $[\Sigma_{\phi_l,k}^{(t)}]_{ij} = \delta_{ij}\, \sigma_{\phi_l,k}^2(y_i^{o(t)})$. See Appendix A.2 for a complete derivation.
The computational complexity associated with evaluating the mean and covariance functions is $\mathcal{O}(KM^2 N^{(t)})$, a significant improvement over the $\mathcal{O}(P^3 N^{(t)\,3})$ cost associated with exact multi-output GPs when $KM^2 \ll P^3 N^{(t)\,2}$. We refer to the combination of the aforementioned probabilistic model and sparse structured approximate posterior as the SGP-VAE. The SGP-VAE addresses three major shortcomings of existing sparse GP frameworks. First, the inference network can be used to condition on previously unobserved data without needing to learn new variational parameters. Suppose we use the standard sparse GP variational approximation $q(f) = p_{\theta_1}(f_{\setminus u} \,|\, u)\, q(u)$ where $q(u) = \mathcal{N}(u; m, S)$. If more data are observed, $m$ and $S$ have to be re-optimised. When an inference network is used to parameterise $q(u)$, the approximate posterior is 'automatically' updated by mapping from the new observations to their corresponding approximate likelihood terms. Second, the complexity of the approximate posterior can be modified as desired with no changes to the inference network, or additional training, necessary: any change in the morphology of inducing points corresponds to a deterministic transformation of the inference network outputs. Third, if the inducing point locations are fixed, then the number of variational parameers does not depend on the size of the dataset, even as more inducing points are added. This contrasts with the standard approach, in which new variational parameters are appended to $m$ and $S$.
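A NumPy sketch of the posterior mean and covariance in equation (4) for a single latent GP may help fix ideas; the RBF kernel, the jitter constant, and the inference-network outputs `mu`, `sigma2` are stand-ins for the paper's learned components, not the authors' code:

```python
import numpy as np

def rbf(a, b, lengthscale=1.0):
    # Squared-exponential kernel between row-vector inputs a (N,D), b (M,D)
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def sparse_posterior(x_star, X, Z, mu, sigma2, jitter=1e-6):
    """mu, sigma2: per-datapoint approximate-likelihood parameters
    produced by the (partial) inference network; Z: inducing locations."""
    M = Z.shape[0]
    Kuu = rbf(Z, Z) + jitter * np.eye(M)   # jitter for numerical stability
    Kuf, Ksu, Kss = rbf(Z, X), rbf(x_star, Z), rbf(x_star, x_star)
    Sinv = np.diag(1.0 / sigma2)
    # Phi^{-1} = Kuu + Kuf Sigma^{-1} Kfu  -- the O(K M^2 N) bottleneck
    Phi = np.linalg.inv(Kuu + Kuf @ Sinv @ Kuf.T)
    mean = Ksu @ Phi @ Kuf @ Sinv @ mu
    cov = Kss - Ksu @ np.linalg.inv(Kuu) @ Ksu.T + Ksu @ Phi @ Ksu.T
    return mean, cov
```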
In this work, generative models using a GP as prior and a deep network as likelihood (GP-DGMs) are considered. In the VAE formalism for inference, the novelty of this paper lies in the encoder: it is sparse, and the posterior can be computed even when part of the observations are missing. Sparsity is obtained using inducing inputs, and the missing observations are handled through the use of deep sets, i.e. the observations aren't given as a vector, but as a permutation-invariant set of (index, value) pairs.
SP:da630280f443afedfacaf7ad1abe20d97ebb60f2
End-to-End Egospheric Spatial Memory
1 INTRODUCTION . Egocentric spatial memory is central to our understanding of spatial reasoning in biology ( Klatzky , 1998 ; Burgess , 2006 ) , where an embodied agent constantly carries with it a local map of its surrounding geometry . Such representations have particular significance for action selection and motor control ( Hinman et al. , 2019 ) . For robotics and embodied AI , the benefits of a persistent local spatial memory are also clear . Such a system has the potential to run for long periods , and bypass both the memory and runtime complexities of large scale world-centric mapping . Peters et al . ( 2001 ) propose an EgoSphere as being a particularly suitable representation for robotics , and more recent works have utilized ego-centric formulations for planar robot mapping ( Fankhauser et al. , 2014 ) , drone obstacle avoidance ( Fragoso et al. , 2018 ) and mono-to-depth ( Liu et al. , 2019 ) . In parallel with these ego-centric mapping systems , a new paradigm of differentiable memory architectures has arisen , where a memory bank is augmented to a neural network , which can then learn read and write operations ( Weston et al. , 2014 ; Graves et al. , 2014 ; Sukhbaatar et al. , 2015 ) . When compared to Recurrent Neural Networks ( RNNs ) , the persistent memory circumvents issues of vanishing or exploding gradients , enabling solutions to long-horizon tasks . These have also been applied to visuomotor control and navigation tasks ( Wayne et al. , 2018 ) , surpassing baselines such as the ubiquitous Long Short-Term Memory ( LSTM ) ( Hochreiter & Schmidhuber , 1997 ) . We focus on the intersection of these two branches of research , and propose Egospheric Spatial Memory ( ESM ) , a parameter-free module which encodes geometric and semantic information about the scene in an ego-sphere around the agent . To the best of our knowledge , ESM is the first end-to-end trainable egocentric memory with a full panoramic representation , enabling direct encoding of the surrounding scene in a 2.5D image . We also show that by propagating gradients through the ESM computation graph we can learn features to be stored in the memory . We demonstrate the superiority of learning features through the ESM module on both target shape reaching and object segmentation tasks . For other visuomotor control tasks , we show that even without learning features through the module , and instead directly projecting image color values into memory , ESM consistently outperforms other memory baselines . Through these experiments , we show that the applications of our parameter-free ESM module are widespread , where it can either be dropped into existing pipelines as a non-learned module , or end-to-end trained in a larger computation graph , depending on the task requirements . 2 RELATED WORK . 2.1 MAPPING . Geometric mapping is a mature field , with many solutions available for constructing high quality maps . Such systems typically maintain an allocentric map , either by projecting points into a global world co-ordinate system ( Newcombe et al. , 2011 ; Whelan et al. , 2015 ) , or by maintaining a certain number of keyframes in the trajectory history ( Zhou et al. , 2018 ; Bloesch et al. , 2018 ) . If these systems are to be applied to life-long embodied AI , then strategies are required to effectively select the parts of the map which are useful , and discard the rest from memory ( Cadena et al. , 2016 ) . For robotics applications , prioritizing geometry in the immediate vicinity is a sensible prior . 
Rather than taking a world-view to map construction , such systems often formulate the mapping problem in a purely ego-centric manner , performing continual re-projection to the newest frame and pose with fixed-sized storage . Unlike allocentric formulations , the memory indexing is then fully coupled to the agent pose , resulting in an ordered representation particularly well suited for downstream egocentric tasks , such as action selection . Peters et al . ( 2001 ) outline an EgoSphere memory structure as being suitable for humanoid robotics , with indexing via polar and azimuthal angles . Fankhauser et al . ( 2014 ) use ego-centric height maps , and demonstrate on a quadrupedal robot walking over obstacles . Cigla et al . ( 2017 ) use per-pixel depth Gaussian Mixture Models ( GMMs ) to maintain an ego-cylinder of belief around a drone , with applications to collision avoidance ( Fragoso et al. , 2018 ) . In a different application , Liu et al . ( 2019 ) learn to predict depth images from a sequence of RGB images , again using ego reprojections . These systems are all designed to represent only at the level of depth and RGB features . For mapping more expressive implicit features via end-to-end training , a fully differentiable long-horizon computation graph is required . Any computation graph which satisfies this requirement is generally referred to as memory in the neural network literature . 2.2 MEMORY . The concept of memory in neural networks is deeply coupled with recurrence . Naive recurrent networks have vanishing and exploding gradient problems ( Hochreiter , 1998 ) , which LSTMs ( Hochreiter & Schmidhuber , 1997 ) and Gated Recurrent Units ( GRUs ) ( Cho et al. , 2014 ) mediate using additive gated structures . More recently , dedicated differentiable memory blocks have become a popular alternative . Weston et al . ( 2014 ) applied Memory Networks ( MemNN ) to question answering , using hard read-writes and separate training of components . Graves et al . ( 2014 ) and Sukhbaatar et al . ( 2015 ) instead made the read and writes ‘ soft ’ with the proposal of Neural Turing Machines ( NTM ) and End-to-End Memory Networks ( MemN2N ) respectively , enabling joint training with the controller . Other works have since conditioned dynamic memory on images , for tasks such as visual question answering ( Xiong et al. , 2016 ) and object segmentation ( Oh et al. , 2019 ) . Another distinct but closely related approach is self attention ( Vaswani et al. , 2017 ) . These approaches also use key-based content retrieval , but do so on a history of previous observations with adjacent connectivity . Despite the lack of geometric inductive bias , recent results demonstrate the amenability of general memory ( Wayne et al. , 2018 ) and attention ( Parisotto et al. , 2019 ) to visuomotor control and navigation tasks . Other authors have explored the intersection of network memory and spatial mapping for navigation , but have generally been limited to 2D aerial-view maps , focusing on planar navigation tasks . Gupta et al . ( 2017 ) used an implicit ego-centric memory which was updated with warping and confidence maps for discrete action navigation problems . Parisotto & Salakhutdinov ( 2017 ) proposed a similar setup , but used dedicated learned read and write operations for updates , and tested on simulated Doom environments . 
Without consideration for action selection, Henriques & Vedaldi (2018) proposed a similar system, but instead used an allocentric formulation, and tested on free-form trajectories of real images. Zhang et al. (2018) also propose a similar system, but with the inclusion of loop closure. Our memory instead focuses on local perception, with the ability to represent detailed 3D geometry in all directions around the agent. The benefits of our module are complementary to existing 2D methods, which instead focus on occlusion-aware planar understanding suitable for navigation. 3 METHOD . In this section, we describe our main contribution, the egospheric spatial memory (ESM) module, shown in Figure 1. The module operates as an Extended Kalman Filter (EKF), with an egosphere image $\mu_t \in \mathbb{R}^{h_s \times w_s \times (2+1+n)}$ and its diagonal covariance $\Sigma_t \in \mathbb{R}^{h_s \times w_s \times (1+n)}$ representing the state. The egosphere image consists of 2 channels for the polar and azimuthal angles, 1 for radial depth, and $n$ for encoded features. The angles are not included in the covariance, as their values are implicit in the egosphere image pixel indices. The covariance only represents the uncertainty in depth and features at these fixed equidistant indices, and diagonal covariance is assumed due to the large state size of the images. Image measurements are assumed to come from projective depth cameras, which similarly store 1 channel for depth and $n$ for encoded features. We also assume incremental agent pose measurements $u_t \in \mathbb{R}^6$ with covariance $\Sigma_{u_t} \in \mathbb{R}^{6 \times 6}$ are available, in the form of a translation and rotation vector. The algorithm overview is presented in Algorithm 1. Finally, the update step takes our state prediction $\bar\mu_t, \bar\Sigma_t$ and state observation $\hat\mu_t, \hat\Sigma_t$, and fuses them to produce our new state belief $\mu_t, \Sigma_t$. We spend the remainder of this section explaining the form of the constituent functions. All functions in Algorithm 1 involve re-projections across different image frames, using forward warping. Functions $f_m$, $F_m$, $f_o$ and $F_o$ are therefore all built using the same core functions. While the re-projections could be solved using a typical rendering pipeline of mesh construction followed by rasterization, we instead choose a simpler approach and directly quantize the pixel projections, with variance-based image smoothing to fill in quantization holes. An overview of the projection and quantization operations for a single ESM update step is shown in Fig. 1. 3.1 FORWARD WARPING . Forward warping projects ordered equidistant homogeneous pixel co-ordinates $p_{c_{f1}}$ from frame $f1$ to non-ordered, non-equidistant homogeneous pixel co-ordinates $\tilde{p}_{c_{f2}}$ in frame $f2$. We use $\tilde\mu_{f2} = \{\tilde\phi_{f2}, \tilde\theta_{f2}, \tilde d_{f2}, \tilde e_{f2}\}$ to denote the loss of ordering following projection from $\mu_{f1} = \{\phi_{f1}, \theta_{f1}, d_{f1}, e_{f1}\}$, where $\phi$, $\theta$, $d$ and $e$ represent polar angles, azimuthal angles, depth and encoded features respectively. We only consider warping from projective to omni cameras, which corresponds to functions $f_o$, $F_o$; the omni-to-omni case, as in $f_m$, $F_m$, is identical except with the inclusion of another polar co-ordinate transformation. The encoded features are assumed constant during projection, $\tilde e_{f2} = e_{f1}$. For depth, we must transform the values to the new frame in polar co-ordinates, which is a composition of a linear transformation and a non-linear polar conversion.
Using the camera intrinsic matrix $K_1$, the full projection is composed of a scalar multiplication with the homogeneous pixel co-ordinates $p_{c_{f1}}$, transformation by the camera inverse matrix $K_1^{-1}$ and frame-to-frame matrix $T_{12}$, and polar conversion $f_p$:

$$\{\tilde\phi_{f2},\, \tilde\theta_{f2},\, \tilde d_{f2}\} = f_p\big(T_{12}\, K_1^{-1}\, [p_{c_{f1}}\, d_{f1}]\big) \qquad (1)$$

Combined, this provides us with both the forward-warped image $\tilde\mu_{f2} = \{\tilde\phi_{f2}, \tilde\theta_{f2}, \tilde d_{f2}, \tilde e_{f2}\}$ and the newly projected homogeneous pixel co-ordinates $\tilde{p}_{c_{f2}} = \{k_{ppr}\tilde\phi_{f2},\, k_{ppr}\tilde\theta_{f2},\, 1\}$, where $k_{ppr}$ denotes the pixels-per-radian resolution constant. The variances are also projected using the full analytic Jacobians, which are efficiently implemented as tensor operations, avoiding costly autograd usage:

$$\hat{\tilde\Sigma}_2 = J_V V_1 J_V^T + J_P P_{12} J_P^T \qquad (2)$$

3.2 QUANTIZATION, FUSION AND SMOOTHING . Following projection, we first quantize the floating-point pixel coordinates $\tilde{p}_{c_{f2}}$ into integer pixel co-ordinates $p_{c_{f2}}$. This in general leads to quantization holes and duplicates. The duplicates are handled with a variance-conditioned depth buffer, such that the closest projected depth is used, provided that its variance is lower than a set threshold. This in general prevents highly uncertain close depth values from overwriting highly certain far values. We then perform per-pixel fusion based on lines 6 and 7 in Algorithm 1, provided the depths fall within a set relative threshold; otherwise the minimum depth with sufficiently low variance is taken. This again acts as a depth buffer. Finally, we perform variance-based image smoothing, whereby we treat each $N \times N$ image patch $(\mu_{k,l})_{k \in \{1,..,N\},\, l \in \{1,..,N\}}$ as a collection of independent measurements of the central pixel, and combine their variance values based on central limit theory, resulting in smoothed values for each pixel in the image $\mu_{i,j}$. Although we use this to update the mean belief, we do not smooth the variance values, meaning projection holes remain at prior variance. This prevents the smoothing from distorting our belief during subsequent projections, and makes the smoothing inherently local to the current frame only. The smoothing formula is as follows, with variance here denoted as $\sigma^2$:

$$\mu_{i,j} = \frac{\sum_k \sum_l \mu_{k,l} \cdot \sigma_{k,l}^{-2}}{\sum_k \sum_l \sigma_{k,l}^{-2}} \qquad (3)$$

Given that the quantization is a discrete operation, we cannot compute its analytic Jacobian for uncertainty propagation. We therefore approximate the added quantization uncertainty using the numerical pixel gradients of the newly smoothed image $G_{i,j}$, and assume additive noise proportional to the x and y quantization distances $\Delta p_{c_{i,j}}$:

$$\Sigma_{i,j} = \tilde\Sigma_{i,j} + G_{i,j}\, \Delta p_{c_{i,j}} \qquad (4)$$
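To make the projection pipeline concrete, here is a hedged NumPy sketch of the forward warp in equation (1) and the variance-based smoothing in equation (3); the matrix shapes, the polar-angle convention, and the naive per-pixel loop are our assumptions, not the paper's optimized tensor implementation:

```python
import numpy as np

def forward_warp(pix_h, depth, K1_inv, T12):
    """Eq. (1): lift homogeneous pixels (3, N) with depths (N,) into 3-D,
    transform them to frame f2, and convert to polar (phi, theta, d)."""
    pts = K1_inv @ (pix_h * depth)              # back-project to 3-D in f1
    pts = T12[:3, :3] @ pts + T12[:3, 3:4]      # rigid transform to f2
    d = np.linalg.norm(pts, axis=0)
    phi = np.arccos(pts[2] / d)                 # polar angle
    theta = np.arctan2(pts[1], pts[0])          # azimuthal angle
    return phi, theta, d

def variance_smooth(mu, var, N=3):
    """Eq. (3): treat each NxN patch as independent measurements of its
    centre pixel and combine them with inverse-variance weights."""
    h, w = mu.shape
    out = np.empty_like(mu)
    r = N // 2
    for i in range(h):
        for j in range(w):
            sl = (slice(max(i - r, 0), i + r + 1),
                  slice(max(j - r, 0), j + r + 1))
            w_ = 1.0 / var[sl]
            out[i, j] = (mu[sl] * w_).sum() / w_.sum()
    return out
```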
The paper considers the problem of creating spatial memory representations, which play important roles in robotics and are crucial for real-world applications of intelligent agents. The paper proposes an ego-centric representation that stores depth values and features at each pixel in a panorama. Given the relative pose between frames, the representation from the previous frame is transformed via forward warping (using known depth values) to the viewpoint of the current frame. The proposed approach has no learnable parameters. Experiments on a wide range of tasks show that the proposed approach outperforms baselines such as LSTM and NTM.
SP:30ceb5d450760e9954ac86f091fb97cb14a2d092
Training Invertible Linear Layers through Rank-One Perturbations
1 INTRODUCTION Many deep learning applications depend critically on the neural network parameters having a certain mathematical structure. As an important example, reversible generative models rely on invertibility and, in the case of normalizing flows, efficient inversion and computation of the Jacobian determinant (Papamakarios et al., 2019). Preserving parameter properties during training can be challenging, and many approaches are currently in use. The most basic way of incorporating constraints is by network design. Many examples could be listed, like defining convolutional layers to obtain equivariances, constraining network outputs to certain intervals through bounded activation functions, Householder flows (Tomczak & Welling, 2016) to enforce layer-wise orthogonality, or coupling layers (Dinh et al., 2014; 2016) that enforce tractable inversion through their two-channel structure. A second approach concerns the optimizers used for training. Optimization routines have been tailored, for example, to maintain Lipschitz bounds (Yoshida & Miyato, 2017) or to efficiently optimize orthogonal linear layers (Choromanski et al., 2020). The present work introduces a novel algorithmic concept for training invertible linear layers that facilitates tractable inversion and determinant computation, see Figure 1. In lieu of directly changing the network parameters, the optimizer operates on perturbations to these parameters. The actual network parameters are frozen, while a parameterized perturbation (a rank-one update to the frozen parameters) serves as a proxy for optimization. Inputs are passed through the perturbed network during training. In regular intervals, the perturbed parameters are merged into the actual network and the perturbation is reset to the identity. This stepwise optimization approach will be referred to as property-preserving parameter perturbation, or P4 update. A similar concept was introduced recently by Lezcano-Casado (2019), who used dynamic trivializations for optimization on manifolds. In this work, we use P4 training to optimize invertible linear layers while keeping track of their inverses and determinants using rank-one updates. Previous work (see Section 2) has mostly focused on optimizing orthogonal matrices, which can be trivially inverted and have unit determinant. Only most recently, Gresele et al. (2020) presented a first method to optimize general invertible matrices implicitly using relative gradients, thereby providing greater flexibility and expressivity. While their scheme implicitly tracks the weight matrices' determinants, it does not facilitate cheap inversion. In contrast, the present P4Inv layers are inverted at the cost of roughly three matrix-vector multiplications. P4Inv layers can approximate arbitrary invertible matrices $A \in GL(n)$. Interestingly, our stepwise perturbation even allows sign changes in the determinants and recovers the correct inverse after emerging from the ill-conditioned regime. Furthermore, it avoids any explicit computations of inverses or determinants. All operations occurring in optimization steps have complexity of at most $\mathcal{O}(n^2)$. To our knowledge, the present method is the first to feature these desirable properties. We show how P4Inv blocks can be utilized in normalizing flows by combining them with nonlinear, bijective activation functions and with coupling layers.
The resulting neural networks are validated for density estimation and as deep generative models. Finally, we outline potential applications of P4 training to network properties other than invertibility. 2 BACKGROUND AND RELATED WORK . 2.1 RANK-ONE PERTURBATION . The P4Inv layers are based on rank-one updates, which are defined as transformations $A \mapsto A + uv^T$ with $u, v \in \mathbb{R}^n$. If $A \in GL(n)$ and $1 + v^T A^{-1} u \neq 0$, the updated matrix is also invertible and its inverse can be computed by the Sherman-Morrison formula

$$(A + uv^T)^{-1} = A^{-1} - \frac{1}{1 + v^T A^{-1} u}\, A^{-1} u v^T A^{-1}. \qquad (1)$$

Furthermore, the determinant is given by the matrix determinant lemma

$$\det(A + uv^T) = (1 + v^T A^{-1} u)\, \det(A). \qquad (2)$$

Both these equations are widely used in numerical mathematics, since they sidestep the $\mathcal{O}(n^3)$ cost and poor parallelization of both matrix inversion and determinant computation. The present work leverages these perturbation formulas to keep track of the inverses and determinants of weight matrices during training of invertible neural networks. 2.2 EXISTING APPROACHES FOR TRAINING INVERTIBLE LINEAR LAYERS . Maintaining invertibility of linear layers has been studied in the context of convolution operators (Kingma & Dhariwal, 2018; Karami et al., 2019; Hoogeboom et al., 2019; 2020) and using Sylvester's theorem (Van Den Berg et al., 2018). Those approaches often involve decompositions that include triangular matrices (Papamakarios et al., 2019). While inverting triangular matrices has quadratic computational complexity, it is inherently sequential and thus fairly inefficient on parallel computers (see Section 4.1). More closely related to our work, Gresele et al. (2020) introduced a relative gradient optimization scheme for invertible matrices. In contrast to this related work, our method facilitates a cheap inverse pass and allows sign changes in the determinant. On the contrary, their method operates in a higher-dimensional search space, which might speed up the optimization in tasks that do not involve inversion during training. 2.3 NORMALIZING FLOWS . Cheap inversion and determinant computation are specifically important in the context of normalizing flows, see Appendix C. They were introduced in Tabak et al. (2010); Tabak & Turner (2013) and are commonly used, either in variational inference (Rezende & Mohamed, 2015; Tomczak & Welling, 2016; Louizos & Welling, 2017; Van Den Berg et al., 2018) or for approximate sampling from distributions given by an energy function (van den Oord et al., 2018; Müller et al., 2019; Noé et al., 2019; Köhler et al., 2020). The most important normalizing flow architectures are (1) coupling layers (Dinh et al., 2014; 2016; Kingma & Dhariwal, 2018; Müller et al., 2019), which are a subclass of autoregressive flows (Germain et al., 2015; Papamakarios et al., 2017; Huang et al., 2018; De Cao et al., 2019), and (2) residual flows (Chen et al., 2018; Zhang et al., 2018; Grathwohl et al., 2018; Behrmann et al., 2019; Chen et al., 2019). A comprehensive survey can be found in Papamakarios et al. (2019). 2.4 OPTIMIZATION UNDER CONSTRAINTS AND DYNAMIC TRIVIALIZATIONS . Constrained matrices can be optimized using Riemannian gradient descent on the manifold (Absil et al., 2009). A reparameterization trick for general Lie groups has been introduced in Falorsi et al. (2019).
For the unitary/orthogonal group there are multiple more specialized approaches, including using the Cayley transform (Helfrich et al., 2018), Householder reflections (Mhammedi et al., 2017; Meng et al., 2020; Tomczak & Welling, 2016), Givens rotations (Shalit & Chechik, 2014; Pevny et al., 2020) or the exponential map (Lezcano-Casado & Martínez-Rubio, 2019; Golinski et al., 2019). Lezcano-Casado (2019) introduced the concept of dynamic trivializations. This method performs training on manifolds by combining ideas from Riemannian gradient descent and trivializations (parameterizations of the manifold via an unconstrained space). Dynamic trivializations were derived in the general settings of Riemannian exponential maps and Lie groups. Convergence results were recently proven in follow-up work (Lezcano-Casado, 2020). P4 training resembles dynamic trivializations in that both perform a number of iteration steps in a fixed basis and infrequently lift the optimization problem to a new basis. In contrast, the rank-one updates do not strictly parameterize $GL(n)$ but instead can access all of $\mathbb{R}^{n \times n}$. This introduces the need for numerical stabilization, but enables efficient computation of the inverse and determinant through equation 1 and equation 2, which is the method's unique and most important aspect. 3 P4 UPDATES: PRESERVING PROPERTIES THROUGH PERTURBATIONS . 3.1 GENERAL CONCEPT . A deep neural network is a parameterized function $M_A : \mathbb{R}^n \to \mathbb{R}^m$ with a high-dimensional parameter tensor $A$. Now, let $S$ define the subset of feasible parameter tensors so that the network satisfies a certain desirable property. In many situations, generating elements of $S$ from scratch is much harder than transforming any $A \in S$ into other elements $A' \in S$, i.e. to move within $S$. The efficiency of perturbative updates can be leveraged as an incremental approach to retain certain desirable properties of the network parameters during training. Rather than optimizing the parameter tensors directly, we instead use a transformation $R_B : S \to S$, which we call a property-preserving parameter perturbation (P4). A P4 transforms a given parameter tensor $A \in S$ into another tensor with the desired property, $A' \in S$. The P4 itself is also parameterized, by a tensor $B$. We demand that the identity $\mathrm{id}_S : A \mapsto A$ be included in the set of these transformations, i.e. there exists a $B_0$ such that $R_{B_0} = \mathrm{id}_S$. During training, the network is evaluated using the perturbed parameters $\tilde{A} = R_B(A)$. The parameter tensor of the perturbation, $B$, is trainable via gradient-based stochastic optimizers, while the actual parameters $A$ are frozen. In regular intervals, every $N$ iterations of the optimizer, the optimized parameters of the P4, $B$, are merged into $A$ as follows:

$$A_{\mathrm{new}} \leftarrow R_B(A), \qquad (3)$$
$$B_{\mathrm{new}} \leftarrow B_0. \qquad (4)$$

This update does not modify the effective (perturbed) parameters of the network $\tilde{A}$, since $\tilde{A}_{\mathrm{new}} = R_{B_{\mathrm{new}}}(A_{\mathrm{new}}) = R_{B_0}(R_B(A)) = R_B(A) = \tilde{A}$. Hence, this procedure enables a steady, iterative transformation of the effective network parameters, and stochastic gradient descent methods can be used for training without major modifications. Furthermore, given a reasonable P4, the iterative update of $A$ can produce increasingly non-trivial transformations, thereby enabling high expressivity of the resulting neural networks. This concept is summarized in Algorithm 1. Further extensions to stabilize the merging step will be exemplified in Section 3.3.
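A minimal PyTorch sketch of this general concept for a rank-one perturbed linear layer may be useful: the frozen weight A is a non-trainable buffer, while only the perturbation parameters u, v (and the bias) receive gradients. The class name and initialization scheme are our assumptions, not the authors' reference code:

```python
import torch
import torch.nn as nn

class P4LinearRankOne(nn.Module):
    """Evaluates x -> x (A + u v^T)^T + b, with A frozen; only u, v, b train.
    With u = 0 the perturbation is the identity, i.e. B = B0."""
    def __init__(self, n):
        super().__init__()
        self.register_buffer("A", torch.eye(n))   # frozen parameters
        self.u = nn.Parameter(torch.zeros(n))     # u = 0  => R_B = id_S
        self.v = nn.Parameter(torch.randn(n))
        self.bias = nn.Parameter(torch.zeros(n))

    def forward(self, x):
        A_tilde = self.A + torch.outer(self.u, self.v)
        return x @ A_tilde.T + self.bias
```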
Algorithm 1: P4 Training
Input: model M, training data, loss function J, number of optimization steps N_steps, merge interval N, perturbation R, optimizer OPT
initialize A ∈ S; initialize B := B_0;
for i := 1 ... N_steps do
    X, Y_0 := i-th batch from training data;
    Ã := R_B(A);               // perturb parameters
    Y := M_Ã(X);               // evaluate perturbed model
    j := J(Y, Y_0);            // evaluate loss function
    gradient := ∂j/∂B;         // backpropagation
    B := OPT(B, gradient);     // optimization step
    if i mod N = 0 then
        A := R_B(A);           // merging step: update frozen parameters
        B := B_0;              // merging step: reset perturbation
    end
end

3.2 P4INV: INVERTIBLE LAYERS VIA RANK-ONE UPDATES

Algorithm 2: P4Inv Merging Step
Input: matrix A, inverse A_inv, determinant d
det_factor := (1 + v^T A_inv u);
new_det := det_factor · d;
if ln|det_factor| and ln|new_det| are sane then
    /* update frozen parameters (equation 3) */
    d := new_det;
    A := R_{u,v}(A);
    A_inv := A_inv − (1 / (1 + v^T A_inv u)) A_inv u v^T A_inv;
    /* reset perturbation (equation 4) */
    u := 0;
    v := N(0, I_n);            // random reinitialization
end

The P4 algorithm can in principle be applied to properties concerning either individual blocks or the whole network. Here we train individual invertible linear layers via rank-one perturbations. Each of these P4Inv layers is an affine transformation $Ax + b$. In this context, the weight matrix $A$ is handled by the P4 update and the bias $b$ is optimized without perturbation. Without loss of generality, we present the method for layers $Ax$. We define $S$ as the set of invertible matrices for which we know the inverse and determinant. Then the rank-one update

$R_{u,v}(A) = A + uv^T$  (5)

with $B = (u, v) \in \mathbb{R}^{2n}$ is a P4 on $S$ due to equations 1 and 2, which also define the inverse pass and determinant computation of the perturbed layer; see Appendix B for details. The perturbation can be reset by setting $u$, $v$, or both to zero. In subsequent parameter studies, a favorable training efficiency was obtained by setting $u$ to zero and reinitializing $v$ from Gaussian noise. (Using a unit standard deviation for the reinitialization ensures that gradient-based updates to $u$ are of the same order of magnitude as updates to a standard linear layer, so that learning rates are transferable.) The inverse matrix $A_{\mathrm{inv}}$ and determinant $d$ are stored in the P4 layer alongside $A$ and updated according to the merging step in Algorithm 2. Merges are skipped whenever the determinant update signals ill conditioning of the inversion. This is further explained in the following subsection.
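To make the merging procedure concrete, the following Python/NumPy sketch implements the P4Inv merging step of Algorithm 2 under stated assumptions: the "sanity" thresholds (eps, cap), the sign/log-determinant bookkeeping, and the function name are illustrative choices, not prescribed by the paper.

import numpy as np

def p4inv_merge(A, A_inv, log_det, sign, u, v, eps=1e-6, cap=1e6):
    """P4Inv merging step (Algorithm 2) with log-determinant bookkeeping."""
    det_factor = 1.0 + v @ A_inv @ u           # matrix determinant lemma, eq. 2
    if abs(det_factor) < eps:                  # inversion ill conditioned: skip merge
        return A, A_inv, log_det, sign, u, v
    new_log_det = log_det + np.log(abs(det_factor))
    if abs(new_log_det) > np.log(cap):         # determinant not "sane": skip merge
        return A, A_inv, log_det, sign, u, v
    # Update frozen parameters (equation 3).
    A_inv = A_inv - np.outer(A_inv @ u, v @ A_inv) / det_factor  # Sherman-Morrison, eq. 1
    A = A + np.outer(u, v)                     # R_{u,v}(A), equation 5
    sign, log_det = sign * np.sign(det_factor), new_log_det
    # Reset perturbation (equation 4): u := 0, v reinitialized from N(0, I_n).
    return A, A_inv, log_det, sign, np.zeros_like(u), np.random.randn(*v.shape)

Tracking log|det(A)| and its sign separately, rather than the raw determinant, avoids overflow for large matrices while still permitting the sign changes that the method allows.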
This paper introduces an algorithm for training neural networks such that the parameters preserve a given property. The optimization is based on a transformation R that perturbs the parameters while preserving the desired property. Instead of directly optimizing the parameters of the network, the optimization is carried out on the parameters B of the auxiliary transformation R.
SP:0cde0537137f3eef6c9c0d6d580a610a07112a39
On Noise Injection in Generative Adversarial Networks
1 INTRODUCTION

Noise injection is usually applied as regularization to cope with overfitting or to facilitate generalization in neural networks (Bishop, 1995; An, 1996). The effectiveness of this simple technique has also been proved in various deep learning tasks, such as learning deep architectures (Hinton et al., 2012; Srivastava et al., 2014; Noh et al., 2017), defending against adversarial attacks (He et al., 2019), facilitating the stability of differentiable architecture search with reinforcement learning (Liu et al., 2019; Chu et al., 2020), and quantizing neural networks (Baskin et al., 2018). In recent years, noise injection¹ has attracted more and more attention in the community of Generative Adversarial Networks (GANs) (Goodfellow et al., 2014a). Extensive research shows that it helps stabilize the training procedure (Arjovsky & Bottou, 2017; Jenni & Favaro, 2019) and generate images of high fidelity (Karras et al., 2019a;b; Brock et al., 2018). In practice, Fig. 1 shows a significant improvement in hair quality due to noise injection. In particular, noise injection in StyleGAN (Karras et al., 2019a;b) has shown an amazing capability of helping generate sharp details in images, shedding new light on obtaining high-quality photo-realistic results using GANs. Therefore, studying the underlying principle of noise injection in GANs is an important theoretical step toward understanding GAN algorithms. In this paper, we propose a theoretical framework to explain and improve the effectiveness of noise injection in GANs. Our framework is motivated from a geometric perspective and is combined with results on the optimal transportation problem in GANs (Lei et al., 2019a;b). Our contributions are listed as follows:

• We show that existing GAN architectures, including Wasserstein GANs (Arjovsky et al., 2017), may suffer from the adversarial dimension trap, which severely penalizes the properties of the generator;
• Based on our theory, we attempt to explain the properties of noise injection as applied in the related literature;
• Based on our theory, we propose a more proper form of noise injection in GANs, which can overcome the adversarial dimension trap. Experiments on the state-of-the-art GAN architecture, StyleGAN2 (Karras et al., 2019b), demonstrate the superiority of our new method compared with the original noise injection used in StyleGAN2.

¹It suffices to note that noise injection here is totally different from the research field of adversarial attacks raised in Goodfellow et al. (2014b).

To the best of our knowledge, this is the first work that theoretically draws the geometric picture of noise injection in GANs.

2 RELATED WORKS

The main drawbacks of GANs are unstable training and mode collapse. Arjovsky & Bottou (2017) theoretically show that injecting noise directly into the image space can help smooth the distribution so as to stabilize the training procedure. The authors of Distribution-Filtering GAN (DFGAN) (Jenni & Favaro, 2019) then put this idea into practice and prove that this technique does not influence the global optimality of the real data distribution. However, as pointed out in (Arjovsky & Bottou, 2017), this method depends on the amount of noise. Our method of noise injection is essentially different from these.
Besides, they do not provide a theoretical vision for explaining the interactions between injected noises and features. BigGAN (Brock et al., 2018) splits input latent vectors into one chunk per layer and projects each chunk to the gains and biases of batch normalization in each layer. They claim that this design allows direct influence on features at different resolutions and levels of hierarchy. StyleGAN (Karras et al., 2019a) and StyleGAN2 (Karras et al., 2019b) adopt a slightly different view, where noise injection is introduced to enhance randomness for multi-scale stochastic variations. Different from the settings in BigGAN, they inject extra noise, independent of the latent inputs, into different layers of the network without projection. Our theoretical analysis is mainly motivated by the success of noise injection used in StyleGAN (Karras et al., 2019a). Our proposed framework reveals that noise injection in StyleGAN is a kind of fuzzy reparameterization in Euclidean spaces, and we extend it to generic manifolds (Section 4.3).

3 THE INTRINSIC DRAWBACKS OF TRADITIONAL GANS

3.1 OPTIMAL TRANSPORTATION AND DISCONTINUOUS GENERATOR

Traditional GANs with the Wasserstein distance are equivalent to the optimal transportation problem, where the optimal generator is the optimal transportation map. However, the optimal transportation map is rarely continuous, unless the support of the Brenier potential is convex (Caffarelli, 1992). Considering that the Brenier potential of a Wasserstein GAN is determined by the real data distribution and the inverse map of the generator, it is highly unlikely that its support is convex. This means that the optimal generator will be discontinuous, which is a fatal limitation to the capacity of GANs. Based on this, Lei et al. (2019a) further point out that traditional GANs will hardly converge, or will converge to one continuous branch of the target mapping, thus leading to mode collapse. They then propose to find the continuous Brenier potential instead of the discontinuous transportation map. In the next paragraph, we show that this solution may not fully overcome the problem that traditional GANs encounter, due to structural limitations of neural networks. Besides, it suffices to note that their analysis is built upon the Wasserstein distance and may not directly apply to the Jensen-Shannon divergence or KL divergence. We refer the readers to Lei et al. (2019a); Caffarelli (1992) for more detailed analysis.

3.2 ADVERSARIAL DIMENSION TRAP

In addition to the above discontinuity problem, another drawback is the relatively low dimension of latent spaces in GANs compared with the high variance of details in real-world data. Taking face images as an example, the hair, freckles, and wrinkles have an extremely high degree of freedom, which makes traditional GANs often fail to capture them. The repetitive application of non-invertible CNN blocks makes the situation even worse. A non-invertible CNN, which acts as a singular linear transformation, drops the intrinsic dimensions of feature manifolds (Strang et al., 1993). So during the feed-forward procedure of the generator, the dimensions of the feature spaces keep being dropped. It is then highly likely that the valid dimension of the input latent space is lower than that of the real data.
The relatively lower dimension of the input latent space will then force the dimension of the support of the distribution of generated images to be lower than that of the real data, as no smooth mapping increases the dimension. However, the discriminator, which measures the distance between these two distributions, will keep encouraging the generator to increase the dimension up to the same as the true data. This contradictory functionality, as we show in the theorem below, incurs a severe penalty on the smoothness and invertibility of the generative model, which we refer to as the adversarial dimension trap.

Theorem 1.² For a deterministic GAN model and generator G : Z → X, if the dimension of the input latent space Z is lower than that of the data manifold X, then at least one of the two following cases must stand:
1. the generator cannot be Lipschitz;
2. the generator fails to capture the data distribution and is unable to perform inversion. Namely, for an arbitrary point x ∈ X, the probability of G⁻¹(x) = ∅ is 1.

The above theorem stands for a wide range of GAN loss functions, including the Wasserstein divergence, the Jensen-Shannon divergence, and other KL-divergence-based losses. Notice that this theorem implies a much worse situation than it states. For any open sphere B in the data manifold X, the generator restricted to the pre-image of B also follows this theorem, which suggests bad properties of nearly every local neighborhood. This also suggests that both consequences of Theorem 1 may hold simultaneously: in some subsets the generator may successfully capture the data distribution, while in others it may fail to do so. The first issue in Section 3.1 can be addressed by not learning the generator directly with continuous neural network components. We will show how our method addresses the second issue.

4 FUZZY REPARAMETERIZATION

The generator G in a traditional GAN is a composite of sequential non-linear feature mappings, which can be denoted as $G(z) = f_k \circ f_{k-1} \circ \cdots \circ f_1(z)$, where $z \sim N(0, 1)$ is the standard Gaussian. Each feature mapping, which is typically a single-layer convolutional neural network (CNN) plus non-linear activations, carries out a certain purpose such as extracting multi-scale patterns, upsampling, or merging multi-head information. The whole network is then a deterministic mapping from the latent space Z to the image space X. We propose to replace $f_i(x)$, $1 \le i \le k$, with

$g_i(x) = \mu_i(x) + \sigma_i(x)\,\epsilon, \quad \epsilon \sim N(0, 1), \quad x \in g_{i-1} \circ \cdots \circ g_1(Z)$.  (1)

We call this Fuzzy Reparameterization (FR), as it in fact learns a fuzzy equivalence relation of the original features and uses reparameterization to model the high-dimensional feature manifolds. We believe that this is the proper generalization of noise injection in StyleGAN, and we will show the reasons and benefits in the following subsections.

²As is common practice in the manifold learning community, our theorems and discussions are based on Riemannian manifolds. Proofs of all the theorems are included in the supplementary material.

It is not hard to see that our proposed method can be viewed as an extension of the reparameterization trick in VAEs (Kingma & Welling, 2013).
While the reparameterization trick in VAEs serves as a differentiable solution for learning through random variables and is only applied in the latent space, our method is a type of deep noise injection in the feature maps of each layer, aimed at correcting the defect in GAN architectures. Therefore, the purposes of using reparameterization in these two scenarios are different, leading to the thoroughly different theories presented in the next subsection.

4.1 HANDLING THE ADVERSARIAL DIMENSION TRAP WITH NOISE INJECTION

As Sard's theorem tells us (Petersen et al., 2006), the key to solving the adversarial dimension trap is to avoid mapping low-dimensional feature spaces into high-dimensional ones, which looks like a pyramid structure in the generator. However, we really need the pyramid structure in practice, because the final output dimension of the generated images is much larger than that of the input space. So the solution could be that, instead of mapping into the full feature spaces, we choose to map only onto the skeleton of the feature spaces and use random noise to fill up the remaining space. For a compact manifold, it is easy to see that the intrinsic dimension of the skeleton set can be arbitrarily low, by applying the Heine–Borel theorem to the skeleton (Rudin et al., 1964). In this way, the model can escape from the adversarial dimension trap.

Now we develop the idea in detail. The whole idea is based on approximating the manifold by a tangent polyhedron. Assume that the feature space M is a Riemannian manifold embedded in $\mathbb{R}^m$. Then for any point $\mu \in M$, the local geometry induces a coordinate transformation from a small neighborhood of $\mu$ in M to its projection onto the tangent space $T_\mu M$ at $\mu$, by the following theorem.

Theorem 2. Given a Riemannian manifold M embedded in $\mathbb{R}^m$, for any point $\mu \in M$, let $T_\mu M$ denote the tangent space at $\mu$. Then the exponential map $\mathrm{Exp}_\mu$ induces a smooth diffeomorphism from a Euclidean ball $B_{T_\mu M}(0, r)$ centered at the origin to a geodesic ball $B_M(\mu, r)$ centered at $\mu$ in M. Thus $\{\mathrm{Exp}_\mu^{-1}, B_M(\mu, r), B_{T_\mu M}(0, r)\}$ forms a local coordinate system of M in $B_M(\mu, r)$, which we call the normal coordinates. Thus we have

$B_M(\mu, r) = \mathrm{Exp}_\mu(B_{T_\mu M}(0, r)) = \{\tau : \tau = \mathrm{Exp}_\mu(v),\; v \in B_{T_\mu M}(0, r)\}$.  (2)

Theorem 3. The differential of $\mathrm{Exp}_\mu$ at the origin of $T_\mu M$ is the identity $I$. Thus $\mathrm{Exp}_\mu$ can be approximated by

$\mathrm{Exp}_\mu(v) = \mu + Iv + o(\|v\|_2)$.  (3)

Thus, if $r$ in equation (2) is small enough, we can approximate $B_M(\mu, r)$ by

$B_M(\mu, r) \approx \mu + I B_{T_\mu M}(0, r) = \{\tau : \tau = \mu + Iv,\; v \in B_{T_\mu M}(0, r)\}$.  (4)

Considering that $T_\mu M$ is an affine subspace of $\mathbb{R}^m$, the coordinates on $B_{T_\mu M}(0, r)$ admit an affine transformation into the coordinates on $\mathbb{R}^m$. Thus equation (4) can be written as

$B_M(\mu, r) \approx \mu + I B_{T_\mu M}(0, r) = \{\tau : \tau = \mu + r T(\mu)\epsilon,\; \epsilon \in B(0, 1)\}$.  (5)

We remind the readers that the linear component matrix $T(\mu)$ differs at different $\mu \in M$ and is decided by the local geometry near $\mu$. In the above formula, $\mu$ defines the center point and $rT(\mu)$ defines the shape of the approximated neighborhood, so we call them a representative pair of $B_M(\mu, r)$. Picking a series of such representative pairs, which we refer to as the skeleton set, we can construct a tangent polyhedron H of M. Thus, instead of trying to learn the feature manifold directly, we adopt a two-stage procedure.
We first learn a map $f : x \mapsto [\mu(x), \sigma(x)]$ (with $\sigma(x) \equiv rT(\mu(x))$) onto the skeleton set; then we use noise injection $g : x \mapsto \mu(x) + \sigma(x)\epsilon$, $\epsilon \sim U(0, 1)$ (uniform distribution), to fill up the flesh of the feature space, as shown in Figure 2. However, real-world data often include fuzzy semantics: even long-range features can share some structural relations in common. It is unwise to model this with non-smooth structures such as a locally bounded sphere and the uniform distribution. Thus we borrow ideas from fuzzy topology (Ling & Bo, 2003; Zhang & Zhang, 2005; Murali, 1989; Recasens, 2010), which is designed to address this issue. It is well known that for any distance metric $d(\cdot, \cdot)$, $e^{-d(\mu, \cdot)}$ admits a fuzzy equivalence relation for points near $\mu$, which is similar to the density of a Gaussian. The fuzzy equivalence relation can be viewed as a suitable smooth alternative to the sphere neighborhood $B_M(\mu, r)$. Thus we replace the uniform distribution with an unclipped Gaussian. Under this setting, the first-stage mapping in fact learns a fuzzy equivalence relation, while the second stage is a reparameterization technique. Notice that the skeleton set can have arbitrarily low dimension by the Heine–Borel theorem, so the first-stage map can be smooth and well conditioned. For the second stage, we can show that it possesses a smoothness property in expectation by the following theorem.

Theorem 4. Given $f : x \mapsto [\mu(x), \sigma(x)]^T$, where $f$ is locally Lipschitz and $\|\sigma\|_\infty = o(1)$, define $g(x) \equiv \mu(x) + \sigma(x)\epsilon$, $\epsilon \sim N(0, 1)$ (standard Gaussian). Then for any bounded set $U$, there exists $L > 0$ such that

$\mathbb{E}[\|g(x) - g(y)\|_2] \le L\|x - y\|_2 + o(1), \quad \forall x, y \in U$.

Namely, the principal component of $g$ is locally Lipschitz in expectation. Specifically, if the domain of $f$ is bounded, then the principal component of $g$ is globally Lipschitz in expectation.
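As an illustration of the two-stage procedure, the following PyTorch sketch implements one FR block $g(x) = \mu(x) + \sigma(x)\epsilon$ with unclipped Gaussian noise. The choice of 1×1-convolutional heads for $\mu$ and $\sigma$, and the class name, are our assumptions; the paper does not fix a specific parameterization at this point.

import torch
import torch.nn as nn

class FuzzyReparam(nn.Module):
    """One FR block: the two heads play the role of the first-stage map
    f: x -> [mu(x), sigma(x)]; Gaussian noise fills up the feature space."""

    def __init__(self, channels):
        super().__init__()
        # Illustrative 1x1-conv heads for mu and sigma (an assumption).
        self.mu = nn.Conv2d(channels, channels, kernel_size=1)
        self.sigma = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        mu, sigma = self.mu(x), self.sigma(x)
        eps = torch.randn_like(mu)   # unclipped Gaussian, eps ~ N(0, I)
        return mu + sigma * eps      # second stage: reparameterization

Note that, unlike the VAE reparameterization trick, the noise is applied in the feature maps of every layer, so a block like this would replace each feature mapping $f_i$ in the generator.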
To summarize, this paper proposes a new noise injection method that is easy to implement and can replace the original noise injection method in StyleGAN2. The approach is supported by detailed theoretical analysis and by notable performance improvements on GAN training and inversion. The results show a considerable improvement on DCGAN and StyleGAN2.
SP:6ba57dba7e320797ca311e5c7d6e55e130384df2
TRIP: Refining Image-to-Image Translation via Rival Preferences
1 INTRODUCTION

Image-to-image (I2I) translation (Isola et al., 2017) aims to translate an input image into a desired one with changes in some specific attributes. The current literature can be classified into two categories: binary translation (Zhu et al., 2017; Kim et al., 2017), e.g., translating an image from "not smiling" to "smiling"; and fine-grained translation (Lample et al., 2017; He et al., 2019; Liu et al., 2018; Saquil et al., 2018), e.g., generating a series of images with smooth changes from "not smiling" to "smiling". In this work, we focus on high-quality fine-grained I2I translation, namely, generating a series of realistic versions of the input image with smooth changes in the specific attributes (see Fig. 1). Note that the desired high-quality images in our context are twofold: first, the generated images look as realistic as the training images; second, the generated images are modified only in terms of the specific attributes.

Relative attributes (RAs), referring to the preference of two images over the strength of the attribute of interest, are widely used in the fine-grained I2I translation task due to their rich semantic information. The previous work Ranking Conditional Generative Adversarial Network (RCGAN) (Saquil et al., 2018) adopts two separate criteria for high-quality fine-grained translation. Specifically, a ranker is adopted to distill the discrepancy from RAs regarding the targeted attribute, which then guides the generator to translate the input image into the desired one. Meanwhile, a discriminator ensures the generated images are as realistic as the training images. However, the generated fine-grained images guided by the ranker lie out of the real data distribution, which conflicts with the goal of the discriminator. Therefore, the generated images cannot maintain smooth changes and suffer from low-quality issues. RelGAN (Wu et al., 2019) applied a unified discriminator for high-quality fine-grained translation. The discriminator guides the generator to learn the distribution of triplets, which consist of pairs of images and their corresponding numerical labels (i.e., relative attributes). Further, RelGAN adopted the fine-grained RAs within the same framework to enable smooth interpolation. However, the joint data distribution matching does not explicitly model the discrepancy from the RAs and fails to capture sufficient semantic information; the generated images fail to change smoothly over the attribute of interest.

In this paper, we propose a new adversarial ranking framework, consisting of a ranker and a generator, for high-quality fine-grained translation. In particular, the ranker explicitly learns to model the discrepancy from the relative attributes, which can guide the generator to produce the desired image from the input image. Meanwhile, a rival preference, consisting of the generated image and the input image, is constructed to evoke adversarial training between the ranker and the generator. Specifically, the ranker cannot differentiate the strength of the attribute of interest between the generated image and the input image, while the generator aims to achieve agreement from the ranker that the generated image holds the desired difference compared to the input. Competition between the ranker and the generator drives both modules to improve themselves until the generations exhibit the desired preferences while possessing high fidelity.
We summarize our contributions as follows:
• We propose Translation via RIval Preference (TRIP), consisting of a ranker and a generator, for high-quality fine-grained translation. The rival preference is constructed to evoke adversarial training between the ranker and the generator, which enhances the ability of the ranker and encourages a better generator.
• Our tailor-designed ranker enforces a continuous change between the generated image and the input image, which promotes better fine-grained control over the attribute of interest.
• Empirical results show that our TRIP achieves state-of-the-art results on the fine-grained image-to-image translation task. Meanwhile, the input image can be manipulated linearly along the strength of the attribute.
• We further extend TRIP to fine-grained I2I translation of multiple attributes. A case study demonstrates the efficacy of our TRIP in terms of disentangling multiple attributes and manipulating them simultaneously.

2 RELATED WORKS

We mainly review the literature related to fine-grained I2I translation, especially smooth facial attribute transfer, organized by the type of generative model used.

AE/VAE-based methods can provide a good latent representation of the input image. Some works (Lample et al., 2017; Liu et al., 2018; Li et al., 2020; Ding et al., 2020) proposed to disentangle the attribute-dependent latent variable from the image representation, resorting to different disentanglement strategies. The fine-grained translation can then be derived by smoothly manipulating the attribute variable of the input image. However, the reconstruction loss, which is used to ensure the image quality, cannot guarantee a high fidelity of the hallucinated images.

Flow-based works (Kondo et al., 2019) incorporate a feature disentanglement mechanism into flow-based generative models. However, the designed multi-scale disentanglement requires heavy computation, and the reported results did not show satisfactory performance on smooth control.

GAN-based methods build on GANs, a widely adopted framework for high-quality image generation. Various methods applied GANs as a base for fine-grained I2I translation through relative attributes; the main differences lie in the strategies for incorporating the preference over the attributes into the image generation process. Saquil et al. (2018) adopted two critics consisting of a ranker, learning from the relative attributes, and a discriminator, ensuring the image quality. The combination of the two critics is then supposed to guide the generator to produce high-quality fine-grained images. However, the ranker induces the generator to generate out-of-data-distribution images, which is opposite to the target of the discriminator, thereby resulting in poor-quality images. Wu et al. (2019) applied a unified discriminator, which learns the joint data distribution of triplets constructed from a pair of images and a discrete numerical label (i.e., a relative attribute). However, such a joint distribution modeling approach only models the discrete discrepancy of the RAs and fails to generalize well to continuous labels. Rather than using RAs, He et al. (2019) directly modeled the attribute with binary classification, which cannot capture detailed attribute information and hence fails to provide smooth control over the attributes. Deng et al. (2020) embedded 3D priors into adversarial learning.
However, it relies on available priors for the attributes, which limits its practicality. Alharbi and Wonka (2020) proposed an unsupervised disentanglement method. It injects structured noise into the GAN to control specific parts of the generated images, which changes global or local features in a disentangled way. However, it is unclear how the global or local features relate to facial attributes; thus, it is difficult to change specific attributes.

Our method is based on GANs. To ensure good control over the target attribute, the critic in the GAN should transfer the signal about the subtle difference over the target attribute to the generator. Previous methods model this as two sequential processes: they capture the subtle difference over the attribute using a classification model or a ranking model, and then count on the learned attribute model to generalize the learned attribute preference to the unseen generated images through interpolation. However, the learned attribute model never meets this expectation, since it has not seen the generated images at all during its training. In our TRIP, we instead introduce the generated images into the training process of the attribute model, i.e., the ranker. Since supervision over the generated images is not accessible, we formulate the ranker as an adversarial ranking process using the constructed rival preference, following the adversarial training of the vanilla GAN. Consequently, our ranker (the attribute model) critiques the generated images during its whole training process, and it can therefore generalize to generated images and ensure sufficient fine-grained control over the target attribute.

3 TRIP FOR FINE-GRAINED IMAGE-TO-IMAGE TRANSLATION

In this section, we propose a new model, named TRanslation via RIval Preferences (TRIP), for high-quality fine-grained image-to-image (I2I) translation, which learns a mapping that translates an input image to a set of realistic output images by smoothly controlling the specific attributes. The whole structure of TRIP is shown in Fig. 2; it consists of a generator and a ranker. The generator takes as input an image along with a continuous latent variable that controls the change of the attribute, and outputs the desired image; the ranker provides information in terms of image quality and the preference over the attribute, which guides the learning of the generator. We implement the generator with a standard encoder-decoder architecture following Wu et al. (2019). In the following, we focus on describing the detailed design of the ranker and the principle behind it.

3.1 RANKER FOR RELATIVE ATTRIBUTES

Relative attributes (RAs) are assumed to be among the most representative and valid descriptions of the relative emphasis of an attribute, owing to their simplicity and easy construction (Parikh and Grauman, 2011; Saquil et al., 2018). For a pair of images (x, y), RAs refer to their preference over the specific attribute: y ≻ x when y shows a greater strength than x on the target attribute, and vice versa. Pairwise learning to rank is a widely adopted technique to model relative attributes (Parikh and Grauman, 2011). Given a pair of images (x, y) and its relative attribute, the pairwise learning-to-rank technique is formulated as a binary classification (Cao et al., 2006), i.e.,
$R(x, y) = \begin{cases} +1 & y \succ x; \\ -1 & y \prec x, \end{cases}$  (1)

where R(x, y) is the ranker's prediction for the pair of images (x, y). [Figure 3: The ranker model.]

Further, the attribute discrepancy between RAs, distilled by the ranker, can then be used to guide the generator to translate the input image into the desired one. However, the ranker is trained on real image pairs, so it only focuses on modeling the preference over the attribute and ignores image quality. To achieve agreement with the ranker, the generator may produce unrealistic images, which conflicts with the goal of the discriminator.

3.2 RIVAL PREFERENCES ENHANCING THE RANKER

According to the above analysis, we consider incorporating the generated image pairs into the modeling of RAs, along with the real image pairs, to reconcile the goals of the ranker and the discriminator. The resulting ranker will not only generalize well to the generated pairs but also avoid providing untrustworthy feedback, by discriminating the unrealistic images. Motivated by the adversarial training of GANs, we introduce an adversarial ranking process between a ranker and a generator to incorporate the generated pairs into the training of the ranker. To be specific:

• Ranker. Inspired by the semi-supervised GAN (Odena, 2016), we assign a pseudo label to the generated pairs. In order to avoid a biased influence on the ranking decision over real image pairs, i.e., positive (+1) or negative (−1), the pseudo label is designed to be zero. Note that the generated pair consists of a synthetic image and its input, in order to connect the ranker prediction to the controlling latent variable.

$R(x, \Delta) = \begin{cases} +1 & \Delta = y \wedge y \succ x; \\ -1 & \Delta = y \wedge y \prec x; \\ 0 & \Delta = \hat{y}, \end{cases}$  (2)

where $\hat{y}$ denotes the output of the generator given the input image x and v, i.e., $\hat{y} = G(x, v)$, and $\Delta$ is a placeholder that can be either a real image y or a generated image $\hat{y}$.

• Generator. The goal of the generator is to achieve consistency between the ranking prediction $R(x, \hat{y})$ and the corresponding latent variable v. When v > 0, the ranker is supposed to believe that the generated image $\hat{y}$ has a larger strength of the specific attribute than the input x, i.e., $R(x, \hat{y}) = +1$, and vice versa.

$R(x, \hat{y}) = \begin{cases} +1 & v > 0; \\ -1 & v < 0. \end{cases}$  (3)

We denominate the opposing goals between the ranker and the generator w.r.t. the generated pairs as rival preferences¹. An intuitive example of the rival preference is given in Fig. 4 for better understanding. The ranker is promoted in terms of the following aspects: (1) The function of the ranker on the real image pairs is not changed. The generated pairs are uniformly sampled with regard to their latent variables; by assigning the label zero, the ranking information implied within these pairs is neutralized, so as to maintain the ranking performance on the real image pairs. (2) The ranker avoids providing biased ranking predictions for unrealistic image pairs. As we constrain the generated pairs to lie on the decision boundary, i.e., $R(x, \hat{y}) = 0$, the ranker is invariant to the features specified by the generated pairs (Chapelle et al., 2008), suppressing the influence of unrealistic features on the ranking decision. (3) The ranker can capture the exclusive difference over the specific attribute through the adversarial process.

¹"Rival" means adversarial. We use it to distinguish our setting from adversarial training in the community.
Since the ranker refuses to give effective feedback on unrealistic image pairs, only realistic image pairs can attract the attention of the ranker. Therefore, the ranker only passes effective information related to the target attribute to the generator. We then introduce a parallel head following the feature layer to ensure image quality, together with a rank head, as shown in Fig. 2. According to the above analysis, the ranker does not conflict with the goal of image quality; we therefore successfully reconcile the two goals of image quality and the extraction of the attribute difference. With a powerful ranker, the generator "wins" the adversarial game by producing realistic pairs consistent with the latent variable.

Remark 1 (Assigning zero to similar real image pairs). It is natural to assign zero to pairs {(x, y) | y = x}, where = denotes that x and y have the same strength of the attribute of interest. Such pairs can improve the ranking prediction (Zhou et al., 2008).
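For concreteness, the following PyTorch sketch shows one possible way to express the rival-preference targets of equations (2) and (3) as training objectives. The regression (MSE) form and all function names are our assumptions; the paper does not commit to a specific loss formulation at this point.

import torch
import torch.nn.functional as F

def ranker_loss(R, x, y_real, pref, y_fake):
    """Ranker side of the rival preference (equation 2).
    pref: tensor of +1/-1 labels for the real pairs (x, y_real);
    generated pairs (x, y_fake) receive the pseudo label 0."""
    real_score = R(x, y_real)               # should match pref (+1 / -1)
    fake_score = R(x, y_fake.detach())      # do not update G through this term
    return (F.mse_loss(real_score, pref.float())
            + F.mse_loss(fake_score, torch.zeros_like(fake_score)))

def generator_loss(R, x, y_fake, v):
    """Generator side (equation 3): the ranker should output sign(v)
    for the generated pair (x, y_fake) with y_fake = G(x, v)."""
    return F.mse_loss(R(x, y_fake), torch.sign(v))

Here R is the rank head applied to an image pair; detaching y_fake in the ranker loss mirrors standard GAN training, where the critic and generator are updated in alternating steps.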
In this paper, the authors propose a supervised approach relying on given relative and quantitative attribute discrepancies. A UNet-like generator is trained adversarially to generate realistic images, while a "ranker" predicts the magnitude of the input parameter used to control the image manipulation. The controlled parameter is defined implicitly using images for which the discrepancy of the attribute of interest is known. This allows fine-grained manipulation of the attribute of interest. The results of the approach are illustrated on face datasets (CelebA-HQ and LFWA).
SP:bdbb12951868ea0864f926192fdbe2e62ecdb0e3
A Transformer-based Framework for Multivariate Time Series Representation Learning
1 INTRODUCTION . Multivariate time series ( MTS ) are an important type of data that is ubiquitous in a wide variety of domains , including science , medicine , finance , engineering and industrial applications . Despite the recent abundance of MTS data in the much touted era of “ Big Data ” , the availability of labeled data in particular is far more limited : extensive data labeling is often prohibitively expensive or impractical , as it may require much time and effort , special infrastructure or domain expertise . For this reason , in all aforementioned domains there is great interest in methods which can offer high accuracy by using only a limited amount of labeled data or by leveraging the existing plethora of unlabeled data . There is a large variety of modeling approaches for univariate and multivariate time series , with deep learning models recently challenging or replacing the state of the art in tasks such as forecasting , regression and classification ( De Brouwer et al. , 2019 ; Tan et al. , 2020a ; Fawaz et al. , 2019b ) . However , unlike in domains such as Computer Vision or Natural Language Processing ( NLP ) , the dominance of deep learning for time series is far from established : in fact , non-deep learning methods such as TS-CHIEF ( Shifaz et al. , 2020 ) , HIVE-COTE ( Lines et al. , 2018 ) , and ROCKET ( Dempster et al. , 2020 ) currently hold the record on time series regression and classification dataset benchmarks ( Tan et al. , 2020a ; Bagnall et al. , 2017 ) , matching or even outperforming sophisticated deep architectures such as InceptionTime ( Fawaz et al. , 2019a ) and ResNet ( Fawaz et al. , 2019b ) . In this work , we investigate , for the first time , the use of a transformer encoder for unsupervised representation learning of multivariate time series , as well as for the tasks of time series regression and classification . Transformers are an important , recently developed class of deep learning models , which were first proposed for the task of natural language translation ( Vaswani et al. , 2017 ) but have since come to monopolize the state-of-the-art performance across virtually all NLP tasks ( Raffel et al. , 2019 ) . A key factor for the widespread success of transformers in NLP is their aptitude for learning how to represent natural language through unsupervised pre-training ( Brown et al. , 2020 ; Raffel et al. , 2019 ; Devlin et al. , 2018 ) . Besides NLP , transformers have also set the state of the art in several domains of sequence generation , such as polyphonic music composition ( Huang et al. , 2018 ) . Transformer models are based on a multi-headed attention mechanism that offers several key advantages and renders them particularly suitable for time series data ( see Appendix section A.4 for details ) . Inspired by the impressive results attained through unsupervised pre-training of transformer models in NLP , as our main contribution , in the present work we develop a generally applicable methodology ( framework ) that can leverage unlabeled data by first training a transformer model to extract dense vector representations of multivariate time series through an input denoising ( autoregressive ) objective . The pre-trained model can be subsequently applied to several downstream tasks , such as regression , classification , imputation , and forecasting . 
Here, we apply our framework to the tasks of multivariate time series regression and classification on several public datasets and demonstrate that transformer models can convincingly outperform all current state-of-the-art modeling approaches, even when only having access to a very limited amount of training data samples (on the order of hundreds of samples), an unprecedented success for deep learning models. Importantly, despite common preconceptions about transformers from the domain of NLP, where top-performing models have billions of parameters and require days to weeks of pre-training on many parallel GPUs or TPUs, we also demonstrate that our models, using at most hundreds of thousands of parameters, can be trained even on CPUs, while training them on GPUs allows them to be trained as fast as even the fastest and most accurate non-deep-learning approaches.

2 RELATED WORK

Regression and classification of time series: Currently, non-deep-learning methods such as TS-CHIEF (Shifaz et al., 2020), HIVE-COTE (Lines et al., 2018), and ROCKET (Dempster et al., 2020) constitute the state of the art for time series regression and classification based on evaluations on public benchmarks (Tan et al., 2020a; Bagnall et al., 2017), followed by CNN-based deep architectures such as InceptionTime (Fawaz et al., 2019a) and ResNet (Fawaz et al., 2019b). ROCKET, which on average is the best-ranking method, is a fast method that involves training a linear classifier on top of features extracted by a flat collection of numerous and various random convolutional kernels. HIVE-COTE and TS-CHIEF (itself inspired by Proximity Forest (Lucas et al., 2019)) are very sophisticated methods which incorporate expert insights on time series data and consist of large, heterogeneous ensembles of classifiers utilizing shapelet transformations, elastic similarity measures, spectral features, and random interval- and dictionary-based techniques; however, these methods are highly complex, involve significant computational cost, cannot benefit from GPU hardware, and scale poorly to datasets with many samples and long time series; moreover, they have been developed for, and have only been evaluated on, univariate time series.

Unsupervised learning for multivariate time series: Recent work on unsupervised learning for multivariate time series has predominantly employed autoencoders, trained with an input reconstruction objective and implemented either as Multi-Layer Perceptron (MLP) or RNN (most commonly, LSTM) networks. As interesting variations of the former, Kopf et al. (2019) and Fortuin et al. (2019) additionally incorporated variational autoencoding into this approach, but focused on clustering and the visualization of shifting sample topology over time. As an example of the latter, Malhotra et al. (2017) presented a multi-layered RNN sequence-to-sequence autoencoder, while Lyu et al. (2018) developed a multi-layered LSTM with an attention mechanism and evaluated both an input reconstruction (autoencoding) and a forecasting loss for unsupervised representation learning of Electronic Healthcare Record multivariate time series. As a novel take on autoencoding, and with the goal of dealing with missing data, Bianchi et al.
( 2019 ) employ a stacked bidirectional RNN encoder and stacked RNN decoder to reconstruct the input , and at the same time use a user-provided kernel matrix as prior information to condition internal representations and encourage learning similarity-preserving representations of the input . They evaluate the method on the tasks of missing value imputation and classification of time series under increasing “ missingness ” of values . A distinct approach is followed by Zhang et al . ( 2019 ) , who use a composite convolutional - LSTM network with attention and a loss which aims at reconstructing correlation matrices between the variables of the multivariate time series input . They use and evaluate their method only for the task of anomaly detection . Finally , Jansen et al . ( 2018 ) rely on a triplet loss and the idea of temporal proximity ( the loss rewards similarity of representations between proximal segments and penalizes similarity between distal segments of the time series ) for unsupervised representation learning of non-speech audio data . This idea is explored further by Franceschi et al . ( 2019 ) , who combine the triplet loss with a deep causal dilated CNN , in order to make the method effective for very long time series . Transformer models for time series : Recently , a full encoder-decoder transformer architecture was employed for univariate time series forecasting : Li et al . ( 2019 ) showed superior performance compared to the classical statistical method ARIMA , the recent matrix factorization method TRMF , an RNN-based autoregressive model ( DeepAR ) and an RNN-based state space model ( DeepState ) on 4 public forecasting datasets , while Wu et al . ( 2020 ) used a transformer to forecast influenza prevalence and similarly showed performance benefits compared to ARIMA , an LSTM and a GRU Seq2Seq model with attention , and Lim et al . ( 2020 ) used a transformer for multi-horizon univariate forecasting , supporting interpretation of temporal dynamics . Finally , Ma et al . ( 2019 ) use an encoder-decoder architecture with a variant of self-attention for imputation of missing values in multivariate , geo-tagged time series and outperform classic as well as the state-of-the-art , RNN-based imputation methods on 3 public and 2 competition datasets for imputation . By contrast , our work aspires to generalize the use of transformers from solutions to specific generative tasks ( which require the full encoder-decoder architecture ) to a framework which allows for unsupervised pre-training and with minor modifications can be readily used for a wide variety of downstream tasks ; this is analogous to the way BERT ( Devlin et al. , 2018 ) converted a translation model into a generic framework based on unsupervised learning , an approach which has become a de facto standard and established the dominance of transformers in NLP . 3 METHODOLOGY . 3.1 BASE MODEL . At the core of our method lies a transformer encoder , as described in the original transformer work by Vaswani et al . ( 2017 ) ; however , we do not use the decoder part of the architecture . A schematic diagram of the generic part of our model , common across all considered tasks , is shown in Figure 1 . We refer the reader to the original work for a detailed description of the transformer model , and here present the proposed changes that make it compatible with multivariate time series data , instead of sequences of discrete word indices . 
In particular, each training sample $X \in \mathbb{R}^{w \times m}$, which is a multivariate time series of length $w$ with $m$ different variables, constitutes a sequence of $w$ feature vectors $x_t \in \mathbb{R}^m$: $X = [x_1, x_2, \ldots, x_w]$. The original feature vectors $x_t$ are first normalized (for each dimension, we subtract the mean and divide by the variance across the training set samples) and then linearly projected onto a $d$-dimensional vector space, where $d$ is the dimension of the transformer model's sequence element representations (typically called the model dimension):

$u_t = W_p x_t + b_p$,  (1)

where $W_p \in \mathbb{R}^{d \times m}$, $b_p \in \mathbb{R}^d$ are learnable parameters and $u_t \in \mathbb{R}^d$, $t = 0, \ldots, w$, are the model input vectors¹. These will become the queries, keys, and values of the self-attention layer, after adding the positional encodings and multiplying by the corresponding matrices. We note that the above formulation also covers the univariate time series case, i.e., $m = 1$, although we only evaluate our approach on multivariate time series in the scope of this work. We additionally note that the input vectors $u_t$ need not necessarily be obtained from the (transformed) feature vectors at a time step $t$: because the computational complexity of the model scales as $O(w^2)$ and the number of parameters² as $O(w)$ with the input sequence length $w$, in case the granularity (temporal resolution) of the data is very fine, one may instead obtain $u_t$ from a 1D-convolutional layer with 1 input and $d$ output channels and kernels $K_i$ of size $(k, m)$, where $k$ is the width in number of time steps and $i$ the output channel:

$u_t^i = u(t, i) = \sum_j \sum_h x(t + j, h) K_i(j, h), \quad i = 1, \ldots, d$.  (2)

¹Although equation 1 shows the operation for a single time step for clarity, all input vectors are embedded concurrently by a single matrix-matrix multiplication.
²Specifically, the learnable positional encoding, batch normalization and output layers.

In this way, one may control the temporal resolution by using a stride or dilation factor greater than 1. Moreover, although in the present work we only used equation 1, one may use equation 2 as an input to compute the keys and queries and equation 1 to compute the values of the self-attention layer. This is particularly useful in the case of univariate time series, where self-attention would otherwise match (consider relevant/compatible) all time steps which share similar values for the independent variable, as noted by Li et al. (2019). Finally, since the transformer is a feed-forward architecture that is insensitive to the ordering of its input, in order to make it aware of the sequential nature of the time series, we add positional encodings $W_{pos} \in \mathbb{R}^{w \times d}$ to the input vectors $U = [u_1, \ldots, u_w] \in \mathbb{R}^{w \times d}$: $U' = U + W_{pos}$. Instead of the deterministic, sinusoidal encodings originally proposed by Vaswani et al. (2017), we use fully learnable positional encodings, as we observed that they perform better for all datasets presented in this work. Based on the performance of our models, we also observe that the positional encodings generally do not significantly interfere with the numerical information of the time series, similar to the case of word embeddings; we hypothesize that this is because they are learned so as to occupy a different, approximately orthogonal subspace to the one in which the projected time series samples reside. This approximate orthogonality condition is much easier to satisfy in high-dimensional spaces.
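As a minimal PyTorch sketch of this input stage (the linear projection of equation 1 plus the fully learnable positional encodings), with the class name and initialization scale as illustrative assumptions:

import torch
import torch.nn as nn

class TSInputEmbedding(nn.Module):
    """Projects m-variate feature vectors to the model dimension d
    (u_t = W_p x_t + b_p) and adds learnable positional encodings W_pos."""

    def __init__(self, m, d, max_len):
        super().__init__()
        self.project = nn.Linear(m, d)               # W_p, b_p of equation 1
        self.w_pos = nn.Parameter(torch.empty(max_len, d))
        nn.init.uniform_(self.w_pos, -0.02, 0.02)    # init scale is an assumption

    def forward(self, x):                            # x: (batch, w, m)
        u = self.project(x)                          # all steps embedded at once
        return u + self.w_pos[: x.size(1)]           # U' = U + W_pos

Because W_pos is an ordinary parameter tensor, it is trained jointly with the rest of the model, in line with the observation above that the learned encodings settle into a subspace approximately orthogonal to the projected samples.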
An important consideration regarding time series data is that individual samples may display considerable variation in length . This issue is effectively dealt with in our framework : after setting a maximum sequence length w for the entire dataset , shorter samples are padded with arbitrary values , and we generate a padding mask which adds a large negative value to the attention scores for the padded positions , before computing the self-attention distribution with the softmax function . This forces the model to completely ignore padded positions , while allowing the parallel processing of samples in large minibatches . Transformers in NLP use layer normalization after computing self-attention and after the feedforward part of each encoder block , leading to significant performance gains over batch normalization , as originally proposed by Vaswani et al . ( 2017 ) . However , here we instead use batch normalization , because it can mitigate the effect of outlier values in time series , an issue that does not arise in NLP word embeddings . Additionally , the inferior performance of batch normalization in NLP has been mainly attributed to extreme variation in sample length ( i.e. , sentences in most tasks ) ( Shen et al. , 2020 ) , while in the datasets we examine this variation is much smaller . In Table 11 of the Appendix we show that batch normalization can indeed offer a very significant performance benefit over layer normalization , while the extent can vary depending on dataset characteristics .
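A minimal sketch of the padding mask described above, assuming PyTorch; the shapes and the convention that True marks padded positions are illustrative:

import torch

def padding_mask(lengths, max_len):
    """Boolean mask of shape (batch, max_len); True marks padded positions."""
    idx = torch.arange(max_len, device=lengths.device).unsqueeze(0)  # (1, w)
    return idx >= lengths.unsqueeze(1)                               # (batch, w)

# Usage with attention scores of shape (batch, heads, w, w):
# scores = scores.masked_fill(mask[:, None, None, :], float('-inf'))
# attn = torch.softmax(scores, dim=-1)   # padded keys receive zero weight

Adding negative infinity to the scores of padded positions before the softmax drives their attention weights to zero, so the model ignores padding while whole minibatches are still processed in parallel.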
This paper aims to develop a transformer-based pre-trained model for multivariate time series representation learning. Specifically, only the transformer's encoder is used, and a time-series imputation task is constructed as the unsupervised learning objective. This is similar to the BERT model in NLP, but the authors add a mask for each variable of the time series. After pretraining with this imputation loss, the transformer can be used for downstream tasks, such as regression and classification. As the authors mention on page 6, this is achieved by further fine-tuning all weights of the pre-trained transformer.
SP:878a518cb77731b8b376d5fd82542670e195f0d6
Connecting Sphere Manifolds Hierarchically for Regularization
1 INTRODUCTION

Applying inductive biases or prior knowledge to inference models is a popular strategy to improve their generalization performance (Battaglia et al., 2018). For example, a hierarchical structure is found based on the similarity or shared characteristics between samples and thus becomes a basic criterion to categorize particular objects. The known hierarchical structures provided by datasets (e.g., ImageNet (Deng et al., 2009), classified based on the WordNet graph; CIFAR100 (Krizhevsky, 2009), organized in ten different groups) can help the network identify the similarity between the given samples. In classification tasks, the final layer of a neural network maps embedding vectors to a discrete target space. However, there is no mechanism forcing similar categories to be distributed close to each other in the embedding. Instead, we may observe classes to be uniformly distributed after training, as this simplifies the separation by the last fully-connected layer. This behavior is a consequence of seeing the label structure as 'flat,' i.e., of omitting the hierarchical relationships between classes (Bilal et al., 2017). To alleviate this problem, in this study, we force similar classes to be closer in the embedding by forcing their hyperplanes to follow a given hierarchy. One way to realize this is by making children nodes dependent on parent nodes and constraining their distance through a regularization term. However, the norm itself does not give relevant information on the closeness between classifiers. Indeed, two classifiers are close if they classify two similar points into the same class. This means similar classifiers have to point in a similar direction. Therefore, we have to focus on the angle between classifiers, which can be achieved through spherical constraints.

Contributions. In this paper, we propose a simple strategy to incorporate hierarchical information in deep neural network architectures with minimal changes to the training procedure, by modifying only the last layer. Given a hierarchical structure in the labels in the form of a tree, we explicitly force the classifiers of classes to belong to a sphere whose center is the classifier of their super-class, recursively until we reach the root (see Figure 2). We introduce the spherical fully-connected layer and the hierarchically connected layer, whose combination implements our technique. Finally, we investigate the impact of Riemannian optimization instead of simple norm normalization. By its nature, the proposed technique is quite versatile, because the modifications only affect the structure of the last fully-connected layer of the neural network. Thus, it can be combined with many other strategies (like spherical CNNs from Xie et al. (2017), or other deep neural network architectures).

Related works. Hierarchical structures are well studied, and their properties can be effectively learned using manifold embeddings. Designing the optimal embedding to learn a latent hierarchy is a complex task and was extensively studied in the past decade. For example, Word2Vec (Mikolov et al., 2013b;a) and Poincaré embeddings (Nickel & Kiela, 2017) showed remarkable performance in hierarchical representation learning. Du et al. (2018) forced the representation of sub-classes to "orbit" around the representation of their super-class to find a similarity-based embedding.
Recently, using elliptical manifold embedding (Batmanghelich et al., 2016), hyperbolic manifolds (Nickel & Kiela, 2017; De Sa et al., 2018; Tifrea et al., 2018), and a combination of the two (Gu et al., 2019; Bachmann et al., 2019), it was shown that the latent structure of many kinds of data is non-Euclidean (Zhu et al., 2016; Bronstein et al., 2017; Skopek et al., 2019). Xie et al. (2017) showed that spheres (with angular constraints) in the hidden layers also induce diversity, thus reducing over-fitting in latent space models. Mixing hierarchical information and structured prediction is not new, especially in text analysis (Koller & Sahami, 1997; McCallum et al., 1998; Weigend et al., 1999; Wang et al., 1999; Dumais & Chen, 2000). The partial order structure of the visual-semantic hierarchy is exploited using simple order pairs with a max-margin loss function in (Vendrov et al., 2016). The results of previous studies indicate that exploiting hierarchical information during training gives better and more resilient classifiers, in particular when the number of classes is large (Cai & Hofmann, 2004). For a given hierarchy, it is possible to design structured models incorporating this information to improve the efficiency of the classifier. For instance, for support vector machines (SVMs), the techniques reported in (Cai & Hofmann, 2004; 2007; Gopal et al., 2012; Sela et al., 2011) use hierarchical regularization, forcing the classifier of a super-class to be close to the classifiers of its sub-classes. However, the intuition is very different in this case, because SVMs do not learn the embedding. In this study, we consider the hierarchy of the class labels to be known. Moreover, we do not change the prior layers of the deep neural network, and only work on the last layer, which directly builds the hyperplanes for classification purposes. Our work is thus orthogonal to those works on embedding learning, but not incompatible.

Comparison with hyperbolic/Poincaré/graph networks. Hyperbolic networks are a recent technique that shows impressive results for hierarchical representation learning. Poincaré networks (Nickel & Kiela, 2017) were originally designed to learn the latent hierarchy of data using low-dimensional embeddings. To alleviate their drawbacks due to a transductive property, which prevents their use for unseen graph inference, hyperbolic neural networks equipped with set aggregation operations have been proposed (Chami et al., 2019; Liu et al., 2019). These methods have mostly focused on learning embeddings with a hyperbolic activation function for hierarchical representation. Our technique is orthogonal to these works: first, we assume that the hierarchical structure is not learned but already known; second, our model focuses on generating individual hyperplanes for the embedding vectors given by the network architecture. Spherical geometry has positive curvature, whereas hyperbolic space has constant negative curvature. However, our technique and hyperbolic networks are not mutually exclusive: while we focus on spheres embedded in $\mathbb{R}^d$ in this study, it is straightforward to consider spheres embedded in hyperbolic spaces.

2 HIERARCHICAL REGULARIZATION

2.1 DEFINITION AND NOTATIONS

We assume we have samples with hierarchically ordered classes. For instance, apple, banana, and orange are classes that may belong to the super-class "fruits."
We represent such hierarchical relationships with trees , as depicted in Figure 1 . We identify nodes in the graph through the path taken in the tree . To represent the leaf ( highlighted in blue in Figure 1 ) , we use the notation $n_{\{1,3,2\}}$ . This means it is the second child of the super-class $n_{\{1,3\}}$ , and so on recursively , until we reach the root . More formally , we identify nodes as $n_p$ , where p is the path to the node . A path uniquely defines a node , since only one path leads to each node . Using the concatenation between a path p and a child index i , a new path p̃ can be defined as follows ,

$$\tilde{p} = \langle p , i \rangle . \quad (1)$$

We denote by P the set of all paths in the tree starting from the root , with cardinality |P| . Notice that |P| is also the number of nodes in the tree ( i.e. , the number of classes and super-classes ) . We distinguish the set P from the set L , the set of paths associated with nodes whose label appears in the dataset . Although L may equal P , this is not the case in our experiments . We show an example in Appendix A . 2.2 SIMILARITY BETWEEN OBJECTS AND THEIR REPRESENTATION . Let X be the network input ( e.g. , an image ) , and $\phi_\theta(X)$ be its representation , i.e. , the features of X extracted by a deep neural network parameterized by θ . We start with the following observation : given a representation , super-class separators should be similar to the separators of their sub-classes . This assumption implies the following direct consequence : all objects whose labels belong to the same super-class have a similar representation . That is a natural property that we may expect from a good representation . For instance , two dogs of different breeds should share more common features than a dog shares with an apple . Therefore , the parameters of the classifiers that identify dog breeds should also be similar . Their difference lies in the parameters associated with the specific features that differentiate breeds of dogs . Although this is not necessarily satisfied for arbitrary hierarchical classification , we observe it in many existing datasets . For instance , Caltech-UCSD Birds 200 and Stanford Dogs are datasets that classify , respectively , birds and dogs in terms of their breeds . A possible example where this assumption may not be satisfied is a dataset whose super-classes are “ labels whose first letter is « · » . ” 2.3 HIERARCHICAL REGULARIZATION . Starting from the simple observation in the previous section , we propose a regularization technique that forces the network to have similar representations for classes along a path p , which implies having similar representations for similar objects . More formally , if we have an optimal classifier $w_p$ for the super-class p and a classifier $w_{\langle p , i \rangle}$ for the class $\langle p , i \rangle$ , we expect that

$$\| w_p - w_{\langle p , i \rangle} \| \text{ is small .} \quad (2)$$

If this is satisfied , separators for objects in the same super-class are also similar , because

$$\| w_{\langle p , i \rangle} - w_{\langle p , j \rangle} \| = \| ( w_{\langle p , i \rangle} - w_p ) - ( w_{\langle p , j \rangle} - w_p ) \| \le \underbrace{\| w_p - w_{\langle p , i \rangle} \|}_{\text{small}} + \underbrace{\| w_p - w_{\langle p , j \rangle} \|}_{\text{small}} . \quad (3)$$

However , the optimal classifier for an arbitrary representation $\phi_\theta(X)$ may not satisfy equation 2 . The naive and direct way to ensure equation 2 is through hierarchical regularization , which forces classifiers in the same path to be close to each other . 2.4 HIERARCHICAL LAYER AND HIERARCHICALLY CONNECTED LAYER . In the previous section , we described the hierarchical regularization technique given a hierarchical structure in the classes .
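Before moving to the matrix parametrization of the next section , the naive penalty of equation 2 can be written down directly ; the following sketch is hypothetical code ( the node representation and all names are invented for the example , not the authors' implementation ) :

```python
import torch

def hierarchical_penalty(classifiers, parent):
    # Naive hierarchical regularization (equation 2): penalize the distance between
    # each separator w_<p,i> and the separator w_p of its super-class.
    # `classifiers` maps node -> weight vector; `parent` maps node -> its parent (None for root).
    penalty = 0.0
    for node, w in classifiers.items():
        if parent[node] is not None:  # the root has no parent
            penalty = penalty + (w - classifiers[parent[node]]).pow(2).sum()
    return penalty
```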
In this section , we show how to conveniently parametrize equation 2 . We first express the classifier as a sum of vectors δ defined recursively as follows :

$$w_{\langle p , i \rangle} = w_p + \delta_{\langle p , i \rangle} , \quad \delta_{\{\}} = 0 , \quad (4)$$

where $\{\}$ is the root . It is possible to consider $\delta_{\{\}} \neq 0$ , which shifts the separating hyper-planes ; we do not consider this case in this paper . Given equation 4 , equation 2 states that $\| \delta_{\langle p , i \rangle} \|$ is small . Finally , it suffices to penalize the norm of $\delta_{\langle p , i \rangle}$ during the optimization . Notice that , by construction , the number of δ ’ s is equal to the number of nodes in the hierarchical tree . Next , consider the output of a CNN for classification ,

$$\phi_\theta ( \cdot )^\top W , \quad (5)$$

where θ denotes the parameters of the hidden layers , $W = [ w_1 , \ldots , w_{|L|} ]$ denotes the last fully-connected layer , and $w_i$ denotes the separator for the class i . For simplicity , we omit potential additional nonlinear functions , such as a softmax , on top of the prediction . We have parametrized $w_i$ following the recursive formula in equation 4 . To define the matrix formulation of equation 4 , we first introduce the hierarchical layer H , which plays an important role . This hierarchical layer can be identified with the adjacency matrix of the hierarchical graph . Definition 1 ( Hierarchical layer ) . Consider orderings over the sets P and L , i.e. , for $i = 1 , \ldots , |P|$ and $j = 1 , \ldots , |L|$ , $P = \{ p_1 , \ldots , p_i , \ldots , p_{|P|} \}$ and $L = \{ p_1 , \ldots , p_j , \ldots , p_{|L|} \}$ . In other words , we associate an index to every node . Then , the hierarchical layer H is defined as

$$H \in \{0,1\}^{|P| \times |L|} , \quad H_{i , j} = 1 \text{ if } n_{p_i} \preceq n_{p_j} , \; 0 \text{ otherwise ,} \quad (6)$$

where $n_{p_i} \preceq n_{p_j}$ means $n_{p_i}$ is an ancestor of ( or equal to ) $n_{p_j}$ , i.e. , $p_i$ is a prefix of the path $p_j$ . We illustrate an example of H in Appendix A . The next proposition shows that equation 5 can be written using a simple matrix-matrix multiplication involving the hierarchical layer . Proposition 1 . Consider a representation $\phi_\theta ( \cdot ) \in \mathbb{R}^d$ . Let W be the matrix of separators

$$W = [ w_{p_1} , \ldots , w_{p_{|L|}} ] , \quad p_i \in L , \quad (7)$$

where the separators are parametrized as in equation 4 . Let ∆ be defined as

$$\Delta \in \mathbb{R}^{d \times |P|} , \quad \Delta = [ \delta_{p_1} , \ldots , \delta_{p_{|P|}} ] , \quad (8)$$

where P and L are defined in Section 2.1 . Consider the hierarchical layer defined in Definition 1 . Then , the matrix of separators W can be expressed as

$$W = \Delta H . \quad (9)$$

We can see $W = \Delta H$ as an augmented fully-connected layer combined with the hierarchical layer that selects the right columns of ∆ , hence the term hierarchically connected layer . The $\ell_2$ regularization of the δ ’ s can be conducted through weight decay , which is widely used in the training of neural networks . The hierarchical layer H is fixed , while ∆ is learnable . This does not significantly affect the complexity of back-propagation , as ∆H is a simple linear form . The size of the last layer increases slightly , from $|L| \times d$ to $|P| \times d$ , where d is the dimension of the representation $\phi_\theta ( \cdot )$ . For instance , in the case of Tiny-ImageNet , the number of parameters of the last layer only increases by roughly 36 % ; nevertheless , the increased number of parameters of the last layer is usually still negligible in comparison with the total number of parameters of classical network architectures .
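To make the construction concrete , the following sketch ( a hypothetical toy example ; the tree , dimensions , and names are invented for illustration ) builds the hierarchical layer H of Definition 1 for a small tree and recovers the separators through equation 9 :

```python
import numpy as np

# Toy hierarchy: nodes P = [animal, fruit, dog, cat, apple]; leaves L = [dog, cat, apple].
parents = {0: None, 1: None, 2: 0, 3: 0, 4: 1}   # parent index of each node (None at the top)
leaves = [2, 3, 4]
d = 8                                            # dimension of the representation phi_theta(x)

# Hierarchical layer H (equation 6): H[i, j] = 1 iff node i lies on the path to leaf j.
H = np.zeros((len(parents), len(leaves)))
for j, leaf in enumerate(leaves):
    node = leaf
    while node is not None:
        H[node, j] = 1.0
        node = parents[node]

Delta = 0.01 * np.random.randn(d, len(parents))  # learnable offsets delta_p, one column per node
W = Delta @ H   # equation 9: each separator is the sum of the deltas on its path
```

Because H is fixed and binary , ∆H is an ordinary linear map , so standard weight decay on ∆ realizes the $\ell_2$ penalty on the δ ’ s .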
In this paper, the authors proposed a novel reparameterization framework for the last network layer that takes the semantic hierarchy into account. Specifically, the authors assume a predefined hierarchy graph, and model the classifier of child classes as the parent classifier plus offsets $\delta$, recursively. The authors show that such a hierarchy can be parameterized as a matrix multiplication $\Delta \mathbf{H}$ where $\mathbf{H}$ is predefined by the graph. In addition, the authors further propose to fix the norm of $\delta$ in a decaying manner with respect to path length. The resulting spherical objective is optimized via Riemannian gradient descent.
SP:2fe9ca0b44e57587b94159cb8fa201f79c13db50
On the Role of Pre-training for Meta Few-Shot Learning
1 INTRODUCTION . In recent years , deep learning methods have outperformed most of the traditional methods in supervised learning , especially in image classification . However , deep learning methods generally require lots of labeled data to achieve decent performance . Some applications , however , do not have the luxury of obtaining large amounts of labeled data . For instance , for bird classification , an ornithologist typically can only obtain a few pictures per bird species to update the classifier . Such a need to build classifiers from limited labeled data inspires several research problems , including the few-shot learning problem ( Finn et al. , 2017 ; Snell et al. , 2017 ; Rajeswaran et al. , 2019 ; Oreshkin et al. , 2018 ; Vinyals et al. , 2016 ; Lee et al. , 2019 ) . In particular , few-shot learning starts with a training dataset that consists of data points for “ seen ” classes , and is required to accurately classify “ unseen ” classes in the testing phase based on a limited number of labeled data points from those unseen classes . Currently , there are two main frameworks , meta-learning ( Finn et al. , 2017 ; Snell et al. , 2017 ; Chen et al. , 2019 ) and transfer learning ( Dhillon et al. , 2020 ) , that deal with the few-shot learning problem . For transfer learning , the main idea is to train a traditional classifier on the meta-train dataset . In the testing phase , these methods finetune the model on the limited labeled datapoints of the novel classes . For meta-learning frameworks , the main concept is episodic training ( Vinyals et al. , 2016 ) . In the testing phase of few-shot learning , the learning method is given N novel classes , each containing K labeled data points for fine-tuning and Q query data points for evaluation . Unlike transfer learning algorithms , episodic training tries to simulate the testing scenario during the training phase by sampling episodes from the training dataset . In the past two years , some transfer-learning methods ( Dhillon et al. , 2020 ) with a sophisticated finetuning stage have achieved performance competitive with the meta-learning approaches . Moreover , researchers ( Lee et al. , 2019 ; Sun et al. , 2019 ; Chen et al. , 2019 ; Oreshkin et al. , 2018 ) have pointed out that combining both the global classifier ( the pre-training part ) from the transfer learning framework and the episodic training concept from the meta-learning framework could lead to better performance . Yet , currently most of the attention is on the episodic training part ( Vinyals et al. , 2016 ; Finn et al. , 2017 ; Snell et al. , 2017 ; Oreshkin et al. , 2018 ; Sun et al. , 2019 ; Lee et al. , 2019 ) and the role of pre-training remains vague . Meta-learning and pre-training have both improved considerably in the past few years . However , most works focus on accuracy rather than efficiency . For meta-learning , an intuitive way to make progress more efficient is to reduce the number of episodes . Currently , there is only limited research ( Sun et al. , 2019 ) on reducing the number of episodes . One approach ( Chen et al. , 2019 ; Lee et al. , 2019 ) is to use a better weight initialization , namely the one from pre-training , instead of random initialization . Another method ( Sun et al. , 2019 ) is to mimic how people learn . For example , when we are learning dynamic programming , given a knapsack problem with a simple constraint and one with a strong constraint , we learn much more by solving the problem with the strong constraint .
Sun et al . ( 2019 ) followed the latter idea and crafted hard episodes to decrease the number of necessary episodes . In this work , we study the role of pre-training in meta few-shot learning . We study pre-training through the disentanglement of the representations . Disentanglement is the property of whether the datapoints from different classes are mixed together . Frosst et al . ( 2019 ) pointed out that , except for the last layer of the model , the representations after all other layers are entangled . The last layer acts as the classifier , and the rest capture some globally shared information . By analyzing the disentanglement property of episodic training , we find that although pre-training gives a better representation that benefits episodic training , the representation becomes more disentangled after episodic training . That is to say , episodic training spends some effort on making the representation more disentangled . Building on this understanding , we design a pre-training method that yields a more disentangled representation and helps episodic training achieve competitive performance more efficiently . With our pre-training loss , the classical meta-learning algorithm ProtoNet ( Snell et al. , 2017 ) achieves performance competitive with other methods . Our study not only benefits episodic training but also points out another direction for sharpening and speeding up episodic training . To sum up , there are three main contributions in this work : 1 . A brief study of the role of pre-training in episodic training . 2 . A simple regularization loss that sharpens classical meta-learning algorithms . 3 . A new perspective on reducing the number of necessary episodic training episodes . 2 RELATED WORK . Few-shot learning tries to mimic the human ability to generalize to novel classes with limited datapoints . In the following , we briefly introduce the recent progress of the transfer-learning framework and two categories of the meta-learning framework . Afterward , we briefly introduce the less-studied episode efficiency problem . 2.1 TRANSFER-LEARNING FRAMEWORK . In the training phase , the transfer-learning framework trains a classifier on the general classification task across all base classes instead of utilizing episodic training . In the testing phase , transfer-learning methods finetune the model with the limited labeled data . There are several kinds of tricks . Qi et al . ( 2018 ) proposed a method that appends the mean of the embeddings of a given class as the final-layer weight of the classifier . Qiao et al . ( 2018 ) used the parameters of the last activation output to predict the classifier for novel classes dynamically . Gidaris & Komodakis ( 2018 ) proposed a concept similar to Qiao et al . ( 2018 ) . They also embedded the weights of base classes during novel class prediction . Moreover , they introduced an attention mechanism instead of directly averaging the parameters of each shot . Besides embedding base-class weights into the final classifier , Dhillon et al . ( 2020 ) utilized label propagation based on the uncertainty of individual predictions to prevent overfitting in the finetuning stage , which is quite similar to classical classification tasks . 2.2 META-LEARNING FRAMEWORK . For meta-learning frameworks , the main concepts are learning to learn and episodic training ( Vinyals et al. , 2016 ) . Learning to learn refers to learning from many tasks to benefit learning on a new task .
To prevent confusion , the original train and test phases are regarded as “ meta-train ” and “ meta-test ” . The terms “ train ” and “ test ” refer to those within each small task . Episodic training is the process of mimicking the task structure of the meta-test during training . If the meta-test phase consists of K support examples and Q query examples from N classes , then we sample many tasks that also have K support examples and Q query examples from N classes in the meta-train phase . Meta-learning algorithms have developed rapidly in recent years . We briefly divide them into two categories : optimization-based methods and metric-based methods . Optimization-based Method . Optimization-based methods try to obtain an embedding that can easily fit subtasks by adding some extra layers . Finn et al . ( 2017 ) proposed MAML ( Model-Agnostic Meta-Learning ) , which finds a network initialization that is closest to the best models of all the low-way low-shot tasks . However , MAML can be quite inefficient due to the computation of the Hessian matrix . To alleviate the issue , iMAML ( Rajeswaran et al. , 2019 ) and foMAML ( Finn et al. , 2017 ) provide different approximations to avoid the heavy computation . However , MAML still suffers from a high-dimensional overfitting issue . The LEO model ( Rusu et al. , 2019 ) solves the overfitting issue by learning a low-dimensional latent space . Instead of aiming for an embedding that benefits a subsequent fully-connected layer , MetaOptNet ( Lee et al. , 2019 ) aims for an embedding that benefits a subsequent differentiable support vector machine . Metric-based Method . Instead of learning an embedding that benefits a subsequent additional layer , metric-based methods aim for an embedding that can easily classify the classes through simple metrics . Matching Networks ( Vinyals et al. , 2016 ) use a cosine similarity metric with a Full Context Encoding module . Prototypical Networks ( Snell et al. , 2017 ) replace the cosine similarity metric with the squared Euclidean distance and compute the mean of the embeddings in the support set as the prototype . Relation Network ( Sung et al. , 2018 ) embeds a relation module in the learned metric . Instead of using the same metric for every task , TADAM ( Oreshkin et al. , 2018 ) designs a task-dependent metric that dynamically fits new class combinations . 2.3 MIXED FRAMEWORK . Some recent works have found that using a globally pre-trained classifier as the initialization weights could lead to better meta-learning results . Sun et al . ( 2019 ) used the pre-trained classifier weights as initialization and applied a simple gradient-based method that restricts the learning process to shift and scale operations . Meta-Baseline ( Chen et al. , 2020 ) also follows the initialization literature and applies a cosine similarity metric for the subsequent learning process . Chen et al . ( 2019 ) changed the original pre-trained network structure into a weight-imprinting style and used a simple gradient-based method for the episodic training part . Triantafillou et al . ( 2020 ) also utilized the pre-trained initialization and derived a combination of MAML ( Finn et al. , 2017 ) and ProtoNet ( Snell et al. , 2017 ) . 2.4 EPISODE REDUCTION METHODS . Recent research has found that a pre-trained classifier leads to better meta-learning results . Conversely , we can reduce the number of episodes by using a pre-trained classifier .
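To make the metric-based idea concrete , here is a minimal sketch of the Prototypical Network classification head ( hypothetical code ; the tensor shapes and function name are our own , not from the cited papers ) :

```python
import torch

def prototypical_logits(support, support_labels, query, n_way):
    # support: [N*K, d] embeddings of the support set; query: [Q, d] embeddings.
    prototypes = torch.stack([
        support[support_labels == c].mean(dim=0)   # class prototype = mean support embedding
        for c in range(n_way)
    ])                                             # [N, d]
    dists = torch.cdist(query, prototypes) ** 2    # [Q, N] squared Euclidean distances
    return -dists                                  # a softmax over -dists gives class probabilities
```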
Besides utilizing the pre-training weights to reduce the number of episodes , Meta Transfer Learning ( Sun et al. , 2019 ) proposes the concept of hard episodes . For each normal episode , MTL adds the class with the worst performance to a pool . After collecting for a while , MTL creates hard episodes by sampling from the pool . Instead of crafting hard episodes , our approach tries to make better use of the pre-training phase . We propose a simple regularization that reduces the difference between the embedding of the pre-trained classifier and that of the episodically trained one . It significantly reduces the number of episodes and achieves similar ( or even better ) performance than the original algorithms . Moreover , for both shallow and deep backbones , it increases the final accuracy . 3 METHODOLOGY . Whether in the route of transferring the classifier or the route of episodic training , pre-training plays a crucial role . In meta few-shot learning , pre-training provides initialization weights for further episodic training . In recent episodic methods , the pre-trained model is split into two parts , the backbone and the classifier . The last linear layer serves as the classifier and maps the embedding to logits . The remaining layers work as the backbone and transform the raw image into an embedding . After pre-training , the classifier part is directly dropped , since the target classes may be unseen or their order may have changed . Though the split is quite naive , the subsequent episodic learning converges faster and better from this initialization . Thus , previous works conclude that pre-training provides a better representation . However , what makes it better is not clear . What is the role of pre-training in meta few-shot learning ? More specifically , what is the character of the backbone in pre-training and in episodic training ? In this section , we choose the prototypical network ( Snell et al. , 2017 ) as the representative . Following the analysis of the general learning literature by Frosst et al . ( 2019 ) , we utilize a similar loss to measure the disentanglement and entanglement properties of the backbone in pre-training and episodic training . Building on our observations , we attempt to transfer part of the computational burden of episodic training into pre-training by adding a sophisticated loss .
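As a sketch of the kind of entanglement measure involved , the following is a minimal implementation of the soft nearest neighbor loss of Frosst et al . ( 2019 ) , which we assume is the “ similar loss ” referred to above ( the temperature T and shapes are illustrative choices ) ; higher values indicate more entangled representations :

```python
import torch

def soft_nearest_neighbor_loss(x, y, T=1.0):
    # x: [B, d] batch of representations; y: [B] integer class labels.
    dists = torch.cdist(x, x) ** 2                             # pairwise squared distances
    sims = torch.exp(-dists / T) * (1 - torch.eye(len(x)))     # exclude self-similarity
    same = (y.unsqueeze(0) == y.unsqueeze(1)).float()
    num = (sims * same).sum(dim=1)   # similarity to same-class points
    den = sims.sum(dim=1)            # similarity to all other points
    return -torch.log(num / den + 1e-8).mean()
```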
This paper investigates the role of pre-training as an initialization for meta-learning for few-shot classification. In particular, they look at the extent to which the pre-trained representations are disentangled with respect to the class labels. They hypothesize that this disentanglement property of those representations is responsible for their utility as the starting point for meta-learning. Motivated by this, they design a regularizer to be used during the pre-training phase to encourage this disentanglement to be even more prominent with the hope that this pre-trained solution is now closer to the optimal one, thus requiring less additional episodic training which is time-consuming. They show experimentally that their modified pre-training phase sometimes leads to better results as an initialization for Prototypical Networks compared to the standard pre-trained solution, and sometimes converges faster.
SP:cb6afa05735201fecf8106b77c2d0a883d5cd996
RG-Flow: A hierarchical and explainable flow model based on renormalization group and sparse prior
1 INTRODUCTION . One of the most important unsupervised learning tasks is to learn the data distribution and build generative models . Over the past few years , various types of generative models have been proposed . Flow-based generative models are a particular family of generative models with tractable distributions ( Dinh et al. , 2017 ; Kingma & Dhariwal , 2018 ; Chen et al. , 2018b ; 2019 ; Behrmann et al. , 2019 ; Hoogeboom et al. , 2019 ; Brehmer & Cranmer , 2020 ; Rezende et al. , 2020 ; Karami et al. , 2019 ) . Yet their latent variables are on an equal footing and mixed globally . Here , we propose a new flow-based model , RG-Flow , which is inspired by the idea of the renormalization group in statistical physics . RG-Flow imposes locality and hierarchical structure in its bijective transformations . It allows us to access information at different scales in the original images through latent variables at different locations , which offers better explainability . Combined with sparse priors ( Olshausen & Field , 1996 ; 1997 ; Hyvärinen & Oja , 2000 ) , we show that RG-Flow achieves hierarchical disentangled representations . The renormalization group ( RG ) is a powerful tool to analyze statistical mechanics models and quantum field theories in physics ( Kadanoff , 1966 ; Wilson , 1971 ) . It progressively extracts more coarse-scale statistical features of the physical system and decimates irrelevant fine-grained statistics at each scale . Typically , the local transformations used in RG are designed by human physicists , and they are not bijective . On the other hand , flow-based models use cascaded invertible global transformations to progressively turn a complicated data distribution into a Gaussian distribution . Here , we would like to combine the key ideas from RG and flow-based models . The proposed RG-Flow enables the machine to learn the optimal RG transformation from data by constructing local invertible transformations , and builds a hierarchical generative model for the data distribution . Latent representations are introduced at different scales , which capture the statistical features at the corresponding scales . Together , the latent representations of all scales can be jointly inverted to generate the data . This method was recently proposed in the physics community as NeuralRG ( Li & Wang , 2018 ; Hu et al. , 2020 ) . Our main contributions are two-fold : First , RG-Flow can naturally separate the signal statistics of different scales in the input distribution , and represent the information at each scale in its latent variables z . Those hierarchical latent variables live on a hyperbolic tree . Taking the CelebA dataset ( Liu et al. , 2015 ) as an example , the network will not only find high-level representations , such as the gender factor and the emotion factor for human faces , but also mid-level and low-level representations . To visualize representations of different scales , we adopt the concept of the receptive field from convolutional neural networks ( CNNs ) ( LeCun , 1988 ; LeCun et al. , 1989 ) and visualize the hidden structures in RG-Flow . In addition , since the statistics are separated in a hierarchical fashion , we show that the representations can be mixed at different scales . This achieves an effect similar to style mixing . Second , we introduce a sparse prior distribution for the latent variables . We find the sparse prior distribution is helpful to further disentangle the representations and make them more explainable . The widely adopted Gaussian prior is rotationally symmetric .
As a result , each of the latent variables in a flow model usually does not have a clear semantic meaning . By using a sparse prior , we demonstrate clear semantic meaning in the latent space . 2 RELATED WORK . Some flow-based generative models also possess a multi-scale latent space ( Dinh et al. , 2017 ; Kingma & Dhariwal , 2018 ) , and recently hierarchies of features have been utilized in Schirrmeister et al . ( 2020 ) , where the top-level feature is shown to perform strongly in the out-of-distribution ( OOD ) detection task . Yet , previous models do not impose a hard locality constraint in the multi-scale structure . In Appendix C , the differences between globally connected multi-scale flows and RG-Flow are discussed , and we see that semantically meaningful receptive fields do not show up in the globally connected cases . Recently , other more expressive bijective maps have been developed ( Hoogeboom et al. , 2019 ; Karami et al. , 2019 ; Durkan et al. , 2019 ) , and those methods can be incorporated into the proposed structure to further improve the expressiveness of RG-Flow . Some other classes of generative models rely on a separate inference model to obtain the latent representation . Examples include variational autoencoders ( Kingma & Welling , 2014 ) , adversarial autoencoders ( Makhzani et al. , 2015 ) , InfoGAN ( Chen et al. , 2016 ) , and BiGAN ( Donahue et al. , 2017 ; Dumoulin et al. , 2017 ) . Those techniques typically do not use hierarchical latent variables , and the inference of the latent variables is approximate . Notably , recent advances suggest that having hierarchical latent variables may be beneficial ( Vahdat & Kautz , 2020 ) . In addition , the coarse-to-fine fashion of the generation process has also been discussed in other generative models , such as the Laplacian pyramid of adversarial networks ( Denton et al. , 2015 ) , and multi-scale autoregressive models ( Reed et al. , 2017 ) . Disentangled representations ( Tenenbaum & Freeman , 2000 ; DiCarlo & Cox , 2007 ; Bengio et al. , 2013 ) are another important aspect in understanding how a model generates images ( Higgins et al. , 2018 ) . In particular , disentangled high-level representations have been discussed and improved based on information-theoretic principles ( Cheung et al. , 2015 ; Chen et al. , 2016 ; 2018a ; Higgins et al. , 2017 ; Kipf et al. , 2020 ; Kim & Mnih , 2018 ; Locatello et al. , 2019 ; Ramesh et al. , 2018 ) . Apart from the high-level representations , a multi-scale structure also lies in the distribution of natural images . If a model can separate information of different scales , then its multi-scale representations can be used to perform other tasks , such as style transfer ( Gatys et al. , 2016 ; Zhu et al. , 2017 ) , face mixing ( Karras et al. , 2019 ; Gambardella et al. , 2019 ; Karras et al. , 2020 ) , and texture synthesis ( Bergmann et al. , 2017 ; Jetchev et al. , 2016 ; Gatys et al. , 2015 ; Johnson et al. , 2016 ; Ulyanov et al. , 2016 ) . Typically , in flow-based generative models , a Gaussian distribution is used as the prior for the latent space . Due to the rotational symmetry of the Gaussian prior , an arbitrary rotation of the latent space would lead to the same likelihood . Sparse priors ( Olshausen & Field , 1996 ; 1997 ; Hyvärinen & Oja , 2000 ) were proposed as an important tool for unsupervised learning , and they lead to better explainability in various domains ( Ainsworth et al. , 2018 ; Arora et al. , 2018 ; Zhang et al. , 2019 ) .
To break the symmetry of the Gaussian prior and further improve the explainability , we introduce a sparse prior to flow-based models . Please refer to Figure 12 for a quick illustration of the difference between the Gaussian prior and the sparse prior , where the sparse prior leads to better disentanglement . The renormalization group ( RG ) has a broad impact ranging from particle physics to statistical physics . Apart from the analytical studies in field theories ( Wilson , 1971 ; Fisher , 1998 ; Stanley , 1999 ) , RG has also been useful in numerically simulating quantum states . The multi-scale entanglement renormalization ansatz ( MERA ) ( Vidal , 2008 ; Evenbly & Vidal , 2014 ) implements the hierarchical structure of RG in tensor networks to represent quantum states . The exact holographic mapping ( EHM ) ( Qi , 2013 ; Lee & Qi , 2016 ; You et al. , 2016 ) further extends MERA to a bijective ( unitary ) flow between latent product states and visible entangled states . Recently , Li & Wang ( 2018 ) and Hu et al . ( 2020 ) incorporated the MERA structure and deep neural networks to design a flow-based generative model that allows the machine to learn the EHM from statistical physics and quantum field theory actions . In quantum machine learning , recent developments of quantum convolutional neural networks ( Cong et al. , 2019 ) also utilize the MERA structure . The similarity between RG and deep learning has been discussed in several works ( Bény , 2013 ; Mehta & Schwab , 2014 ; Bény & Osborne , 2015 ; Oprisa & Toth , 2017 ; Lin et al. , 2017 ; Gan & Shu , 2017 ) . Information-theoretic objectives that guide machine-learned RG transformations have been proposed in recent works ( Koch-Janusz & Ringel , 2018 ; Hu et al. , 2020 ; Lenggenhager et al. , 2020 ) . The meaning of the emergent latent space has been related to quantum gravity ( Swingle , 2012 ; Pastawski et al. , 2015 ) , which leads to the exciting development of machine learning holography ( You et al. , 2018 ; Hashimoto et al. , 2018 ; Hashimoto , 2019 ; Akutagawa et al. , 2020 ; Hashimoto et al. , 2020 ) . 3 METHODS . Flow-based generative models . Flow-based generative models are a family of generative models with tractable distributions , which allow efficient sampling and exact evaluation of the probability density ( Dinh et al. , 2015 ; 2017 ; Kingma & Dhariwal , 2018 ; Chen et al. , 2019 ) . The key idea is to build a bijective map G ( z ) = x between visible variables x and latent variables z . The visible variables x are the data that we want to generate , which may follow a complicated probability distribution , while the latent variables z usually have a simple distribution that can easily be sampled , for example the i.i.d . Gaussian distribution . In this way , the data can be efficiently generated by first sampling z and mapping it to x through x = G ( z ) . In addition , we can get the probability associated with each data sample x ,

$$\log p_X ( x ) = \log p_Z ( z ) - \log \left| \det \frac{\partial G ( z )}{\partial z} \right| . \quad (1)$$

The bijective map G ( z ) = x is usually composed of a series of bijectors , $G ( z ) = G_1 \circ G_2 \circ \cdots \circ G_n ( z )$ , such that each bijector layer $G_i$ has a tractable Jacobian determinant and can be inverted efficiently . The two key ingredients in flow-based models are the design of the bijective map G and the choice of the prior distribution $p_Z ( z )$ . Structure of RG-Flow networks .
Much of the prior research has focused on designing more powerful bijective blocks for the generator G to improve its expressive power and to achieve better approximations of complicated probability distributions . Here , we focus on designing the architecture that arranges the bijective blocks in a hierarchical structure to separate features of different scales in the data and to disentangle latent representations . Our design is motivated by the idea of RG in physics , which progressively separates the coarse-grained data statistics from the fine-grained statistics by local transformations at different scales . Let x be the visible variables , or the input image ( level-0 ) , denoted as $x^{(0)} \equiv x$ . A step of the RG transformation extracts the coarse-grained information $x^{(1)}$ to send to the next layer ( level-1 ) , and splits out the rest of the fine-grained information as auxiliary variables $z^{(0)}$ . The procedure can be described by the following recursive equation ( at level-h for example ) ,

$$x^{(h+1)} , z^{(h)} = R_h ( x^{(h)} ) , \quad (2)$$

which is illustrated in Fig . 1 ( a ) , where $\dim ( x^{(h+1)} ) + \dim ( z^{(h)} ) = \dim ( x^{(h)} )$ , and the RG transformation $R_h$ can be made invertible . At each level , the transformation $R_h$ is a local bijective map , which is constructed by stacking trainable bijective blocks . We will specify its details later . The split-out information $z^{(h)}$ can be viewed as latent variables arranged at different scales . Then the inverse RG transformation $G_h \equiv R_h^{-1}$ simply generates the fine-grained image ,

$$x^{(h)} = R_h^{-1} ( x^{(h+1)} , z^{(h)} ) = G_h ( x^{(h+1)} , z^{(h)} ) . \quad (3)$$

The highest-level image $x^{(h_L)} = G_{h_L} ( z^{(h_L)} )$ can be considered as generated directly from latent variables $z^{(h_L)}$ without referring to any higher-level coarse-grained image , where $h_L = \log_2 L - \log_2 m$ for an original image of size $L \times L$ with local transformations acting on kernel size $m \times m$ . Therefore , given the latent variables $z = \{ z^{(h)} \}$ at all levels h , the original image can be restored by the following nested maps , as illustrated in Fig . 1 ( b ) ,

$$x \equiv x^{(0)} = G_0 ( G_1 ( G_2 ( \cdots , z^{(2)} ) , z^{(1)} ) , z^{(0)} ) \equiv G ( z ) , \quad (4)$$

where $z = \{ z^{(0)} , \cdots , z^{(h_L)} \}$ . RG-Flow is a flow-based generative model that uses the above composite bijective map G as the generator . To model the RG transformation , we arrange the bijective blocks in a hierarchical network architecture . Fig . 2 ( a ) shows the side view of the network , where each green or yellow block is a local bijective map . Following the notation of MERA networks , the green blocks are the disentanglers , which reparametrize local variables to reduce their correlations , and the yellow blocks are the decimators , which separate the decimated features out as latent variables . The blue dots on the bottom are the visible variables x from the data , and the red crosses are the latent variables z . We omit the color channels of the image in the illustration , since we keep the number of color channels unchanged through the transformation . Fig . 2 ( b ) shows the top-down view of a step of the RG transformation . The green/yellow blocks ( disentanglers/decimators ) are interwoven on top of each other . The covering area of a disentangler or decimator is defined by the kernel size $m \times m$ of the bijector . For example , in Fig . 2 ( b ) , the kernel size is $4 \times 4$ . After the decimator , three fourths of the degrees of freedom are decimated into latent variables ( red crosses in Fig . 2 ( a ) ) , so the edge length of the image is halved .
As a mathematical description of the single-step RG transformation $R_h$ : in each block ( p , q ) , labeled by $p , q = 0 , 1 , \ldots , \frac{L}{2^h m} - 1$ , the mapping from $x^{(h)}$ to $( x^{(h+1)} , z^{(h)} )$ is given by

$$\left\{ y^{(h)}_{2^h ( mp + \frac{m}{2} + a , \, mq + \frac{m}{2} + b )} \right\}_{( a , b ) \in \Lambda^1_m} = R^{\mathrm{dis}}_h \left( \left\{ x^{(h)}_{2^h ( mp + \frac{m}{2} + a , \, mq + \frac{m}{2} + b )} \right\}_{( a , b ) \in \Lambda^1_m} \right) ,$$

$$\left\{ x^{(h+1)}_{2^h ( mp + a , \, mq + b )} \right\}_{( a , b ) \in \Lambda^2_m} , \; \left\{ z^{(h)}_{2^h ( mp + a , \, mq + b )} \right\}_{( a , b ) \in \Lambda^1_m \setminus \Lambda^2_m} = R^{\mathrm{dec}}_h \left( \left\{ y^{(h)}_{2^h ( mp + a , \, mq + b )} \right\}_{( a , b ) \in \Lambda^1_m} \right) , \quad (5)$$

where $\Lambda^k_m = \{ ( ka , kb ) \mid a , b = 0 , 1 , \ldots , \frac{m}{k} - 1 \}$ denotes the set of pixels in an $m \times m$ square with stride k , and y is the intermediate result after the disentangler but before the decimator . The notation $x^{(h)}_{( i , j )}$ stands for the variable ( a vector of all channels ) at the pixel ( i , j ) and at the RG level h ( similarly for y and z ) . The disentanglers $R^{\mathrm{dis}}_h$ and decimators $R^{\mathrm{dec}}_h$ can be any bijective neural networks . Practically , we use the coupling layer proposed in the Real NVP networks ( Dinh et al. , 2017 ) to build them , with a detailed description in Appendix A . By specifying the RG transformation $R_h = R^{\mathrm{dec}}_h \circ R^{\mathrm{dis}}_h$ above , the generator $G_h \equiv R_h^{-1}$ is automatically specified as the inverse transformation . Training objective . After decomposing the statistics into multiple scales , we need to make the latent features decoupled . So we assume that the latent variables z are independent random variables , described by a factorized prior distribution

$$p_Z ( z ) = \prod_l p ( z_l ) , \quad (6)$$

where l labels every element in z , including the RG level , the pixel position , and the channel . This prior gives the network the incentive to minimize the mutual information between latent variables . This minimal bulk mutual information ( minBMI ) principle was previously proposed to be the information-theoretic principle that defines the RG transformation ( Li & Wang , 2018 ; Hu et al. , 2020 ) . Starting from a set of independent latent variables z , the generator G should build up correlations locally at different scales , such that the multi-scale correlation structure can emerge in the resulting image x to model the correlated probability distribution of the data . To achieve this goal , we should maximize the log likelihood for x drawn from the data set . The loss function to minimize reads

$$\mathcal{L} = - \mathbb{E}_{x \sim p_{\mathrm{data}} ( x )} \log p_X ( x ) = - \mathbb{E}_{x \sim p_{\mathrm{data}} ( x )} \left( \log p_Z ( R ( x ) ) + \log \left| \det \frac{\partial R ( x )}{\partial x} \right| \right) , \quad (7)$$

where $R ( x ) \equiv G^{-1} ( x ) = z$ denotes the RG transformation , which contains trainable parameters . By optimizing the parameters , the network learns the optimal RG transformation from the data . Receptive fields of latent variables . Due to the nature of local transformations in our hierarchical network , we can define the generation causal cone of a latent variable to be the affected area when that latent variable is changed . This is illustrated as the red cone in Fig . 2 ( c ) . To visualize the latent space representation , we define the receptive field of a latent variable $z_l$ as

$$\mathrm{RF}_l = \mathbb{E}_{z \sim p_Z ( z )} \left| \frac{\partial G ( z )}{\partial z_l} \right|_c , \quad (8)$$

where $| \cdot |_c$ denotes the 1-norm over the color channels . The receptive field reflects the response of the generated image to an infinitesimal change of the latent variable $z_l$ , averaged over $p_Z ( z )$ .
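The expectation in equation 8 can be estimated by Monte-Carlo sampling ; the sketch below is hypothetical code ( it assumes a PyTorch generator G and , for illustration , a standard Gaussian prior ) that extracts the Jacobian column $\partial G ( z ) / \partial z_l$ with a Jacobian-vector product :

```python
import torch

def receptive_field(G, z_shape, l, n_samples=16):
    # Monte-Carlo estimate of RF_l = E_z |dG(z)/dz_l|_c (equation 8).
    e_l = torch.zeros(z_shape)
    e_l.view(-1)[l] = 1.0                    # one-hot tangent selecting latent z_l
    rf = 0.0
    for _ in range(n_samples):
        z = torch.randn(z_shape)             # sample z from the prior
        _, jvp = torch.autograd.functional.jvp(G, (z,), (e_l,))
        rf = rf + jvp.abs().sum(dim=0)       # 1-norm over color channels -> [H, W] map
    return rf / n_samples
```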
Therefore , the receptive field of a latent variable is always contained in its generation causal cone . Higher-level latent variables have larger receptive fields than the lower-level ones . In particular , if the receptive fields of two latent variables do not overlap , which is often the case for lower-level latent variables , they automatically become disentangled in the representation . Image inpainting and error correction . Another advantage of the network locality can be demonstrated in the inpainting task . Similar to the generation causal cone , we can define the inference causal cone , shown as the blue cone in Fig . 2 ( d ) . If we perturb a pixel at the bottom of the blue cone , all the latent variables within the blue cone can be affected , whereas the latent variables outside the cone can not be affected . An important property of the hyperbolic tree-like network is that the higher levels contain exponentially fewer latent variables . Even though the inference causal cone expands as we go into higher levels , the number of latent variables dilutes exponentially as well , resulting in a constant number of latent variables covered by the inference causal cone on each level . Therefore , if a small local region on an image is corrupted , only $O ( \log L )$ latent variables need to be modified , where L is the edge length of the entire image , while for globally connected networks , all $O ( L^2 )$ latent variables have to be varied . Sparse prior distribution . We have chosen to hard-code the RG information principle by using a factorized prior distribution , i.e. , $p_Z ( z ) = \prod_l p ( z_l )$ . The common practice is to choose $p ( z_l )$ to be the standard Gaussian distribution , which is spherically symmetric . If we apply any rotation to z , the distribution will remain the same . Therefore , we can not avoid different features being mixed under an arbitrary rotation . To overcome this issue , we use an anisotropic sparse prior distribution for $p_Z ( z )$ . In our implementation , we choose the Laplacian distribution $p ( z_l ) = \frac{1}{2 b} \exp ( - | z_l | / b )$ , which is sparser than the Gaussian distribution and breaks the spherical symmetry of the latent space . In Appendix E , we show a two-dimensional pinwheel example to illustrate this intuition . This heuristic method encourages the model to find more semantically meaningful representations by breaking the spherical symmetry .
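The effect of the anisotropy can be seen directly in the log-density ; a minimal sketch ( illustrative code , with the scale b as a free choice ) contrasts the factorized Laplacian prior with the rotationally symmetric Gaussian :

```python
import math
import torch

def laplace_log_prob(z, b=1.0):
    # Factorized Laplacian prior: log p(z) = sum_l ( -|z_l| / b - log(2b) ).
    # The 1-norm makes the density anisotropic, so rotating z changes the likelihood.
    return (-z.abs() / b).sum() - z.numel() * math.log(2.0 * b)

def gaussian_log_prob(z):
    # Standard Gaussian prior: depends on z only through ||z||, hence rotation-invariant.
    return (-0.5 * z ** 2).sum() - 0.5 * z.numel() * math.log(2.0 * math.pi)
```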
The paper proposes a method, named RG-Flow, which combines the ideas of the renormalization group (RG) and flow-based models. The RG is applied to separate signal statistics of different scales in the input distribution, and the flow-based component represents the information at each scale in its latent variables, under a sparse prior distribution. Inspired by the receptive field of CNNs, the authors visualize the latent space representation, which reveals progressive semantics at different levels, matching intuitive expectations.
SP:2cfe676c21709d69aa3bab1480440fda0a365c3f
Learning from Demonstrations with Energy based Generative Adversarial Imitation Learning
1 INTRODUCTION . Motivated by applying reinforcement learning algorithms to more realistic tasks , we find that most realistic environments can not feed an explicit reward signal back to the agent immediately . This becomes a bottleneck for applying traditional reinforcement learning methods to more realistic scenarios . So how to infer the latent reward function from expert demonstrations is of great significance . Recently , a lot of great work has been proposed to solve this problem . These methods have been successfully applied in scientific inquiries , such as the Stanford autonomous helicopter Abbeel et al . ( 2006 ) Abbeel et al . ( 2007 ) Ng et al . ( 2004 ) Coates et al . ( 2008 ) Abbeel et al . ( 2008a ) Abbeel et al . ( 2010 ) , as well as practical challenges such as navigation Ratliff et al . ( 2006 ) Abbeel et al . ( 2008b ) Ziebart et al . ( 2008 ) Ziebart et al . ( 2010 ) and intelligent building controls Barrett & Linder ( 2015 ) . The goal of imitation learning is to mimic the expert behavior from expert demonstrations without access to a reinforcement signal from the environment . The algorithms in this field can be divided into two broad categories : behavioral cloning and inverse reinforcement learning . Behavioral cloning formulates this problem as a supervised learning problem , which aims at mapping state-action pairs from expert trajectories to a policy . These methods suffer from the problem of compounding errors ( covariate shift ) : they only learn the actions of the expert but do not reason about what the expert is trying to achieve . By contrast , inverse reinforcement learning recovers the reward function from expert demonstrations and then optimizes the policy under this recovered reward function . In this paper , we propose energy-based generative adversarial imitation learning , which views the discriminator as an energy function without an explicit probabilistic interpretation . The energy function computed by the discriminator can be viewed as a trainable cost function for the generator , while the discriminator is trained to assign low energy values to the regions of expert demonstrations , and higher energy values outside these regions . We use an auto-encoder to represent the discriminator , and the reconstruction error is taken to be the energy . There are many other choices to learn the energy function , but an auto-encoder is quite efficient . Our main contributions are summarized as follows . • An EB-GAIL framework with a discriminator using an auto-encoder architecture , in which the energy is the reconstruction error . • Theoretical analysis showing that the policy rolls out trajectories that are indistinguishable from the distribution of the expert demonstrations by matching the occupancy measure with the expert policy . • Experiments showing that EB-GAIL outperforms several SoTA imitation learning algorithms while the training process for EB-GAIL can be more stable . 2 BACKGROUND . In this section , we ’ ll briefly introduce the basic concepts in direct reinforcement learning Sutton & Barto ( 1998 ) Sugiyama ( 2015 ) , inverse reinforcement learning Ng & Russell ( 2000 ) , imitation learning Bagnell ( 2015 ) , and energy-based models LeCun et al . ( 2006 ) . 2.1 DIRECT REINFORCEMENT LEARNING . Reinforcement learning Sutton & Barto ( 1998 ) Sugiyama ( 2015 ) , which is usually used for sequential decision-making problems , can help us learn from interactions with the environment .
In direct reinforcement learning , at each time step the agent receives a state $s_t$ and chooses an action $a_t$ from the action space A , following a stochastic policy $\pi ( a_t | s_t )$ . After that , the environment transitions to a new state $s_{t+1}$ and gives a scalar reward $r_t$ back to the agent . This process continues until the agent reaches a terminal state . In this case , the training model can be thought of as a Markov decision process ( MDP ) , which is a tuple ( S , A , T , γ , D , R ) . In this tuple , S is the state space ; A is the action space ; $T = P_{sa}$ is a probability matrix for state transitions , given by the environment dynamics ; $\gamma \in ( 0 , 1 ]$ is a discount factor ; D is the initial-state distribution ; and $R : S \times A \to \mathbb{R}$ is the reward function , which is assumed to be bounded in absolute value by 1 . In addition , we define a policy π as a mapping from states to probability distributions over actions , which is also called a stochastic policy . For a certain task , the goal of direct reinforcement learning is to maximize the total future reward . For simplicity , we define the value function to be a prediction of the total future reward , expressed as a discounted future reward : $V = \sum_{t=0}^{\infty} \gamma^t R_t$ . Besides , we also define the action value function as $Q ( s , a ) = \mathbb{E} [ R_t | s_t = s , a_t = a ]$ , which is the expected return for selecting action a in state s . According to Bellman optimality , the optimal action value function $Q^* ( s , a )$ is the maximum action value achievable by any policy for state s and action a . 2.2 INVERSE REINFORCEMENT LEARNING . The goal of inverse reinforcement learning is to infer the reward signal from expert demonstrations , which are assumed to be observations of optimal behavior Ng & Russell ( 2000 ) . In the past decade , a lot of great work has been proposed toward enlarging the representational ability of the reward function . To take some traditional methods as examples : in 2010 , FIRL Levine & Popović ( 2010 ) was proposed to learn a set of composite features based on logical conjunctions with nonlinear functions for the reward signal . Later , non-parametric methods based on Gaussian processes ( GP ) Rasmussen & Williams ( 2006 ) were proposed to enlarge the function space of the latent reward to allow for non-linearity Levine & Popović ( 2010 ) . To undertake the learning of an abstract structure with smaller data sets , Jin & Spanos ( 2015 ) combined deep belief networks and Gaussian processes to improve the existing GP-IRL algorithm . To handle the substantial noise in sensor measurements , techniques such as Bayesian programming and probabilistic graphical models have been applied . For example , in 2007 , Ramachandran & Amir ( 2007 ) proposed a Bayesian nonparametric approach to construct the reward function features in IRL , the so-called Bayesian IRL . Later , Choi & Kim ( 2014 ) extended this algorithm by defining a prior on the composite features , which are defined to be the logical conjunctions of the atomic features . By assuming that each trajectory can be generated by multiple locally consistent reward functions , Nguyen et al . ( 2015 ) used an expectation-maximization ( EM ) algorithm to learn the different reward functions and the stochastic transitions between them , in order to jointly improve the likelihood of the expert ’ s demonstrated trajectories .
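For instance , the discounted return that defines V can be computed for a finite trajectory as follows ( illustrative code ; the rewards and γ are made-up values ) :

```python
def discounted_return(rewards, gamma=0.99):
    # V = sum_t gamma^t * R_t, accumulated backwards: v_t = r_t + gamma * v_{t+1}.
    v = 0.0
    for r in reversed(rewards):
        v = r + gamma * v
    return v

# Example: discounted_return([0.0, 0.0, 1.0], gamma=0.9) == 0.81
```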
As a result , the most likely partition of a trajectory into segments generated from the different locally consistent reward functions selected by EM can be derived . Experiments show that the algorithm outperforms the SoTA EM clustering with maximum likelihood IRL . 2.3 IMITATION LEARNING . Imitation learning is the study of algorithms that can mimic expert demonstrations or a teacher ’ s behavior . Unlike inverse reinforcement learning , the goal of imitation learning is to obtain a policy from the teacher ’ s behavior rather than to recover the reward function for certain tasks . The algorithms in imitation learning can be classified into two categories : behavioral cloning and inverse reinforcement learning . One fatal weakness of behavioral cloning is that these methods only learn the actions of the teacher rather than the motivation behind the teacher ’ s behavior . To solve this problem , inverse reinforcement learning was proposed to recover the reward function for decision-making problems . By assuming that the teacher ’ s behavior is optimal , these methods recover the reward function of a Markov decision process . When combined with direct reinforcement learning methods , inverse reinforcement learning can thus realize the process of mimicking the teacher ’ s behavior . 2.4 ENERGY BASED MODEL . The essence of the energy-based model LeCun et al . ( 2006 ) is to build a function that maps each point of an input space to a single scalar , which is called the “ energy ” . The learning phase is a data-driven process that shapes the energy surface in such a way that the desired configurations get assigned low energies , while the incorrect ones are given high energies . Supervised learning falls into this framework : for each x in the training set , the energy of the pair ( x , y ) takes low values when y is the correct label and higher values for incorrect y ’ s . Similarly , when modeling x alone within an unsupervised learning setting , lower energy is attributed to the data manifold . The term contrastive sample is often used to refer to a data point causing an energy pull-up , such as the incorrect y ’ s in supervised learning and points from low-density regions in unsupervised learning . Denoting the energy function by ε , the connection between probability and energy can be built through the Gibbs distribution :

$$P ( y | x ) = \frac{\exp ( - \beta \varepsilon ( y , x ) )}{\int_{y \in Y} \exp ( - \beta \varepsilon ( y , x ) )} , \quad (1)$$

where the denominator is the partition function , which represents the total energy in the data space , and β is an arbitrary positive constant . The formulation of this connection might seem arbitrary , but other formulations can be obtained by re-defining the energy function . 3 ENERGY BASED GENERATIVE ADVERSARIAL IMITATION LEARNING ( EB-GAIL ) . 3.1 METHODOLOGY . The output of the discriminator goes through an objective functional in order to shape the energy function , assigning low energy to the regions near the expert demonstrations and higher energy to the other regions . In this work , we use an auto-encoder to represent the discriminator , and the reconstruction error of the auto-encoder is taken to be the energy . Meanwhile , we use a margin loss to train EB-GAIL : one loss function trains the discriminator , and the other is taken to be the reward function for the reinforcement learning procedure .
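In the discrete case , equation 1 reduces to a softmax over negative energies , as in this small sketch ( illustrative code ; the energy values are made up ) :

```python
import torch

def gibbs_probability(energies, beta=1.0):
    # P(y|x) proportional to exp(-beta * E(y, x)); the partition function becomes a sum over labels.
    return torch.softmax(-beta * energies, dim=-1)

# Lower energy -> higher probability:
# gibbs_probability(torch.tensor([0.1, 2.0, 5.0])) puts most of the mass on the first label.
```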
Given a positive margin $\mathit{margin}$ , a state-action pair $\chi_E$ sampled from the expert demonstrations , and a state-action pair $\chi_i$ rolled out by the trained policy , the discriminator loss function $L_D$ is formally defined by

$$L_D = D ( \chi_E ) + [ \mathit{margin} - D ( \chi_i ) ]_+ , \quad (2)$$

where $[ \cdot ]_+ = \max ( 0 , \cdot )$ . Meanwhile , the reward function for the reinforcement learning procedure is

$$r ( \chi_i ) = - D ( \chi_i ) . \quad (3)$$

Maximizing the total reward in the reinforcement learning procedure is similar to minimizing the second term of $L_D$ . In practice , we observe that this loss function can effectively avoid the gradient vanishing and mode collapse problems .
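A minimal sketch of equations 2 and 3 follows ( hypothetical code ; the auto-encoder , the margin value , and all names are illustrative assumptions , not the authors ’ implementation ) :

```python
import torch
import torch.nn.functional as F

def ebgail_losses(autoencoder, expert_sa, policy_sa, margin=10.0):
    def energy(x):
        # The discriminator's energy D(x) is the per-sample reconstruction error.
        return F.mse_loss(autoencoder(x), x, reduction="none").flatten(1).mean(dim=1)

    d_expert, d_policy = energy(expert_sa), energy(policy_sa)
    disc_loss = (d_expert + torch.clamp(margin - d_policy, min=0.0)).mean()  # equation 2
    reward = -d_policy.detach()                                              # equation 3
    return disc_loss, reward
```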
The authors propose a discriminator-based approach to inverse reinforcement learning (IRL). The discriminator function is trained to attain large values ("energy") on trajectories from the current policy and small values on trajectories from an expert policy. The current policy is then improved by using the negative discriminator as a reward signal. The specific discriminator suggested is an autoencoder loss. The authors continue to provide a proof that assuming their discriminator/generator attain a Nash equilibrium, the occupancy measure of the trained policy matches that of the expert policy. They follow up with demonstrating better performance of their approach compared to certain baselines when tested on a number of tasks on Physics simulators.
SP:2f7f3a043edf8bbe4164dc748c7fbfc7c7338a02
Local Search Algorithms for Rank-Constrained Convex Optimization
We propose greedy and local search algorithms for the rank-constrained convex optimization problem $\min_{\mathrm{rank} ( A ) \le r^*} R ( A )$ , given a convex function $R : \mathbb{R}^{m \times n} \to \mathbb{R}$ and a parameter $r^*$ . These algorithms consist of repeating two steps : ( a ) adding a new rank-1 matrix to A and ( b ) enforcing the rank constraint on A . We refine and improve the theoretical analysis of Shalev-Shwartz et al . ( 2011 ) , and show that if the rank-restricted condition number of R is κ , a solution A with rank $O \left( r^* \cdot \min \left\{ \kappa \log \frac{R ( 0 ) - R ( A^* )}{\epsilon} , \kappa^2 \right\} \right)$ and $R ( A ) \le R ( A^* ) + \epsilon$ can be recovered , where $A^*$ is the optimal solution . This significantly generalizes associated results on sparse convex optimization , as well as rank-constrained convex optimization for smooth functions . We then introduce new practical variants of these algorithms that have superior runtime and recover better solutions in practice . We demonstrate the versatility of these methods on a wide range of applications involving matrix completion and robust principal component analysis . 1 INTRODUCTION . Given a real-valued convex function $R : \mathbb{R}^{m \times n} \to \mathbb{R}$ on real matrices and a parameter $r^* \in \mathbb{N}$ , the rank-constrained convex optimization problem consists of finding a matrix $A \in \mathbb{R}^{m \times n}$ that minimizes R ( A ) among all matrices of rank at most $r^*$ :

$$\min_{\mathrm{rank} ( A ) \le r^*} R ( A ) \quad (1)$$

Even though R is convex , the rank constraint makes this problem non-convex . Furthermore , it is known that this problem is NP-hard and even hard to approximate ( Natarajan ( 1995 ) ; Foster et al . ( 2015 ) ) . In this work , we propose efficient greedy and local search algorithms for this problem . Our contribution is twofold : 1 . We provide theoretical analyses that bound the rank and objective value of the solutions returned by the two algorithms in terms of the rank-restricted condition number , which is the natural generalization of the condition number to low-rank subspaces . The results are significantly stronger than previously known bounds for this problem . 2 . We experimentally demonstrate that , after careful performance adjustments , the proposed general-purpose greedy and local search algorithms have superior performance to other methods , even compared to some that are tailored to a particular problem . Thus , these algorithms can be considered as a general tool for rank-constrained convex optimization and a viable alternative to methods that use convex relaxations or alternating minimization . The rank-restricted condition number . Similarly to the work in sparse convex optimization , a restricted condition number quantity has been introduced as a reasonable assumption on R . If we let $\rho^+_r$ be the maximum smoothness bound and $\rho^-_r$ be the minimum strong convexity bound only along rank-r directions of R ( these are called rank-restricted smoothness and strong convexity , respectively ) , the rank-restricted condition number is defined as $\kappa_r = \frac{\rho^+_r}{\rho^-_r}$ . If this quantity is bounded , one can efficiently find a solution A with $R ( A ) \le R ( A^* ) + \epsilon$ and rank $r = O ( r^* \cdot \kappa_{r + r^*} \frac{R ( 0 )}{\epsilon} )$ using a greedy algorithm ( Shalev-Shwartz et al . ( 2011 ) ) . However , this is not an ideal bound , since the rank scales linearly with $\frac{R ( 0 )}{\epsilon}$ , which can be particularly high in practice . Inspired by the analogous literature on sparse convex optimization by Natarajan ( 1995 ) ; Shalev-Shwartz et al . ( 2010 ) ; Zhang ( 2011 ) ; Jain et al . ( 2014 ) and more recently Axiotis & Sviridenko ( 2020 ) , one would hope to achieve a logarithmic dependence or no dependence at all on R ( 0 ) .
In this paper we achieve this goal by providing an improved analysis showing that the greedy algorithm of Shalev-Shwartz et al. (2011) in fact returns a matrix of rank r = O(r* · κ_{r+r*} log((R(0) − R(A*))/ε)). We also provide a new local search algorithm, together with an analysis guaranteeing a rank of r = O(r* · κ²_{r+r*}). Apart from significantly improving upon previous work on rank-restricted convex optimization, these results directly generalize a lot of work in sparse convex optimization, e.g., Natarajan (1995); Shalev-Shwartz et al. (2010); Jain et al. (2014). Our algorithms and theorem statements can be found in Section 2.

Runtime improvements. Even though the rank bound guaranteed by our theoretical analyses is adequate, the algorithm runtimes leave much to be desired. In particular, both the greedy algorithm of Shalev-Shwartz et al. (2011) and our local search algorithm have to solve an optimization problem in each iteration in order to find the best possible linear combination of the features added so far. Even for the case R(A) = (1/2) Σ_{(i,j)∈Ω} (M − A)²_{ij}, this requires solving a least squares problem on |Ω| examples and r² variables. For practical implementations of these algorithms, we circumvent this issue by solving a related optimization problem that is usually much smaller. This instead requires solving n least squares problems with a total of |Ω| examples, each on r variables. This not only reduces the size of the problem by a factor of r, but also allows for a straightforward distributed implementation. Interestingly, our theoretical analyses still hold for these variants. We propose an additional heuristic that reduces the runtime even more drastically, which is to run only a few (fewer than 10) iterations of the algorithm used for solving the inner optimization problem. Experimental results show that this modification not only does not significantly worsen results, but for machine learning applications also acts as a regularization method that can dramatically improve generalization. These matters, as well as additional improvements for making the local search algorithm more practical, are addressed in Section 2.3.

Roadmap. In Section 2, we provide the descriptions and theoretical results for the algorithms used, along with several modifications to boost performance. In Section 3, we evaluate the proposed greedy and local search algorithms on optimization problems such as robust PCA. Then, in Section 4, we evaluate their generalization performance on machine learning problems such as matrix completion.

2 ALGORITHMS & THEORETICAL GUARANTEES

In Sections 2.1 and 2.2 we state and provide theoretical performance guarantees for the basic greedy and local search algorithms, respectively. Then, in Section 2.3, we state the algorithmic adjustments that we propose in order to make the algorithms efficient in terms of runtime and generalization performance. A discussion regarding the tightness of the theoretical analysis is deferred to Appendix A.4. When the dimension is clear from context, we will denote the all-ones vector by 1, and the vector that is 0 everywhere except for a 1 at position i by 1_i. Given a matrix A, we denote by im(A) its column span.
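To make the runtime improvement described above concrete, the following is a minimal sketch, in our own notation rather than the paper's code, of the per-column decomposition for the matrix-completion objective R(A) = (1/2) Σ_{(i,j)∈Ω} (M − A)²_{ij}: with U fixed, fitting V decouples into n independent least squares problems, one per column of M, each with only r variables.

import numpy as np

def optimize_V_columnwise(U, M, mask):
    """Fit V (n x r) so that A = U @ V.T matches M on the observed entries.
    Column j of A depends only on row j of V, so this is n small least
    squares problems instead of one problem with r*n variables."""
    m, r = U.shape
    n = M.shape[1]
    V = np.zeros((n, r))
    for j in range(n):
        rows = np.nonzero(mask[:, j])[0]  # observed entries in column j
        if rows.size:
            V[j], *_ = np.linalg.lstsq(U[rows], M[rows, j], rcond=None)
    return V

Because the n subproblems are independent, this loop also parallelizes trivially, which is the "straightforward distributed implementation" alluded to above.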
One notion that we will find useful is that of singular value thresholding. More specifically, given a rank-k matrix A ∈ R^{m×n} with SVD A = Σ_{i=1}^{k} σ_i u_i v_i^T, where σ_1 ≥ · · · ≥ σ_k, as well as an integer parameter r ≥ 1, we define H_r(A) = Σ_{i=1}^{r} σ_i u_i v_i^T to be the operator that truncates A to its r highest singular values.

2.1 GREEDY

Algorithm 1 (greedy) was first introduced in Shalev-Shwartz et al. (2011) as the GECO algorithm. It works by iteratively adding a rank-1 matrix to the current solution. This matrix is chosen as the rank-1 matrix that best approximates the gradient, i.e., the pair of singular vectors corresponding to the maximum singular value of the gradient. In each iteration, an additional procedure is run to optimize the combination of previously chosen singular vectors. In Shalev-Shwartz et al. (2011), the guarantee on the rank of the solution returned by the algorithm is r* κ_{r+r*} R(0)/ε. The main bottleneck in improving on the R(0)/ε factor is the fact that the analysis is done in terms of the squared nuclear norm of the optimal solution. As the worst-case discrepancy between the squared nuclear norm and the rank is R(0)/ε, their bounds inherit this factor. Our analysis works directly with the rank, in the spirit of sparse optimization results (e.g., Shalev-Shwartz et al. (2011); Jain et al. (2014); Axiotis & Sviridenko (2020)). A challenge compared to these works is the need for a suitable notion of "intersection" between two sets of vectors. The main technical contribution of this work is to show that the orthogonal projection of one set of vectors onto the span of the other is such a notion and, based on this, to define a decomposition of the optimal solution that is used in the analysis.

Algorithm 1 Greedy
1: procedure GREEDY(r ∈ N: target rank)
2:     function to be minimized R : R^{m×n} → R
3:     U ∈ R^{m×0}                          ▷ Initially rank is zero
4:     V ∈ R^{n×0}
5:     for t = 0 ... r − 1 do
6:         σuv^T ← H_1(∇R(UV^T))            ▷ Max singular value σ and corresp. singular vectors u, v
7:         U ← (U u)                        ▷ Append new vectors as columns
8:         V ← (V v)
9:         U, V ← OPTIMIZE(U, V)
10:    return UV^T
11: procedure OPTIMIZE(U ∈ R^{m×r}, V ∈ R^{n×r})
12:    X ← argmin_{X ∈ R^{r×r}} R(UXV^T)
13:    return UX, V

Theorem 2.1 (Algorithm 1 (greedy) analysis). Let A* be any fixed optimal solution of (1) for some function R and rank bound r*, and let ε > 0 be an error parameter. For any integer r ≥ 2r* · κ_{r+r*} log((R(0) − R(A*))/ε), if we let A = GREEDY(r) be the solution returned by Algorithm 1, then R(A) ≤ R(A*) + ε. The number of iterations is r.

The proof of Theorem 2.1 can be found in Appendix A.2.
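The following is a hedged NumPy sketch of Algorithm 1. The inner OPTIMIZE step is passed in as a user-supplied routine (for instance, the per-column least squares solver sketched earlier); all names are our own illustration, not the paper's implementation.

import numpy as np

def top_singular_pair(G):
    """H_1(G): the leading singular vectors of the gradient G."""
    P, s, Qt = np.linalg.svd(G, full_matrices=False)
    return P[:, :1], Qt[:1, :].T

def greedy(grad_R, optimize, m, n, r):
    U, V = np.zeros((m, 0)), np.zeros((n, 0))
    for _ in range(r):
        u, v = top_singular_pair(grad_R(U @ V.T))
        U, V = np.hstack([U, u]), np.hstack([V, v])  # add a rank-1 direction
        U, V = optimize(U, V)                        # re-fit the combination
    return U @ V.T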
2.2 LOCAL SEARCH

One drawback of Algorithm 1 is that it increases the rank in each iteration. Algorithm 2 is a modification of Algorithm 1 in which the rank is truncated in each iteration. The advantage of Algorithm 2 compared to Algorithm 1 is that it is able to make progress without increasing the rank of A, while Algorithm 1 necessarily increases the rank in each iteration. More specifically, because of the greedy nature of Algorithm 1, some rank-1 components that have been added to A might become obsolete or have reduced benefit after a number of iterations. Algorithm 2 is able to identify such candidates and remove them, thus allowing it to continue making progress.

Theorem 2.2 (Algorithm 2 (local search) analysis). Let A* be any fixed optimal solution of (1) for some function R and rank bound r*, and let ε > 0 be an error parameter. For any integer r ≥ r* · (1 + 8κ²_{r+r*}), if we let A = LOCAL SEARCH(r) be the solution returned by Algorithm 2, then R(A) ≤ R(A*) + ε. The number of iterations is O(r* κ_{r+r*} log((R(0) − R(A*))/ε)).

The proof of Theorem 2.2 can be found in Appendix A.3.

Algorithm 2 Local Search
1: procedure LOCAL SEARCH(r ∈ N: target rank)
2:     function to be minimized R : R^{m×n} → R
3:     U ← 0^{m×r}                          ▷ Initialize with the all-zero solution
4:     V ← 0^{n×r}
5:     for t = 0 ... L − 1 do               ▷ Run for L iterations
6:         σuv^T ← H_1(∇R(UV^T))            ▷ Max singular value σ and corresp. singular vectors u, v
7:         U, V ← TRUNCATE(U, V)            ▷ Reduce rank of UV^T by one
8:         U ← (U u)                        ▷ Append new vectors as columns
9:         V ← (V v)
10:        U, V ← OPTIMIZE(U, V)
11:    return UV^T
12: procedure TRUNCATE(U ∈ R^{m×r}, V ∈ R^{n×r})
13:    UΣV^T ← SVD(H_{r−1}(UV^T))           ▷ Keep all but the minimum singular value
14:    return UΣ, V
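A small sketch of the TRUNCATE step, again our own illustration: drop the smallest singular value of UV^T and return factors of the truncated matrix. Plugging this call between the H_1 step and OPTIMIZE in the greedy sketch above yields the local search variant.

import numpy as np

def truncate(U, V):
    """H_{r-1}(U V^T): keep all but the minimum singular value."""
    P, s, Qt = np.linalg.svd(U @ V.T, full_matrices=False)
    k = U.shape[1] - 1
    return P[:, :k] * s[:k], Qt[:k, :].T  # U' = P_k diag(s_k), V' = Q_k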
This paper considers rank-constrained convex optimization, a fairly general problem that contains several special cases such as matrix completion and robust PCA. The paper presents a local search approach along with an interesting theoretical analysis, and furthermore provides extensive simulations to validate the approach. Overall, the paper gives solid justification for its method.
Semi-supervised learning by selective training with pseudo labels via confidence estimation
1 INTRODUCTION

Semi-supervised learning (SSL) is a powerful technique for delivering the full potential of complex models, such as deep neural networks, by utilizing unlabeled data as well as labeled data to train the model. It is especially useful in practical situations where obtaining labeled data is costly due to, for example, the necessity of expert knowledge. Since deep neural networks are known to be "data-hungry" models, SSL for deep neural networks has been intensely studied and has achieved surprisingly good performance in recent works (Van Engelen & Hoos, 2020). In this paper, we focus on SSL for a classification task, which is the setting most commonly tackled in the literature.

Many recent SSL methods adopt a common approach in which two processes are conducted iteratively: generating pseudo labels of unlabeled data by using the currently trained model, and updating the model by using both labeled and pseudo-labeled data. In the pioneering work (Lee, 2013), pseudo labels are hard ones, represented by one-hot vectors, but recent methods (Tarvainen & Valpola, 2017; Miyato et al., 2018; Berthelot et al., 2019; 2020; Verma et al., 2019; Wang et al., 2019; Zhang & Qi, 2020) often utilize soft pseudo-labels, which may contain several nonzero elements within each label vector. One simple reason to adopt soft pseudo-labels is to alleviate the confirmation bias caused by training with incorrectly pseudo-labeled data, and this seems to contribute successfully to the excellent performance of those methods. However, since soft pseudo-labels only provide weak supervision, those methods often show slow convergence during training (Lokhande et al., 2020). For example, MixMatch (Berthelot et al., 2019), one of the state-of-the-art SSL methods, requires nearly 1,000,000 iterations for training on the CIFAR-10 dataset. In this paper, by contrast, we aim to utilize hard pseudo-labels to design an easy-to-try SSL method in terms of computational efficiency. Obviously, the largest problem to be tackled in this approach is how to alleviate the negative impact of training with incorrect pseudo-labels.

In this work, we propose a novel SSL method that adopts selective training with pseudo labels. To avoid training the model with incorrect pseudo-labels, we explicitly select which pseudo-labeled data should be used to update the model. Specifically, assuming that the loss on incorrectly pseudo-labeled data increases sensitively under data augmentation, we select the data with relatively small loss after applying data augmentation. To conduct this selective training effectively, we estimate the confidence of the pseudo labels and utilize it not only for screening candidate pseudo-labeled data but also for automatically deciding how many pseudo-labeled data should be selected within a mini-batch. For accurate estimation of the confidence, we also propose a new data augmentation method, called MixConf, that enables us to obtain confidence-calibrated models even when the number of training data is small. Experimental results with several benchmark datasets validate the advantage of our SSL method as well as MixConf.

2 PROPOSED METHOD

Figure 2 shows an overview of our method. Given a mini-batch of labeled data and one of unlabeled data, we first generate pseudo labels of the unlabeled data based on the predictions of the current model. Let x ∈ R^m, y ∈ {1, 2, ..., C}, and f : R^m → R^C denote input data, labels, and the classifier to be trained, respectively.
Given the unlabeled input data x^U, the pseudo label ŷ^U is generated by simply taking the argmax of the classifier's output f(x^U). Then, we conduct selective training using both the labeled data and the pseudo-labeled data. In this training, to alleviate the negative effect of training with incorrect pseudo-labels, we explicitly select which data should be used to update the model. Below, we describe the details of this selective training.

2.1 SELECTIVE TRAINING WITH PSEUDO LABELS BASED ON CONFIDENCE

As described previously, the pseudo labels are generated based on the predictions of the current model, and we assume that the confidence of those predictions can also be computed in addition to the pseudo labels. When we use a popular deep neural network architecture, it can be obtained by simply taking the max of the classifier's output (Hendrycks & Gimpel, 2016) as:

    c_i = max_{j ∈ {1, 2, ..., C}} f(x_i^U)[j],    (1)

where c_i is the confidence of the classifier's prediction on the i-th unlabeled data point x_i^U, and f(x)[j] is the j-th element of f(x). When the model is sufficiently confidence-calibrated, the confidence c_i is expected to match the accuracy of the corresponding prediction f(x_i^U) (Guo et al., 2017), which means it also matches the probability that the pseudo label ŷ_i^U is correct.

To avoid training with incorrect pseudo-labels, we explicitly select the data to be used to train the model based on the confidence. This data selection comprises two steps: thresholding the confidence, and selecting relatively small losses calculated on augmented pseudo-labeled data. The first step is quite simple: we pick the pseudo-labeled data whose confidence exceeds a certain threshold c_thr and discard the rest. In the second step, MixConf, which will be introduced later but is essentially a variant of Mixup (Zhang et al., 2018), is applied to both the labeled and unlabeled data to augment them. As in (Berthelot et al., 2019), we shuffle all data and mix them with the original labeled and pseudo-labeled data, which results in {(x̃_i^L, p̃_i^L)}_{i=1}^{B_L} and {(x̃_j^U, p̃_j^U)}_{j=1}^{B_U}, respectively, where p ∈ R^C is a vector-style representation of the label, adopted in order to represent a mixed label. Then, we calculate the standard cross-entropy loss for each mixed data point. Finally, we select the mixed data that yield relatively small losses among all augmented data, and only the corresponding small losses are minimized to train the model.

Why does the small-loss selection work? Our key assumption is that the loss calculated with incorrect labels tends to increase sensitively when the data is augmented. This assumption is supported by the effectiveness of the well-known technique called test-time augmentation (Simonyan & Zisserman, 2015), in which incorrect predictions are suppressed by averaging the model's outputs over several augmentations. Since we conduct confidence thresholding, the loss corresponding to the pseudo-labeled data is guaranteed to be smaller than a certain level defined by the threshold c_thr. However, when we apply data augmentation, namely MixConf, to the pseudo-labeled data, the loss of incorrectly pseudo-labeled data becomes relatively large if the above assumption is valid. This means that selecting relatively small losses after applying MixConf excludes incorrect pseudo-labels, and we can safely train the model using only the selected data.
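A minimal sketch of the confidence computation and thresholding of Eq. (1), our own illustration, assuming probs holds the softmax outputs f(x^U) for a batch of unlabeled data:

import numpy as np

def pseudo_label_and_filter(probs, c_thr):
    """Hard pseudo labels and confidence-based filtering, Eq. (1)."""
    pseudo = probs.argmax(axis=1)   # pseudo label: argmax of f(x^U)
    conf = probs.max(axis=1)        # confidence c_i = max_j f(x_i^U)[j]
    keep = conf > c_thr             # discard low-confidence samples
    return pseudo[keep], conf[keep], keep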
Han et al. (2018) and Lokhande et al. (2020) have presented a similar idea, called the small-loss trick (Han et al., 2018) or speed as a supervisor (Lokhande et al., 2020), to avoid training with incorrect labels. However, their assumption is different from ours: it is that the loss of incorrectly labeled data decreases much more slowly than that of correctly labeled data during training. Due to this assumption, their methods require joint training of two distinct models (Han et al., 2018) or a nested training loop (Lokhande et al., 2020) to confirm which data show relatively slow convergence during training, which leads to substantially larger computational cost. In contrast, since our method focuses on the change of loss values under data augmentation, rather than during training, we can conduct the selective training efficiently by just utilizing data augmentation in each iteration.

Since the confidence of a pseudo label represents the probability that the pseudo label is correct, we can estimate how many data points we should select, based on the confidence, by calculating the expected number of mixed data generated from two correctly labeled data points. Specifically, when the average confidence within the unlabeled data equals c_ave, the number of data points to be selected can be determined as follows:

    n_L = ((B_L + c_ave B_U) / (B_L + B_U)) · B_L,    (2)
    n_U = min(B_L, ((B_L + c_ave B_U) / (B_L + B_U)) · c_ave B_U),    (3)

where n_L is for the data generated by mixing the labeled data with shuffled data, and n_U is for the data generated by mixing the unlabeled data with shuffled data. Here, to avoid too much contribution from the pseudo-labeled data, we restrict n_U to be at most B_L. Within this restriction, we can observe that, to perfectly balance n_L and n_U, B_U should be set to B_L/c_ave. However, c_ave cannot be estimated before training and can fluctuate during training. Therefore, for stable training, we instead set B_U = B_L/c_thr and keep it fixed during training. Finally, the total loss L to be minimized in our method is formulated as:

    L = (1/B_L) Σ_{i=1}^{n_L} l(x̃_{s[i]}^L, p̃_{s[i]}^L) + λ_U (1/B_L) Σ_{j=1}^{n_U} l(x̃_{t[j]}^U, p̃_{t[j]}^U),    (4)

where l is the standard cross-entropy loss, s and t are the sample indices sorted by loss in ascending order within each mini-batch, and λ_U is a hyper-parameter that balances the two terms. To improve the accuracy of the pseudo labels as well as their confidence, we can average the model's outputs over K augmentations when estimating pseudo labels, as done in (Berthelot et al., 2019). In that case, we apply MixConf to all augmented pseudo-labeled data, which results in K mini-batches, each containing B_U mixed data points. We therefore modify the second term on the right-hand side of Eq. (4) to take the average of the losses over all these mini-batches. In our experiments, we used K = 4 except for an ablation study.
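The following hedged sketch puts Eqs. (2) through (4) together; it is our own illustration, where losses_L and losses_U are the per-sample cross-entropy values of the MixConf-augmented labeled and pseudo-labeled batches:

import numpy as np

def selective_loss(losses_L, losses_U, c_ave, lambda_U=1.0):
    B_L, B_U = len(losses_L), len(losses_U)
    frac = (B_L + c_ave * B_U) / (B_L + B_U)
    n_L = int(round(frac * B_L))                        # Eq. (2)
    n_U = int(min(B_L, round(frac * c_ave * B_U)))      # Eq. (3)
    sel_L = np.sort(losses_L)[:n_L]                     # keep the smallest losses
    sel_U = np.sort(losses_U)[:n_U]
    return sel_L.sum() / B_L + lambda_U * sel_U.sum() / B_L   # Eq. (4)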
2.2 MIXCONF TO OBTAIN BETTER CALIBRATED MODELS

In the previous section, we assumed that the model is sufficiently confidence-calibrated, but deep neural networks are often over-confident in their predictions in general (Guo et al., 2017). This problem becomes more severe when training with a small-scale dataset, as we will show in our experiments. Consequently, it should also occur in our SSL setting, because only a small amount of labeled training data is available in the early stage of training. If the confidence is over-estimated, incorrect pseudo-labels are more likely to be selected for the loss computation, due to loose confidence thresholding and over-estimated (n_L, n_U), which would significantly degrade the performance of the trained model. To tackle this problem, we propose a novel data augmentation method, called MixConf, to obtain well-calibrated models even when the number of training data is small. MixConf basically follows the scheme of Mixup, which is known to contribute to model calibration (Thulasidasan et al., 2019), but is designed more carefully for confidence calibration.

Figure 2 shows an overview of MixConf. In a similar way to Mixup, MixConf randomly picks two samples {(x_0, p_0), (x_1, p_1)} from the given training dataset and generates a new training sample (x̃, p̃) by linearly interpolating these samples as follows:

    x̃ = λ_a x_0 + (1 − λ_a) x_1,    (5)
    p̃ = λ_b p_0 + (1 − λ_b) p_1,    (6)

where λ_a ∈ [0, 1] and λ_b ∈ [0, 1] denote the interpolation ratios for data and labels, respectively. Note that λ_a is not restricted to equal λ_b in MixConf, while λ_a = λ_b in Mixup. Since Mixup is not originally designed to produce confidence-calibrated models, we have to tackle the following two questions to obtain better-calibrated models with such a Mixup-like data augmentation method:

• How should we set the ratio for the data interpolation? (In the case of Mixup, λ_a is randomly sampled from a beta distribution.)
• How should we determine the labels of the interpolated data? (In the case of Mixup, λ_b is set equal to λ_a.)

We first tackle the second question to clarify what kind of property the generated samples should have. Then, we derive how to set λ_a and λ_b so that the generated samples have this property.
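A hedged sketch of the MixConf interpolation of Eqs. (5) and (6). Unlike Mixup, λ_a (data) and λ_b (labels) may differ; how λ_b is derived from λ_a is the subject of the derivation referenced above and is not given here, so the label_ratio mapping below is a placeholder assumption, not the paper's rule.

import numpy as np

def mixconf(x0, p0, x1, p1, alpha=1.0, label_ratio=lambda la: la):
    lam_a = np.random.beta(alpha, alpha)          # data interpolation ratio
    lam_b = label_ratio(lam_a)                    # label ratio; placeholder mapping
    x_tilde = lam_a * x0 + (1.0 - lam_a) * x1     # Eq. (5)
    p_tilde = lam_b * p0 + (1.0 - lam_b) * p1     # Eq. (6)
    return x_tilde, p_tilde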
The paper proposes selective training with pseudo labels. Specifically, the method selects the pseudo-labeled data with small loss after data augmentation and then uses only the selected data to train the model. The model computes the confidence of the pseudo labels, applies a threshold to ignore inaccurate pseudo labels, and uses the confidence to determine the number of selected samples. Moreover, MixConf, a variant of Mixup for data augmentation, is proposed to train a better confidence-calibrated model. Finally, experimental results on standard datasets show the effectiveness of the proposed method compared to state-of-the-art SSL methods.
Empirical Frequentist Coverage of Deep Learning Uncertainty Quantification Procedures
Uncertainty quantification for complex deep learning models is increasingly important as these techniques see growing use in high-stakes, real-world settings. Currently, the quality of a model's uncertainty is evaluated using point-prediction metrics such as the negative log-likelihood or the Brier score on held-out data. In this study, we provide the first large-scale evaluation of the empirical frequentist coverage properties of well-known uncertainty quantification techniques on a suite of regression and classification tasks. We find that, in general, some methods do achieve desirable coverage properties on in-distribution samples, but that coverage is not maintained on out-of-distribution data. Our results demonstrate the failings of current uncertainty quantification techniques as dataset shift increases, and establish coverage as an important metric in developing models for real-world applications.

1 INTRODUCTION

Predictive models based on deep learning have seen dramatic improvement in recent years (LeCun et al., 2015), which has led to widespread adoption in many areas. For critical, high-stakes domains such as medicine or self-driving cars, it is imperative that mechanisms are in place to ensure safe and reliable operation. Crucial to the notion of safe and reliable deep learning is the effective quantification and communication of predictive uncertainty to potential end-users of a system. Many approaches have recently been proposed, falling into two broad categories: ensembles and Bayesian methods. Ensembles (Lakshminarayanan et al., 2017) aggregate information from many individual models to provide a measure of uncertainty that reflects the ensemble's agreement about a given data point. Bayesian methods offer direct access to predictive uncertainty through the posterior predictive distribution, which combines prior knowledge with the observed data. Although conceptually elegant, calculating exact posteriors of even simple neural models is computationally intractable (Yao et al., 2019; Neal, 1996), and many approximations have been developed (Hernández-Lobato & Adams, 2015; Blundell et al., 2015; Graves, 2011; Pawlowski et al., 2017; Hernández-Lobato et al., 2016; Louizos & Welling, 2016; 2017). Though approximate Bayesian methods scale to modern-sized data and models, recent work has questioned the quality of the uncertainty provided by these approximations (Yao et al., 2019; Wenzel et al., 2020; Ovadia et al., 2019).

Previous work assessing the quality of uncertainty estimates has focused on calibration metrics and scoring rules such as the negative log-likelihood (NLL), expected calibration error (ECE), and Brier score. Here we provide a complementary perspective based on the notion of empirical coverage, a well-established concept in the statistical literature (Wasserman, 2013) that evaluates the quality of a predictive set or interval instead of a point prediction. Informally, coverage asks the question: if a model produces a predictive uncertainty interval, how often does that interval actually contain the observed value? Ideally, predictions on examples for which a model is uncertain would produce larger intervals and thus be more likely to cover the observed value. More formally, given features x_n ∈ R^d and a response y_n ∈ R, coverage is defined in terms of a set Ĉ_n(x) and a level α ∈ [0, 1].
The set Ĉ_n(x) is said to have coverage at the 1 − α level if, for all distributions P over R^d × R with (x, y) ∼ P, the following inequality holds:

    P{y_n ∈ Ĉ_n(x_n)} ≥ 1 − α    (1)

The set Ĉ_n(x) can be constructed using a variety of procedures. For example, in the case of simple linear regression, a prediction interval for a new point x_{n+1} can be constructed using a simple, closed-form solution.¹ Figure 1 provides a graphical depiction of coverage for two hypothetical regression models. A complementary metric to coverage is width, which is the size of the prediction interval or set. Width can provide a relative ranking of different methods: given two methods with the same level of coverage, we should prefer the method that provides intervals with smaller widths.

Contributions: In this study we investigate the empirical coverage properties of prediction intervals constructed from a catalog of popular uncertainty quantification techniques such as ensembling, Monte Carlo dropout, Gaussian processes, and stochastic variational inference. We assess the coverage properties of these methods on nine regression tasks and two classification tasks, with and without dataset shift. These tasks help us make the following contributions:

• We introduce coverage and width as natural and interpretable metrics for evaluating predictive uncertainty.
• We provide a comprehensive set of coverage evaluations on a suite of popular uncertainty quantification techniques.
• We examine how dataset shift affects these coverage properties.

2 BACKGROUND AND RELATED WORK

Obtaining Predictive Uncertainty Estimates. Several lines of work focus on improving approximations of the posterior of a Bayesian neural network (Graves, 2011; Hernández-Lobato & Adams, 2015; Blundell et al., 2015; Hernández-Lobato et al., 2016; Louizos & Welling, 2016; Pawlowski et al., 2017; Louizos & Welling, 2017). Yao et al. (2019) provide a comparison of many of these methods and highlight issues with common metrics of comparison, such as test-set log-likelihood and RMSE: good scores on these metrics often indicate that the model posterior happens to match the test data rather than the true posterior (Yao et al., 2019). Maddox et al. (2019) developed a technique to sample the approximate posterior from the first moment of SGD iterates. Wenzel et al. (2020) demonstrated that, despite advances in these approximations, there are still outstanding challenges with Bayesian modeling for deep networks.

¹ A well-known result from the statistics literature (cf. chapter 13 of Wasserman (2013)) is that the interval ŷ_{n+1} ± t_{n−2} s_y √(1 + 1/n + (x_{n+1} − x̄)²/((n − 1) s_x²)), where ŷ_{n+1} is the predicted value, t_{n−2} is the 1 − α/2 critical value of a t-distribution with n − 2 degrees of freedom, x̄ is the mean of x in the training data, and s_y, s_x are the standard deviations of y and x respectively, is such that (1) holds asymptotically. However, for more complicated models such as deep learning, closed-form solutions with coverage guarantees are unavailable, and constructing these intervals via the bootstrap (Efron, 1982) can be computationally infeasible or fail to provide the correct coverage (Chatterjee & Lahiri, 2011).
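As a concrete reading of the definition above, the following minimal sketch, our own illustration, computes the two quantities used throughout the paper, empirical coverage and average width, from per-sample interval bounds:

import numpy as np

def coverage_and_width(lower, upper, y):
    """Empirical coverage: the fraction of y_n inside [lower_n, upper_n];
    width: the average size of the intervals."""
    covered = (y >= lower) & (y <= upper)
    return covered.mean(), (upper - lower).mean()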
Alternative methods that do not rely on estimating a posterior over the weights of a model can also be used to provide uncertainty estimates. Gal & Ghahramani (2016), for instance, demonstrated that Monte Carlo dropout is related to a variational approximation of the Bayesian posterior implied by the dropout procedure. Lakshminarayanan et al. (2017) used ensembling of several neural networks to obtain uncertainty estimates. Guo et al. (2017) established that temperature scaling provides well-calibrated predictions on an i.i.d. test set. More recently, van Amersfoort et al. (2020) showed that the distance from the centroids in an RBF neural network yields high-quality uncertainty estimates. Liu et al. (2020) also leveraged the notion of distance (in this case, the distance from test to train examples) to obtain uncertainty estimates with their Spectral-normalized Neural Gaussian Processes.

Assessments of Uncertainty Properties under Dataset Shift. Ovadia et al. (2019) analyzed the effect of dataset shift on the accuracy and calibration of Bayesian deep learning methods. Their large-scale empirical study assessed these methods on standard datasets such as MNIST, CIFAR-10, ImageNet, and other non-image-based datasets. Additionally, they used translations, rotations, and corruptions (Hendrycks & Gimpel, 2017) of these datasets to quantify performance under dataset shift. They found stochastic variational inference (SVI) to be promising on simpler datasets such as MNIST and CIFAR-10, but more difficult to train on larger datasets. Deep ensembles had the most robust response to dataset shift.

Theoretical Coverage Guarantees. The Bernstein-von Mises theorem connects Bayesian credible sets and frequentist confidence intervals. Under certain conditions, Bayesian credible sets of level α are asymptotically frequentist confidence sets of level α, and thus have the same coverage properties. However, under model misspecification, these coverage properties no longer hold (Kleijn & van der Vaart, 2012). Barber et al. (2019) explored under what conditions conditional coverage guarantees can hold for arbitrary models (i.e., guarantees for P{y_n ∈ Ĉ_n(x) | x = x_n}, which are per-sample guarantees). They show that, even when these coverage properties are not required to hold for every possible distribution, there are provably no methods that can give such guarantees. By extension, no Bayesian deep learning method can provide conditional coverage guarantees.

3 METHODS

In both the regression and classification settings, we analyzed the coverage properties of prediction intervals and sets from five different approximate Bayesian and non-Bayesian approaches for uncertainty quantification. These include dropout (Gal & Ghahramani, 2016; Srivastava et al., 2015), ensembles (Lakshminarayanan et al., 2017), stochastic variational inference (Blundell et al., 2015; Graves, 2011; Louizos & Welling, 2016; 2017; Wen et al., 2018), and last-layer approximations of SVI and dropout (Riquelme et al., 2019). Additionally, we considered prediction intervals from linear regression and the 95% credible interval of a Gaussian process with the squared exponential kernel as baselines in regression tasks. For classification, we also considered temperature scaling (Guo et al., 2017) and the softmax output of vanilla deep networks (Hendrycks & Gimpel, 2017).

3.1 REGRESSION METHODS AND METRICS
We evaluated the coverage properties of these methods on nine large real-world regression datasets used as a benchmark in Hernández-Lobato & Adams (2015) and later by Gal & Ghahramani (2016). We used the training, validation, and testing splits publicly available from Gal and Ghahramani and performed nested cross-validation to find hyperparameters and to evaluate coverage, defined as the fraction of prediction intervals containing the true value in the test set. On the training sets, we ran 100 trials of a random search over the hyperparameter space of a multi-layer perceptron architecture with an Adam optimizer (Kingma & Ba, 2015) and selected hyperparameters based on RMSE on the validation set.

Each approach required a slightly different procedure to obtain a 95% prediction interval. For an ensemble of neural networks, we trained N = 40 vanilla networks and used the 2.5% and 97.5% quantiles as the boundaries of the prediction interval. For dropout and last-layer dropout, we made 200 predictions per sample and similarly discarded the top and bottom 2.5% quantiles. For SVI, last-layer SVI (LL SVI), and Gaussian processes, approximate posterior variances were available, which we used to calculate the prediction interval. We calculated 95% prediction intervals from linear regression using the closed-form solution. We then calculated two metrics:

• Coverage: A sample is considered covered if the true label is contained in this 95% prediction interval. We average over all samples in a test set to estimate the coverage of a method on that dataset.
• Width: The width is the average, over the test set, of the ranges of the 95% prediction intervals.

Coverage measures how often the true label is in the prediction region, while width measures how specific that prediction region is. Ideally, we would have high coverage with small widths on in-distribution data. As data becomes increasingly out of distribution, we would like coverage to remain high while width increases to indicate model uncertainty.
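A hedged sketch of the ensemble interval construction described above; this is our own illustration, where preds holds the predictions of N independently trained networks on the test set, with shape (N, num_test):

import numpy as np

def ensemble_interval(preds, alpha=0.05):
    """95% interval from the 2.5% and 97.5% quantiles across ensemble members."""
    lower = np.quantile(preds, alpha / 2, axis=0)
    upper = np.quantile(preds, 1 - alpha / 2, axis=0)
    return lower, upper

# Usage with the coverage/width sketch given earlier:
# lo, hi = ensemble_interval(preds)
# cov, width = coverage_and_width(lo, hi, y_test)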
The paper provides an evaluation of the reliability of the confidence levels of well-known uncertainty quantification techniques in deep learning, on classification and regression tasks. The question the authors try to answer empirically is: when a model produces an uncertainty interval at a given confidence level, how often does the observed value actually fall within that interval? This is conceptually similar to the recent slew of papers seeking to empirically evaluate the softmax calibration of deep models, where the question is how often the predicted probabilities of the winning class reflect the true probability of the correct answer; in this paper, the focus is instead on confidence levels and confidence intervals.
Teaching with Commentaries
1 INTRODUCTION

Training, regularising, and understanding complex neural network models is challenging. There remain central open questions on making training faster and more data-efficient (Kornblith et al., 2019; Raghu et al., 2019a;b), ensuring better generalisation (Zhang et al., 2016), and improving transparency and robustness (Bau et al., 2017; Madry et al., 2017). A promising approach for addressing these questions is learning to teach (Zhu, 2015), in which learned auxiliary information about a task is provided to a neural network to inform the training process and help downstream objectives. Examples include providing auxiliary training targets (Liu et al., 2019; Navon et al., 2020; Pham et al., 2020) and reweighting training examples to emphasise important datapoints (Fan et al., 2020; Jiang et al., 2018; Ren et al., 2018; Shu et al., 2019). Learning-to-teach approaches have achieved promising results in vision and language applications (Jiang et al., 2018; Ren et al., 2018; Shu et al., 2019; Hu et al., 2019) using a handful of specific modifications to the training process. In this paper, we take steps towards generalising these approaches, introducing a flexible and effective learning-to-teach framework using commentaries. Commentaries represent learned meta-information helpful for training a model on a task, and once learned, such commentaries can be reused as is to improve the training of new models. We demonstrate that commentaries can be used for applications ranging from speeding up training to gaining insights into the neural network model. Specifically, our contributions are:

1. We formalise the notion of commentaries, providing a unified framework for learning meta-information that can be used to improve network training and examine model learning.
2. We present gradient-based methods to learn commentaries by optimising a network's validation loss, leveraging recent work in implicit differentiation to scale to larger models.
3. We use commentaries to define example-weighting curricula, a common method of teaching neural networks. We show that these learned commentaries hold interpretable insights, lead to speedups in training, and improve performance on few-shot learning tasks.
4. We define data augmentation policies with label-dependent commentaries, and obtain insights into the design of effective augmentation strategies and improved performance on benchmark tasks as compared to baselines.
5. We parameterise commentaries as attention masks to find important regions of images. Through qualitative and quantitative evaluation, we show these masks identify salient image regions and can be used to improve the robustness of neural networks to spurious background correlations.
6. We show that learned commentaries can generalise: when training new models, reusing learned commentaries can lead to learning speed and performance improvements. This suggests a use case for commentaries: being stored with a dataset and leveraged to improve the training of new models.

2 TEACHING WITH COMMENTARIES

Definition: We define a commentary to be learned information helpful for (i) training a model on a task or (ii) providing insights on the learning process. We envision that commentaries, once learned, could be stored alongside a dataset and reused as is to assist in the training of new models.
Appendix A explores a simple instantiation of commentaries for Celeb-A (Liu et al., 2015), to provide intuition about the structures that commentaries can encode. Formally, let t(x, y, i; φ) denote a commentary that is a function of a data point x, a prediction target y, and the iteration of training i, with parameters φ. The commentary may be represented in a tabular fashion for every combination of input arguments, or by a neural network that takes these arguments as inputs. The commentary is used to train a student network n(x; θ) with parameters θ.

2.1 LEARNING COMMENTARIES

We now describe algorithms to learn commentaries.¹ Throughout, we denote the training set by D_T, the validation set by D_V, and the loss function (e.g., cross-entropy) by L. With θ denoting the parameters of the student network and φ denoting the commentary parameters, we let θ̂, φ̂ be the respective optimised parameters. We seek to find φ̂ such that the student network's validation loss, L_V, is minimised. As the commentary is used during the training of the student network, L_V implicitly depends on φ, enabling the use of gradient-based optimisation algorithms to find φ̂.

¹ Code at https://github.com/googleinterns/commentaries

Algorithm 1: Backpropagation Through Training. When student network training has a small memory footprint, we optimise commentary parameters by iterating the following process, detailed in Algorithm 1: (1) train a student and store the computation graph during training; (2) compute the student's validation loss; (3) calculate the gradient of this loss w.r.t. the commentary parameters by backpropagating through training; (4) update the commentary parameters using gradient descent. By optimising the commentary parameters over the entire trajectory of student learning, we encourage the commentary to be effective when used in the training of new student networks. This supports the goal of the commentary being stored with the dataset and reused in future model learning.

Algorithm 2: Large-Scale Commentary Learning with Implicit Differentiation. When training the student model has a large memory footprint, backpropagating through training to obtain exact commentary parameter gradients is too memory-expensive. We therefore leverage the Implicit Function Theorem (IFT) and an efficient inverse-Hessian approximation to obtain approximate gradients, following Lorraine et al. (2020). The gradient of the validation loss w.r.t. the commentary parameters can be expressed as:

    ∂L_V/∂φ = ∂L_V/∂θ̂ × ∂θ̂/∂φ.    (3)

The first term on the right-hand side of equation 3 is simple to compute, but the second term is expensive. Under fixed-point and regularity assumptions on the student and commentary parameters (θ̂(φ), φ), the IFT allows expressing this second term ∂θ̂/∂φ as the following product:

    ∂θ̂/∂φ = −[∂²L_T/(∂θ ∂θ^T)]^{−1} × ∂²L_T/(∂θ ∂φ^T) |_{θ̂(φ)},    (4)

i.e., a product of an inverse Hessian and a matrix of mixed partial derivatives. Following Lorraine et al. (2020), we efficiently approximate this product using a truncated Neumann series and implicit vector-Jacobian products. Leveraging this approximation then yields a second method for commentary learning, described in Algorithm 2. Since a single term in the Neumann series is sufficient for learning, each iteration of this algorithm has similar time complexity to a single iteration of training. In this method, commentary parameters are learned jointly with student parameters, avoiding training a single student model multiple times. This approach therefore scales to millions of commentary parameters and large student models (Hataya et al., 2020; Lorraine et al., 2020). However, since the commentary is not directly optimised over the entire trajectory of learning, its generalisability to new models is not ensured. We examine this in our experiments, demonstrating that commentaries learned in this manner can indeed generalise to training new student networks.
Algorithm 1 Commentary Learning through Backpropagation Through Training
1: Initialise commentary parameters φ
2: for t = 1, ..., T meta-training steps do
3:    Initialise student network n(x; θ) with parameters θ_0
4:    Train the student network with N steps of gradient descent to optimise:

        L_T(θ, φ) = E_{x,y∼D_T}[L̃(n(x; θ), t(·; φ), y)],    (1)

      where L̃ is a loss function adjusted from L to incorporate the commentary, and L_T(θ, φ) is the expected adjusted loss over the training data.
      Output: θ̂, the optimised parameters of the student network (implicitly a function of φ, θ̂(φ)).
5:    Compute the validation loss:

        L_V(φ) = E_{x,y∼D_V}[L(n(x; θ̂(φ)), y)]    (2)

6:    Compute ∂L_V(φ)/∂φ by backpropagating through the N steps of student training, and update φ.
7: end for
8: Output: φ̂, the optimised parameters of the commentary.

Algorithm 2 Commentary Learning through Implicit Differentiation
1: Initialise commentary parameters φ and student network parameters θ
2: for t = 1, ..., M do
3:    Compute the student network's training loss, L_T(θ, φ), equation 1.
4:    Compute the gradient of this loss w.r.t. the student parameters θ.
5:    Perform a single gradient descent update on the parameters to obtain θ̂ (implicitly a function of φ, θ̂(φ)).
6:    Compute the student network's validation loss, L_V(φ), equation 2.
7:    Compute ∂L_V/∂θ̂.
8:    Approximately compute ∂θ̂/∂φ with equation 4, using a truncated Neumann series with a single term and implicit vector-Jacobian products (Lorraine et al., 2020).
9:    Compute the overall derivative ∂L_V/∂φ using steps (7) and (8), and update φ.
10:   Set θ ← θ̂.
11: end for
12: Output: φ̂, the optimised parameters of the commentary.
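As a concrete reading of step 8 of Algorithm 2, the following is a minimal NumPy sketch, our own illustration with an explicit toy Hessian rather than the implicit vector-Jacobian products used at scale, of the truncated Neumann-series approximation of an inverse-Hessian-vector product:

import numpy as np

def neumann_inverse_hvp(hvp, v, lr=0.1, terms=5):
    """Approximate H^{-1} v via lr * sum_{j=0}^{terms} (I - lr*H)^j v,
    valid when the eigenvalues of lr*H lie in (0, 2)."""
    p = v.copy()
    acc = v.copy()
    for _ in range(terms):
        p = p - lr * hvp(p)   # p <- (I - lr*H) p
        acc = acc + p
    return lr * acc

# Toy check against a direct solve:
H = np.array([[2.0, 0.3], [0.3, 1.0]])
v = np.array([1.0, -1.0])
print(neumann_inverse_hvp(lambda p: H @ p, v, lr=0.4, terms=50))
print(np.linalg.solve(H, v))  # the two should be close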
3 COMMENTARIES FOR EXAMPLE WEIGHTING CURRICULA

We now explore our first main application of commentaries: encoding a separate weight for each training example at each training iteration. Since the commentaries are a function of the training iteration, they can encode curriculum structure, so we refer to them as curriculum commentaries. We specify these weights using a commentary neural network (or teacher network) t(x, i; φ) → [0, 1] that produces a weight for every training example at every iteration of training of the student network. When training a student network, using the notation of §2.1, the commentary is incorporated into the training loss as: L̃ = t(x, i; φ) · L(n(x; θ), y), where L(·) is the original loss function for the task. The validation loss is unweighted.

3.1 SYNTHETIC EXAMPLE: ROTATED MNIST DIGITS

We first learn example-weighting curriculum commentaries on a synthetic MNIST binary classification problem. Each example in the dataset is a rotated MNIST digit '1', with a variable rotation angle that defines the class. We generate two datasets: the non-overlapping dataset and the overlapping dataset. In the non-overlapping dataset, the rotation angle for each example from class 1 and class 0 is drawn from the non-overlapping distributions Uniform[15, 45] and Uniform[−45, −15], respectively. In the overlapping dataset, the rotation angles are drawn from the overlapping distributions Uniform[−5, 30] and Uniform[−30, 5], respectively (Figure 1). We use two-block CNNs as both the commentary neural network and the student network. The commentary network takes as input the image and the iteration of student training, and outputs a weight for each example in the batch. We learn commentary parameters by backpropagating through student training (Algorithm 1, §2.1), and use 500 gradient steps for the inner optimisation (i.e., N = 500). Implementation is with the higher library (Grefenstette et al., 2019). Further details are in Appendix B.1.

Results: Figure 1 visualises the two datasets and plots the learned example weights as a function of rotation at iteration 500 of student training. When the classes do not overlap (left), the example weights are highest for examples near the decision boundary (small rotation magnitude). When the classes do overlap (right), the more representative examples further from the boundary are upweighted and the ambiguous examples in the overlap region are downweighted: a sensible result. We perform further analysis of the learned example-weighting curriculum in Appendix B.1, demonstrating that the learned curricula in both cases are meaningful. Overall, these results demonstrate that the learned commentaries capture interesting and intuitive structure.
This paper proposes a general framework for boosting CNN performance on different tasks by using 'commentaries' to learn meta-information. The obtained meta-information can also be used for other purposes, such as masking objects against spurious backgrounds and capturing the similarities among classes. The commentary module is incorporated into standard networks and iteratively optimized together with the host network via the proposed objective. To effectively optimize both the commentary and the standard network, the paper adopts techniques including the implicit function theorem and efficient inverse-Hessian approximations.
Computational Separation Between Convolutional and Fully-Connected Networks
1 INTRODUCTION

Convolutional neural networks (LeCun et al., 1998; Krizhevsky et al., 2012) achieve state-of-the-art performance on every possible task in computer vision. However, while the empirical success of convolutional networks is indisputable, the advantage of using them is not well understood from a theoretical perspective. Specifically, we consider the following fundamental question:

Why do convolutional networks (CNNs) perform better than fully-connected networks (FCNs)?

Clearly, when considering expressive power, FCNs have a big advantage. Since convolution is a linear operation, any CNN can be expressed using a FCN, whereas FCNs can express a strictly larger family of functions. So, any advantage of CNNs due to expressivity can be leveraged by FCNs as well. Therefore, expressive power does not explain the superiority of CNNs over FCNs.

There are several possible explanations for the superiority of CNNs over FCNs: parameter efficiency (and hence lower sample complexity), weight sharing, and a locality prior. The main result of this paper is arguing that locality is a key factor, by proving a computational separation between CNNs and FCNs based on locality. But before that, let us discuss the other possible explanations.

First, we observe that CNNs seem to be much more efficient in utilizing their parameters. A FCN needs to use a greater number of parameters compared to an equivalent CNN: each neuron of a CNN is limited to a small receptive field, and moreover, many of the parameters of the CNN are shared. From classical results in learning theory, using a large number of parameters may result in inferior generalization. So, can the advantage of CNNs be explained simply by counting parameters? To answer this question, we observe the performance of CNN- and FCN-based architectures of various widths and depths trained on the CIFAR-10 dataset. For each architecture, we observe the final test accuracy versus the number of trainable parameters. The results are shown in Figure 1. As can be seen, CNNs have a clear advantage over FCNs, regardless of the number of parameters used. As is often observed, a large number of parameters does not hurt the performance of neural networks, and so parameter efficiency cannot explain the advantage of CNNs. This is in line with various theoretical works on the optimization of neural networks, which show that over-parameterization is beneficial for the convergence of gradient-descent (e.g., Du et al. (2018); Soltanolkotabi et al. (2018); Li & Liang (2018)).

The superiority of CNNs can also be attributed to the extensive weight sharing between the different convolutional filters. Indeed, it has been previously shown that weight sharing is important for the optimization of neural networks (Shalev-Shwartz et al., 2017b). Moreover, the translation-invariant nature of CNNs, which relies on weight sharing, is often observed to be beneficial in various signal processing tasks (Kauderer-Abrams, 2017; Kayhan & Gemert, 2020). So, how much does weight sharing contribute to the superiority of CNNs over FCNs? To understand the effect of weight sharing on the behavior of CNNs, it is useful to study locally-connected network (LCN) architectures, which are similar to CNNs but have no weight sharing between the kernels of the network.
While CNNs are far more popular in practice (also due to the fact that they are much more efficient in terms of model size), LCNs have also been used in different contexts (e.g., Bruna et al. (2013); Chen et al. (2015); Liu et al. (2020)). It has recently been observed that in some cases the performance of LCNs is on par with CNNs (Neyshabur, 2020). So, even if weight sharing explains some of the advantage of CNNs, it clearly does not tell the whole story.

Finally, a key property of CNN architectures is their strong utilization of locality in the data. Each neuron in a CNN is limited to a local receptive field of the input, hence encoding a strong locality bias. In this work we demonstrate how CNNs can leverage the local structure of the input, giving them a clear advantage in terms of computational complexity. Our results hint that locality is the principal property that explains the advantage of using CNNs.

Our main result is a computational separation between CNNs and FCNs. To show this result, we introduce a family of functions that have a very strong local structure, which we call k-patterns. A k-pattern is a function that is determined by k consecutive bits of the input. We show that, for inputs of n bits, when the target function is a (log n)-pattern, training a CNN of polynomial size with gradient-descent achieves small error in polynomial time. However, gradient-descent will fail to learn (log n)-patterns when training a FCN of polynomial size.

1.1 RELATED WORK

It has been empirically observed that CNN architectures perform much better than FCNs on computer vision tasks, such as digit recognition and image classification (e.g., Urban et al. (2017); Driss et al. (2017)). While some works have applied various techniques to improve the performance of FCNs (Lin et al. (2015); Fernando et al. (2016); Neyshabur (2020)), there is still a gap between the performance of CNNs and FCNs, where the former give very good performance "out-of-the-box". The focus of this work is to understand, from a theoretical perspective, why CNNs give superior performance when trained on inputs with strong local structure.

Various theoretical works show the advantage of architectures that leverage local and hierarchical structure. The work of Poggio et al. (2015) shows the advantage of using deep hierarchical models over wide and shallow functions. These results are extended in Poggio et al. (2017), showing an exponential gap between deep and shallow networks when approximating locally compositional functions. The works of Mossel (2016); Malach & Shalev-Shwartz (2018) study the learnability of deep hierarchical models. The work of Cohen et al. (2017) analyzes the expressive efficiency of convolutional networks via hierarchical tensor decomposition. While all these works show that CNNs are indeed powerful due to their hierarchical nature and their efficient utilization of local structure, they do not explain why these models are superior to fully-connected models.

There are a few works that provide a theoretical analysis of CNN optimization. The works of Brutzkus & Globerson (2017); Du et al. (2018) show that gradient-descent can learn a shallow CNN with a single filter, under various distributional assumptions. The work of Zhang et al. (2017) shows learnability of a convex relaxation of convolutional networks.
While these works focus on computational properties of learning CNNs, as we do in this work, they do not compare CNNs to FCNs, but focus only on the behavior of CNNs. The works of Cohen & Shashua (2016); Novak et al. (2018) study the implicit bias of simplified CNN models. However, these results are focused on generalization properties of CNNs, and not on the computational efficiency of the optimization.

2 DEFINITIONS AND NOTATIONS

Let X = {±1}^n be our instance space, and let Y = {±1} be the label space. Throughout the paper, we focus on learning a binary classification problem using the hinge loss: ℓ(ŷ, y) = max{1 − yŷ, 0}. Given some distribution D over X, some target function f : X → Y and some hypothesis h : X → Y, we define the loss of h with respect to f on the distribution D by:

    L_{f,D}(h) = E_{x∼D}[ℓ(h(x), f(x))]

The goal of a supervised learning algorithm is, given access to examples sampled from D and labeled by f, to find a hypothesis h that minimizes L_{f,D}(h). We focus on the gradient-descent (GD) algorithm: given some parametric hypothesis class H = {h_w : w ∈ R^q}, gradient-descent starts with some (randomly initialized) hypothesis h_{w^(0)} and, for some learning rate η > 0, updates:

    w^(t) = w^(t−1) − η ∇_w L_{f,D}(h_{w^(t−1)})

We compare the behavior of gradient-descent when learning two possible neural network architectures: a convolutional network (CNN) and a fully-connected network (FCN).

Definition 1. A convolutional network h_{u,W,b} is defined as follows:

    h_{u,W,b}(x) = Σ_{j=1}^{n−k} ⟨u^(j), σ(W x_{j...j+k−1} + b)⟩

for an activation function σ, with kernel W ∈ R^{q×k}, bias b ∈ R^q and readout layer u^(1), ..., u^(n−k) ∈ R^q. Note that this is a standard depth-2 CNN with kernel size k, stride 1 and q filters.

Definition 2. A fully-connected network h_{u,w,b} is defined as follows:

    h_{u,w,b}(x) = Σ_{i=1}^{q} u_i σ(⟨w^(i), x⟩ + b_i)

for an activation function σ, first layer w^(1), ..., w^(q) ∈ R^n, bias b ∈ R^q and second layer u ∈ R^q.

We demonstrate the advantage of CNNs over FCNs by observing a problem that can be learned using CNNs, but is hard to learn using FCNs. We call this problem the k-pattern problem:

Definition 3. A function f : X → Y is a k-pattern if, for some g : {±1}^k → Y and index j*:

    f(x) = g(x_{j*...j*+k−1})

Namely, a k-pattern is a function that depends only on a small pattern of consecutive bits of the input. The k-pattern problem is the problem of learning k-patterns: for some k-pattern f and some distribution D over X, given access to D labeled by f, find a hypothesis h with L_{f,D}(h) ≤ ε.

3 CNNS EFFICIENTLY LEARN (log n)-PATTERNS

The main result in this section shows that gradient-descent can learn k-patterns when training convolutional networks for poly(2^k, n) iterations, and when the network has poly(2^k, n) neurons:

Theorem 4. Assume we uniformly initialize W^(0) ∼ {±1/k}^{q×k}, b_i = 1/k − 1 and u^(0,j) = 0 for every j. Assume the activation σ satisfies |σ| ≤ c and |σ'| ≤ 1, for some constant c. Fix some δ > 0, some k-pattern f and some distribution D over X.
3 CNNS EFFICIENTLY LEARN (log n)-PATTERNS. The main result in this section shows that gradient-descent can learn k-patterns when training convolutional networks for poly(2^k, n) iterations, and when the network has poly(2^k, n) neurons:

Theorem 4. Assume we uniformly initialize W^{(0)} ∼ {±1/k}^{q×k}, b_i = 1/k − 1 and u^{(0,j)} = 0 for every j. Assume the activation σ satisfies |σ| ≤ c, |σ′| ≤ 1, for some constant c. Fix some δ > 0, some k-pattern f and some distribution D over X. Then, if q > 2^{k+3} log(2^k/δ), with probability at least 1 − δ over the initialization, when training a convolutional network h_{u,W,b} using gradient descent with η = √n/(√q·T) we have:

(1/T) Σ_{t=1}^{T} L_{f,D}(h_{u^{(t)},W^{(t)},b}) ≤ 2cn²k²2^k/q + 2(2^k·k)²/√(qn) + c²n^{1.5}√q/T

Before we prove the theorem, observe that the above immediately implies that when k = O(log n), gradient-descent can efficiently learn to solve the k-pattern problem when training a CNN:

Corollary 5. Let k = O(log n). Then, running GD on a CNN with q = O(ε^{−2}n³ log² n) neurons for T = O(ε^{−2}n³ log n) iterations, using a sample S ∼ D of size O(ε^{−2}nkq log(nkq/δ)), learns the k-pattern problem up to accuracy ε w.p. ≥ 1 − δ.

Proof. Sample S ∼ D, and let D̂ be the uniform distribution over S. Then, from Theorem 4 and the choice of q and T, there exists t ∈ [T] with L_{f,D̂}(h_{u^{(t)},W^{(t)},b}) ≤ ε/2, i.e., GD finds a hypothesis with train loss at most ε/2. Now, using the fact that the VC dimension of depth-2 ReLU networks with W weights is O(W log W) (see Bartlett et al. (2019)), we can bound the generalization gap by ε/2.

To prove Theorem 4, we show that, for a large enough CNN, the k-pattern problem becomes linearly separable after applying the first layer of the randomly initialized CNN:

Lemma 6. Assume we uniformly initialize W ∼ {±1/k}^{q×k} and b_i = 1/k − 1. Fix some δ > 0. Then if q > 2^{k+3} log(2^k/δ), w.p. ≥ 1 − δ over the choice of W, for every k-pattern f there exist u*^{(1)}, …, u*^{(n−k)} ∈ R^q with ‖u*^{(j*)}‖ ≤ 2^{k+1}k/√q and u*^{(j)} = 0 for j ≠ j*, s.t. h_{u*,W,b}(x) = f(x).

Proof. Fix some z ∈ {±1}^k; then for every w^{(i)} ∼ {±1/k}^k we have P[sign(w^{(i)}) = z] = 2^{−k}. Denote by J_z ⊆ [q] the subset of indexes satisfying sign(w^{(i)}) = z for every i ∈ J_z, and note that E_W |J_z| = q·2^{−k}. From the Chernoff bound:

P[|J_z| ≤ q·2^{−k}/2] ≤ e^{−q·2^{−k}/8} ≤ δ·2^{−k}

by choosing q > 2^{k+3} log(2^k/δ). So, using the union bound, w.p. at least 1 − δ, for every z ∈ {±1}^k we have |J_z| ≥ q·2^{−k−1}. By the choice of b_i we have σ(⟨w^{(i)}, z⟩ + b_i) = (1/k)·1{sign(w^{(i)}) = z}.

Now, fix some k-pattern f, where f(x) = g(x_{j*…j*+k−1}). For every i ∈ J_z we choose u*^{(j*)}_i = (k/|J_z|)·g(z), and u*^{(j)} = 0 for every j ≠ j*. Therefore, we get:

h_{u*,W,b}(x) = Σ_{j=1}^{n−k} ⟨u*^{(j)}, σ(W x_{j…j+k−1} + b)⟩ = Σ_{z∈{±1}^k} Σ_{i∈J_z} u*^{(j*)}_i σ(⟨w^{(i)}, x_{j*…j*+k−1}⟩ + b_i) = Σ_{z∈{±1}^k} 1{z = x_{j*…j*+k−1}}·g(z) = g(x_{j*…j*+k−1}) = f(x)

Note that by definition of u*^{(j*)} we have ‖u*^{(j*)}‖² = Σ_{z∈{±1}^k} Σ_{i∈J_z} k²/|J_z|² ≤ 4(2^k·k)²/q.

Comment 7. Admittedly, the initialization assumed above is non-standard, but it is favorable for the analysis. A similar result can be shown for more natural initializations (e.g., the normal distribution), using known results from random features analysis (for example, Bresler & Nagaraj (2020)).
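As a sanity check on the construction in Lemma 6, the short sketch below verifies numerically that, with W ∼ {±1/k}^{q×k} and b_i = 1/k − 1, a ReLU unit outputs exactly 1/k on the window matching its sign pattern and 0 otherwise, and that each sign pattern is hit by roughly q·2^{−k} filters, as the Chernoff step requires. The values of k and δ are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
k, delta = 4, 0.05
q = int(2 ** (k + 3) * np.log(2 ** k / delta)) + 1   # q > 2^{k+3} log(2^k / delta)
W = rng.choice([-1.0 / k, 1.0 / k], size=(q, k))
b = 1.0 / k - 1.0

z = rng.choice([-1.0, 1.0], size=k)                  # a window in {+-1}^k
acts = np.maximum(W @ z + b, 0.0)                    # sigma = ReLU
match = np.all(np.sign(W) == z, axis=1)              # filters with sign(w_i) = z
assert np.allclose(acts[match], 1.0 / k)             # fires at 1/k on a match
assert np.allclose(acts[~match], 0.0)                # silent otherwise

# |J_z| concentrates around its mean q * 2^{-k}.
print(match.sum(), q * 2.0 ** (-k))
```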
From Lemma 6 and known results on learning linear classifiers with gradient-descent, solving the k-pattern problem can be achieved by optimizing the second layer of a randomly initialized CNN. However, since in gradient-descent we optimize both layers of the network, we need a more refined analysis to show that full gradient-descent learns to solve the problem. We follow the scheme introduced in Daniely (2017), adapting it to our setting. We start by showing that the first layer of the network does not deviate from the initialization during training:

Lemma 8. We have ‖u^{(T,j)}‖ ≤ ηT√q for all j ∈ [n−k], and ‖W^{(0)} − W^{(T)}‖ ≤ cη²T²n√(qk).

We can now bound the difference in the loss when the weights of the first layer change during the training process:

Lemma 9. For every u* we have:

L_{f,D}(h_{u*,W^{(T)},b}) − L_{f,D}(h_{u*,W^{(0)},b}) ≤ cη²T²nk√q · Σ_{j=1}^{n−k} ‖u*^{(j)}‖

The proofs of Lemma 8 and Lemma 9 are shown in the appendix. Finally, we use the following result on the convergence of online gradient-descent to show that gradient-descent converges to a good solution. The proof of the theorem is given in Shalev-Shwartz et al. (2011), with an adaptation to a similar setting in Daniely & Malach (2020).

Theorem 10. (Online Gradient Descent) Fix some η, and let f_1, …, f_T be some sequence of convex functions. Fix some θ_1, and update θ_{t+1} = θ_t − η∇f_t(θ_t). Then for every θ* the following holds:

(1/T) Σ_{t=1}^T f_t(θ_t) ≤ (1/T) Σ_{t=1}^T f_t(θ*) + (1/(2ηT))‖θ*‖² + ‖θ_1‖·(1/T) Σ_{t=1}^T ‖∇f_t(θ_t)‖ + η·(1/T) Σ_{t=1}^T ‖∇f_t(θ_t)‖²

Proof of Theorem 4. From Lemma 6, with probability at least 1 − δ over the initialization, there exist u*^{(1)}, …, u*^{(n−k)} ∈ R^q with ‖u*^{(1)}‖ ≤ 2^{k+1}k/√q and u*^{(j)} = 0 for j > 1 such that h_{u*,W^{(0)},b}(x) = f(x), and so L_{f,D}(h_{u*,W^{(0)},b}) = 0. Using Theorem 10, since L_{f,D}(h_{u,W,b}) is convex with respect to u, we have:

(1/T) Σ_{t=1}^T L_{f,D}(h_{u^{(t)},W^{(t)},b}) ≤ (1/T) Σ_{t=1}^T L_{f,D}(h_{u*,W^{(t)},b}) + (1/(2ηT)) Σ_{j=1}^{n−k} ‖u*^{(j)}‖² + η·(1/T) Σ_{t=1}^T ‖(∂/∂u) L_{f,D}(h_{u^{(t)},W^{(t)},b})‖² ≤ (1/T) Σ_{t=1}^T L_{f,D}(h_{u*,W^{(t)},b}) + 2(2^k·k)²/(qηT) + c²ηnq = (*)

Using Lemma 9 we have:

(*) ≤ (1/T) Σ_{t=1}^T L_{f,D}(h_{u*,W^{(0)},b}) + cη²T²nk√q · Σ_{j=1}^{n−k} ‖u*^{(j)}‖ + 2(2^k·k)²/(qηT) + c²ηnq ≤ 2cη²T²nk²2^k + 2(2^k·k)²/(qηT) + c²ηnq

Now, choosing η = √n/(√q·T) we get the required bound.
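Theorem 10 is easy to exercise numerically. The sketch below runs online gradient descent on a toy stream of convex absolute-error losses (the loss family, noise level, and constants are illustrative assumptions) and reports the average loss, which stays close to the comparator's average loss plus the O(1/(ηT)) and O(η) terms of the bound.

```python
import numpy as np

rng = np.random.default_rng(2)
d, T, eta = 10, 2000, 0.01
theta_star = rng.normal(size=d)                 # comparator theta*
theta = np.zeros(d)                             # theta_1 = 0
avg_loss = 0.0
for t in range(T):
    a, eps = rng.normal(size=d), 0.1 * rng.normal()
    y = a @ theta_star + eps
    avg_loss += abs(a @ theta - y) / T          # f_t(theta_t) = |<a_t,theta> - y_t|
    grad = np.sign(a @ theta - y) * a           # a subgradient of f_t at theta_t
    theta -= eta * grad                         # theta_{t+1} = theta_t - eta grad
print(avg_loss)                                 # approaches E|eps| ~ 0.08 plus slack
```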
It is well-known that neural networks (NNs) perform very well in various areas; in particular, in computer vision, convolutional neural networks perform very well. Although convolutional neural networks (CNNs) are restricted in their architecture compared to fully-connected NNs (FCNNs), since they only allow local, nearest-neighbour connections, the reason for their superior performance is unclear. In this paper the authors answer the following fundamental question in the affirmative: can one formally show that CNNs are better than FCNNs for a specific learning task? In particular, going beyond merely giving an example, they show that an interesting property called locality, rather than other factors such as parameter efficiency and weight sharing, is the reason for the superior performance.
SP:19e2493d7bdb4be73c3b834affdb925201243aef
Latent Convergent Cross Mapping
1 INTRODUCTION. Inferring the right causal model of a physical phenomenon is at the heart of scientific inquiry. It is fundamental to how we understand the world around us and to predicting the impact of future interventions (Pearl, 2009). Correctly inferring causal pathways helps us reason about a physical system, anticipate its behavior in previously unseen conditions, design changes to achieve some objective, or synthesize new systems with desirable behaviors. As an example, in medicine, causality inference could allow predicting whether a drug will be effective for a specific patient, or, in climatology, assessing human activity as a causal factor in climate change.

Causal mechanisms are best uncovered by making use of interventions, because this framework leads to an intuitive and robust notion of causality. However, there is a significant need to identify causal dependencies when only observational data is available, because such data is more readily available, as it is more practical and less costly to collect (e.g., relying on observational studies when interventional clinical trials are not yet available). However, real-world data arising from less controlled environments than, for instance, clinical trials poses many challenges for analysis. Confounding and selection bias come into play, which bias standard statistical estimators. If no intervention is possible, some causal configurations cannot be identified. Importantly, with real-world data comes the major issue of missing values. In particular, when collecting longitudinal data, the resulting time series are often sporadic: sampling is irregular in time and across dimensions, leading to varying time intervals between observations of a given variable and typically multiple missing observations at any given time. This problem is ubiquitous in various fields, such as healthcare (De Brouwer et al., 2019), climate science (Thomson, 1990), or astronomy (Cuevas-Tello et al., 2010).

A key problem in causal inference is to assess whether one temporal variable is causing another or is merely correlated with it. From assessing causal pathways for neural activity (Roebroeck et al., 2005) to ecology (Sugihara et al., 2012) or healthcare, it is a necessary step to unravel underlying generating mechanisms. A common way to infer the causal direction between two temporal variables is to use Granger causality (Granger, 1969), which defines "predictive causality" in terms of the predictability of one time series from the other. A key requirement of Granger causality is then separability (i.e., that information about causes is not contained in the caused variable itself). This assumption holds in purely stochastic linear systems, but fails in more general cases (such as weakly coupled nonlinear dynamical systems) (Sugihara et al., 2012). To address this nonseparability issue, Sugihara et al. (2012) introduced the Convergent Cross Mapping (CCM) method, which is based on the theory of chaotic dynamical systems, particularly on Takens' theorem. This method has been applied successfully in various fields such as ecology, climatology (Wang et al., 2018), and neuroscience (Schiecke et al., 2015). However, as the method relies on embedding the time series under study with time lags, it is highly sensitive to missing values and usually requires long uninterrupted time series.
This method is thus not applicable in settings with repeated short sporadic time series, despite their occurrence in many practical situations. To address this important limitation, we propose to learn the causal dependencies between time series by checking the existence of convergent cross mappings between latent processes of those time series. Using a joint model across all segments of sporadically observed time series and forcing the model to learn the inherent dynamics of the data, we show that our method can detect causal relationships from short and sporadic time series, without computing delay embeddings. To learn a continuous-time latent representation of the system's state-space, we leverage GRU-ODE-Bayes (De Brouwer et al., 2019), a recently introduced filtering method that extends the Neural ODE model (Chen et al., 2018). Importantly for causal inference, the filtering nature of the model makes sure no future information can leak into the past. We then check the existence of continuous maps between the learnt latent representations and infer the causal direction accordingly. In a series of increasingly challenging test cases, our method accurately detects the correct causal dependencies with high confidence, even when fed very few observations, and outperforms competing methods such as multi-spatial CCM or CCM with multivariate Gaussian process interpolation.

2 RELATED WORK. CCM to address failure of Granger causality. Granger causality (Granger, 1969) provided the first significant framework to infer causal dependencies from time series. Relying on predictability between dynamical systems, it was extended to account for different limitations, such as nonlinearity (Chen et al., 2004) or instantaneous relationships (Schiatti et al., 2015). However, the assumption of separability of information between causative and caused variables leads to the failure of the Granger paradigm for a significant number of time series coupling scenarios (Sugihara et al., 2012) (see Appendix D for a revealing worked-out example). Convergent Cross Mapping, a technique based on nonlinear state space reconstruction, was introduced to tackle this issue (Sugihara et al., 2012). Recently, several works have proposed extensions of CCM, such as the extended CCM, to address issues such as synchrony (Ye et al., 2015) or to improve the discrimination of the confounding case (Benkő et al., 2018). Synchrony occurs when one time series can be expressed as a function of the other (e.g., Y(t) = φ(X(t))) and the attractors of both dynamical systems become homeomorphic to each other (Rulkov et al., 1995). This occurs when the coupling between two chaotic systems is too strong. Confounding, on the other hand, occurs when two variables are causally driven by a third one. In general, we say that X confounds the relation between Y and Z if X causes both Y and Z. Huang et al. (2020) also proposed to predict the driving time series directly from the driven one with reservoir computing, bypassing the delay embedding step and making it more robust to noise. However, those methods still require long, regularly sampled time series.

Causality for short or sporadic time series. Short time series are very common in practice and there has been some work proposing to learn causality from short time series relying on state space reconstruction. Ma et al. (2014) proposed a method for short, fully observed, unique time series. Multi-spatial CCM (Clark et al.
, 2015) considered the problem of inferring causality from several short, fully observed snippets of the same dynamical system by computing delay embeddings compatible with the lengths of the time series and aggregating them. In comparison, on top of addressing irregular sampling, our approach computes more informative state-space representations by sharing a model across all segments. Techniques to infer causal direction from incomplete time series have also been proposed, but all rely on the Granger causality framework, which limits their applicability to separable dynamical systems. They use direct partial correlations on regularly sampled data (but with missing values) (Elsegai, 2019) or generalizations of similarity measures for sporadic time series (Bahadori & Liu, 2012). To the best of our knowledge, this is the first work investigating the identification of causal dependencies from short sporadic time series using state-space reconstruction.

3 METHOD. We consider the problem of inferring a causal dependency between two temporal variables from several segments of their multivariate time series X[t] ∈ R^{d_X} and Y[t] ∈ R^{d_Y}. We assume that X[t] and Y[t] have been generated by an unknown dynamical system. In this work, we refer to the dynamical system of a time-varying variable X as the smallest dynamical system that fully describes the dynamics of X. As an example, let's consider the following system of ODEs representing the dynamics of X and Y:

dX(t)/dt = f(X(t))    (1)
dY(t)/dt = g(X(t)) + h(Y(t))    (2)

The dynamical system of X is given by Equation (1). On the other hand, the dynamical system of Y is given by Equations (1) and (2) together, as Equation (1) is required to describe the dynamics of Y. To account for the more general and most frequent case, we consider that those time series are only observed in segments of finite duration. X[t] and Y[t] then consist of collections of N short time series (X^1[t], …, X^N[t]) and (Y^1[t], …, Y^N[t]) respectively. Importantly, each segment of X and Y is observed concomitantly. To proceed with a lighter notation, we'll drop the superscript when referring to a segment of time series. Each of those time series is also sporadic, namely they are not regularly sampled and not all dimensions are observed each time. In this work, we define the notion of causality by considering the equations of the dynamical system as a structural causal model. In this framework, X causes Y if p(Y | do(X)) ≠ p(Y), where do(X) is an intervention on X (Pearl, 2009). Then, if X causes Y, X is part of the dynamical system of Y (X is required to describe the dynamics of Y). In the case of the example described by Equations (1) and (2), X causes Y if g(·) is not a constant function.

3.1 CONVERGENT CROSS MAPPING AND TAKENS' THEOREM. CCM aims at discovering the causal direction between temporal variables in dynamical systems by checking whether the state-space dynamics of their time series can be recovered from one another. As shown above, if X causes Y, X is then contained in the dynamical system of Y, and it should be possible to recover a representation of the dynamical system of X from the dynamical system of Y. A common way to obtain a representation of a dynamical system from its time series relies on Takens' embedding theorem (Takens, 1981).
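The interventional definition above can be illustrated directly on the worked example of Equations (1)-(2). In the sketch below (with illustrative choices of f, g, and h that are not the paper's), forcing X to a different value via do(X) changes Y's trajectory, while Y never enters the dynamics of X.

```python
import numpy as np

def simulate(x0, y0, do_x=None, steps=5000, dt=0.01):
    # dX/dt = f(X), dY/dt = g(X) + h(Y); f, g, h are illustrative choices.
    x, y = np.array(x0, dtype=float), float(y0)
    for _ in range(steps):
        x = x + dt * np.array([x[1], -x[0]])        # f: a harmonic oscillator
        if do_x is not None:
            x = np.array(do_x, dtype=float)         # intervention do(X = do_x)
        y = y + dt * (0.8 * x[0] - 0.5 * y)         # g(X) + h(Y), scalar Y
    return y

# Y under the natural dynamics vs. under an intervention on X:
print(simulate([1.0, 0.0], 0.0), simulate([1.0, 0.0], 0.0, do_x=[2.0, 0.0]))
# The two values differ, so p(Y | do(X)) != p(Y) and X causes Y; intervening
# on Y instead would leave X's trajectory untouched.
```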
Let X[t] ∈ R^{d_X} be issued from a chaotic dynamical system that has a strange attractor M with box-counting dimension d_M, where we define an attractor as the manifold toward which the state of a chaotic dynamical system tends to evolve. The dynamics of this system are specified by a flow on M, φ_(·)(·) : R × M → M, where φ_τ(M_t) = M_{t+τ} and M_t stands for the point on the manifold at time index t. This flow is encoded in the ODE of the system. The observed time series X[t] is then obtained through an observation function f_obs(·): X[t] = f_obs(M_t). Takens' theorem then states that a delay embedding with delay τ and embedding dimension k,

Φ_{k,τ,α}(M_t) = (α(φ_0(M_t)), α(φ_{−τ}(M_t)), …, α(φ_{−kτ}(M_t))),

is an embedding of the strange attractor M if k > 2d_M and α : R^{d_M} → R is a twice-differentiable observation function. More specifically, the embedding map is a diffeomorphism between the original strange attractor manifold M and a shadow attractor manifold M′ generated by the delay embeddings. Under these assumptions, one can then theoretically reconstruct the original time series from the delay embedding. The simplest observation function α consists in simply taking one of the dimensions of observations of the dynamical system. In this case, writing X_i[t] as the i-th dimension of X[t], Takens' theorem ensures that there is a diffeomorphism between the original attractor manifold of the full dynamical system and the shadow manifold M′ that would be generated by X′[t] = (X_i[t], X_i[t−τ], …, X_i[t−kτ]).

To see how this theorem can be used to infer the causal direction, let us consider the manifold M_Z of the joint dynamical system resulting from the concatenation of X[t] and Y[t]. We then generate two shadow manifolds M′_X and M′_Y from the delay embeddings X′[t] = (X_i[t], X_i[t−τ], …, X_i[t−kτ]) and Y′[t] = (Y_j[t], Y_j[t−τ], …, Y_j[t−kτ]). Now, if X unidirectionally causes Y (i.e., Y does not cause X), it means that X is part of an autonomous dynamical system and that Y is part of a larger one, containing X. The attractor of Y is then the same as the one of the joint dynamical system Z. By contrast, the attractor of X is only a subset of it. From Takens' theorem, it is theoretically possible to recover the original M_Z from M′_Y and hence, by extension, to recover M′_X from M′_Y. However, the contrary is not true and it is in general not possible to recover M′_Y from M′_X. The CCM algorithm uses this property to infer causal dependency. It embeds both dynamical systems X and Y and uses k-nearest neighbors to predict points on M′_X from M′_Y and inversely. The result then consists in the correlation of the predictions with the true values. We write Ccm(X, Y) for the Pearson correlation for the task of reconstructing M′_X from M′_Y:

Ccm(X, Y) = Corr(M′_X, M̂′_X)

where M̂′_X stands for the prediction of M′_X obtained from M′_Y. Importantly, this measure is non-symmetric, as a non-injective map between M′_X and M′_Y would lead to an accurate reconstruction being possible in one direction only. To infer that there is a causal link between the predictor dynamical system and the predicted one, this correlation should be high and, importantly, increase with the length of the observed time series, as the observed manifolds become denser.
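The following self-contained sketch runs the whole CCM pipeline on a unidirectionally coupled pair of logistic maps, in the spirit of the classic example from Sugihara et al. (2012). The coupling constants, embedding dimension and delay, and the exponential neighbour weighting are all illustrative choices.

```python
import numpy as np

T, tau, k = 3000, 1, 3
X, Y = np.empty(T), np.empty(T)
X[0], Y[0] = 0.4, 0.2
for t in range(T - 1):
    X[t + 1] = X[t] * (3.8 - 3.8 * X[t])                  # autonomous, chaotic
    Y[t + 1] = Y[t] * (3.5 - 3.5 * Y[t] - 0.1 * X[t])     # driven by X

def delay_embed(s, k, tau):
    # Rows are (s[t], s[t+tau], ..., s[t+k*tau]); an embedding of dimension k+1.
    return np.stack([s[i * tau: len(s) - (k - i) * tau] for i in range(k + 1)], 1)

def ccm(source, target):
    # Ccm(source, target): reconstruct the source from the target's shadow manifold.
    Ms, Mt = delay_embed(source, k, tau), delay_embed(target, k, tau)
    preds = np.empty(len(Ms))
    for i in range(len(Mt)):
        d = np.linalg.norm(Mt - Mt[i], axis=1)
        d[i] = np.inf
        nn = np.argsort(d)[: k + 2]                       # k+2 neighbours on M'_target
        w = np.exp(-d[nn] / max(d[nn][0], 1e-12))
        preds[i] = w @ Ms[nn, 0] / w.sum()
    return np.corrcoef(preds, Ms[:, 0])[0, 1]

print("Ccm(X, Y) =", ccm(X, Y), "  Ccm(Y, X) =", ccm(Y, X))
# Ccm(X, Y) should be high (X causes Y, so M'_Y contains X's dynamics);
# Ccm(Y, X) should be noticeably lower.
```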
The potential results are then interpreted in the following way: (1) X causes Y if one can reconstruct M′_X from M′_Y with high accuracy; (2) X and Y are not causally related (but are not necessarily statistically independent) if neither M′_X nor M′_Y can be reconstructed from the other; (3) X and Y are in a circular causal relation if each of M′_Y and M′_X can be reconstructed from the other. In the extreme case of strong coupling, the two systems are said to be in synchrony, and it becomes hard to distinguish between unidirectional and bidirectional coupling (Ye et al., 2015).
This paper studies short, chaotic time series and uses Takens' theorem to discover the causality between two time series. The main challenge is that for short time series, delay embedding is not possible. Thus, the authors propose to fit a latent neural ODE and theoretically argue that the Neural ODE embeddings can be used in place of the delay maps. The authors provide two sets of experiments, both on simulated data. Unfortunately, they never test the algorithm on real data.
SP:b7b4e29defc84ee37a5a4dcaf2d393363c153b52
Global Self-Attention Networks for Image Recognition
1 INTRODUCTION . Self-attention is a mechanism in neural networks that focuses on modeling long-range dependencies . Its advantage in terms of establishing global dependencies over other mechanisms , e.g. , convolution and recurrence , has made it prevalent in modern deep learning . In computer vision , several recent works have augmented Convolutional Neural Networks ( CNNs ) with global self-attention modules and showed promising results for various image and video understanding tasks ( Bello et al. , 2019 ; Chen et al. , 2018 ; Huang et al. , 2019 ; Shen et al. , 2018 ; Wang et al. , 2018 ; Yue et al. , 2018 ) . For brevity , in the rest of the paper , we refer to self-attention simply as attention . The main challenge in using the global attention mechanism for computer vision tasks is the large spatial dimensions of the input . An input image in a computer vision task typically contains tens of thousands of pixels , and the quadratic computational and memory complexities of the attention mechanism make global attention prohibitively expensive for such large inputs . Because of this , earlier works such as Bello et al . ( 2019 ) ; Wang et al . ( 2018 ) restricted the use of global attention mechanism to low-resolution feature maps in later stages of a deep network . Alternatively , other recent works such as Hu et al . ( 2019 ) ; Ramachandran et al . ( 2019 ) ; Zhao et al . ( 2020 ) restricted the receptive field of the attention operation to small local regions . While both these strategies are effective at capping the resource consumption of attention modules , they deprive the network of the ability to model long-range pixel interactions in its early and middle stages , preventing the attention mechanism from reaching its full potential . Different from the above works , Chen et al . ( 2018 ) ; Huang et al . ( 2019 ) ; Shen et al . ( 2018 ) ; Yue et al . ( 2018 ) made the global attention mechanism efficient by either removing the softmax normalization on the product of queries and keys and changing the order of matrix multiplications involved in the attention computation ( Chen et al. , 2018 ; Shen et al. , 2018 ; Yue et al. , 2018 ) or decomposing one global attention layer into a sequence of multiple axial attention layers ( Huang et al. , 2019 ) . However , all these works use content-only attention which does not take the spatial arrangement of pixels into account . Since images are spatially-structured inputs , an attention mechanism that ignores spatial information is not best-suited for image understanding tasks on its own . Hence , these works incorporate attention modules as auxiliary modules into standard CNNs . To address the above issues , we introduce a new global self-attention module , referred to as the GSA module , that performs attention taking both the content and spatial positions of the pixels into account . This module consists of two parallel layers : a content attention layer and a positional attention layer , whose outputs are summed at the end . The content attention layer attends to all the pixels at once based only on their content . It uses an efficient global attention mechanism similar to Chen et al . ( 2018 ) ; Shen et al . ( 2018 ) whose computational and memory complexities are linear in the number of pixels . The positional attention layer computes the attention map for each pixel based on its own content and its relative spatial positions with respect to other pixels . Following the axial formulation ( Ho et al. , 2019 ; Huang et al. 
, 2019), the positional attention layer is implemented as a column-only attention layer followed by a row-only attention layer. The computational and memory complexities of this axial positional attention layer are O(N√N) in the number of pixels. The proposed GSA module is efficient enough to act as the backbone component of a deep network. Based on this module, we introduce new standalone global attention-based deep networks, referred to as global self-attention networks. A GSA network uses GSA modules instead of convolutions to model pixel interactions. By virtue of the global extent of the GSA module, a GSA network has the ability to model long-range pixel interactions throughout the network. Recently, Wang et al. (2020) also introduced standalone global attention-based deep networks that use an axial attention mechanism for both content and positional attention. Different from Wang et al. (2020), the proposed GSA module uses a non-axial global content attention mechanism that attends to the entire image at once rather than just a row or column. Our experimental results show that GSA-ResNet, a GSA network that adopts the ResNet (He et al., 2016) structure, outperforms the original convolution-based ResNet and various recent global or local attention-based ResNets on the widely-used ImageNet dataset.

MAJOR CONTRIBUTIONS
• GSA module: We introduce a new global attention module that is efficient enough to act as the backbone component of a deep network. Different from Wang et al. (2018); Yue et al. (2018); Chen et al. (2018); Shen et al. (2018); Huang et al. (2019), the proposed module attends to pixels based on both content and spatial positions. Different from Zhao et al. (2020); Hu et al. (2019); Ramachandran et al. (2019), the proposed module attends to the entire input rather than a small local neighborhood. Different from Wang et al. (2020), the proposed GSA module uses a non-axial global content attention mechanism that attends to the entire image at once rather than just a row or column.
• GSA network: We introduce new standalone global attention-based networks that use GSA modules instead of spatial convolutions to model pixel interactions. This is one of the first works (Wang et al. (2020) being the only other work) to explore standalone global attention-based networks for image understanding tasks. Existing global attention-based works insert their attention modules into CNNs as auxiliary blocks at later stages of the network, and existing standalone attention-based networks use local attention modules.
• Experiments: We show that the proposed GSA networks outperform the corresponding CNNs significantly on the CIFAR-100 and ImageNet datasets while using fewer parameters and computations. We also show that the GSA networks outperform various existing attention-based networks, including the latest standalone global attention-based network of Wang et al. (2020), on the ImageNet dataset.

2 RELATED WORKS. 2.1 AUXILIARY VISUAL ATTENTION. Wang et al. (2018) proposed the non-local block, which is the first adaptation of the dot-product attention mechanism for long-range dependency modeling in computer vision. They empirically verified its effectiveness on video classification and object detection. Follow-up works extended it to different tasks such as generative adversarial image modeling (Zhang et al., 2019; Brock et al., 2019), video person re-identification (Liao et al.
, 2018), image de-raining (Li et al., 2018), etc. Several recent works focused on mitigating the high computational cost of Wang et al. (2018). Chen et al. (2018); Shen et al. (2018) utilized the associative property of matrix multiplication to reduce the complexity from quadratic to linear. Huang et al. (2019) proposed to decompose global attention into row attention and column attention to save resources. Recently, a series of works (Sun et al., 2019; Carion et al., 2020) have used Transformers (Vaswani et al., 2017) for various computer vision applications. These works first use a deep CNN to extract semantic features, and then use a Transformer to model interactions among the high-level semantic features. For example, Carion et al. (2020) used a Transformer to model object-level interactions for object detection, and Sun et al. (2019) used a Transformer to model inter-frame dependencies for video representation learning. All these methods use attention modules as auxiliary modules to enhance the long-range dependency modeling of a CNN, and relegate most of the feature extraction work to the convolution operation. In contrast, a GSA network uses attention as the primitive operation instead of spatial convolution.

2.2 BACKBONE VISUAL ATTENTION. Bello et al. (2019) were the first to test attention as a primitive operation for computer vision tasks. However, they used the costly non-local block (Wang et al., 2018), which prevented them from fully replacing convolutional layers. Ramachandran et al. (2019), Hu et al. (2019) and Zhao et al. (2020) solved this problem by limiting the receptive field of attention to a local neighborhood. In contrast to these works, the proposed GSA network uses global attention throughout the network and is still efficient. Recently, Wang et al. (2020) used axial decomposition to make global attention efficient. Different from them, the proposed GSA network uses a non-axial global content attention mechanism, which is better than the axial mechanism as later shown in the experiments.

3 GLOBAL SELF-ATTENTION NETWORK. 3.1 GLOBAL SELF-ATTENTION MODULE. Let F^i ∈ R^{WH×d_in} and F^o ∈ R^{WH×d_out}, respectively, denote the (spatially) flattened input and output feature maps of the proposed GSA module. Here, W, H represent the spatial dimensions, and d_in, d_out represent the channel dimensions. Each pixel in the output feature map is generated by aggregating information from every pixel in the input feature map based on their content and spatial positions. Let K = [k_{ij}] ∈ R^{WH×d_k}, Q = [q_{ij}] ∈ R^{WH×d_k}, and V = [v_{ij}] ∈ R^{WH×d_out} respectively denote the matrices of keys, queries, and values generated using three 1×1 convolutions on the input feature map F^i. Here, d_k denotes the number of channels used for keys and queries. Each row in these matrices corresponds to one input pixel. The proposed GSA module (see Fig. 1) consists of two parallel layers: a content attention layer and a positional attention layer.

3.1.1 CONTENT ATTENTION LAYER. This layer uses the keys, queries, and values to generate new features F^c = [f^c_{ij}] ∈ R^{WH×d_out} using the following content-based global attention operation:

F^c = Q(ρ(K^⊤)V),    (1)

where K^⊤ denotes the matrix transpose of K, and ρ denotes the operation of applying softmax normalization for each row separately.
This attention operation can be interpreted as first aggregating the pixel features in V into d_k global context vectors using the weights in ρ(K^⊤), and then redistributing the global context vectors back to individual pixels using the weights in Q. The computational and memory complexities of this operation are O(N) in the number of pixels. This attention operation is similar to the attention operation used in Chen et al. (2018); Shen et al. (2018) except that it does not use softmax normalization on queries. Normalizing the queries constrains the output features to be convex combinations of the global context vectors. As these constraints could restrict the expressive power of the attention mechanism, we remove the softmax normalization on queries. This allows the output features to span the entire subspace of the d_k global context vectors. When we experimented with softmax normalization on the queries, the top-1 accuracy on the ImageNet validation dataset decreased significantly (1%).
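Here is a sketch of Equation (1) in PyTorch. The tensor sizes are illustrative, and linear layers stand in for the 1×1 convolutions on the flattened feature map; softmax is applied over pixels for each key channel (the rows of K^⊤), the queries are left unnormalized, and the cost stays linear in the number of pixels.

```python
import torch

WH, d_in, d_k, d_out = 64 * 64, 32, 16, 32     # illustrative sizes
F_in = torch.randn(WH, d_in)                   # flattened input feature map F^i
to_k = torch.nn.Linear(d_in, d_k, bias=False)  # stand-ins for the 1x1 convolutions
to_q = torch.nn.Linear(d_in, d_k, bias=False)
to_v = torch.nn.Linear(d_in, d_out, bias=False)

K, Q, V = to_k(F_in), to_q(F_in), to_v(F_in)
# rho(K^T): softmax over the WH pixels, separately per key channel.
context = torch.softmax(K, dim=0).transpose(0, 1) @ V   # d_k global context vectors
F_c = Q @ context                                       # redistribute to pixels
print(F_c.shape)   # (WH, d_out); cost O(WH * d_k * d_out), linear in pixel count
```

Computing ρ(K^⊤)V first, rather than (Qρ(K^⊤))V, is what avoids ever materializing a WH×WH attention map.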
There have been multiple attempts to use self-attention in computer vision backbones for image classification and object detection. Most of these approaches either tried to combine convolution with global self-attention, or to replace it completely with a local self-attention operation. The proposed approach naturally combines the two, by employing a query-key-value switching trick together with axial positional attention.
SP:474e2b9be8a3ec69a48c4ccd04a7e390ebb96347
Randomized Automatic Differentiation
1 INTRODUCTION. Deep neural networks have taken center stage as a powerful way to construct and train massively-parametric machine learning (ML) models for supervised, unsupervised, and reinforcement learning tasks. There are many reasons for the resurgence of neural networks—large data sets, GPU numerical computing, technical insights into overparameterization, and more—but one major factor has been the development of tools for automatic differentiation (AD) of deep architectures. Tools like PyTorch and TensorFlow provide a computational substrate for rapidly exploring a wide variety of differentiable architectures without performing tedious and error-prone gradient derivations. The flexibility of these tools has enabled a revolution in AI research, but the underlying ideas for reverse-mode AD go back decades. While tools like PyTorch and TensorFlow have received huge dividends from a half-century of AD research, they are also burdened by the baggage of design decisions made in a different computational landscape. The research on AD that led to these ubiquitous deep learning frameworks is focused on the computation of Jacobians that are exact up to numerical precision. However, in modern workflows these Jacobians are used for stochastic optimization. We ask: Why spend resources on exact gradients when we're going to use stochastic optimization? This question is motivated by the surprising realization over the past decade that deep neural network training can be performed almost entirely with first-order stochastic optimization. In fact, empirical evidence supports the hypothesis that the regularizing effect of gradient noise assists model generalization (Keskar et al., 2017; Smith & Le, 2018; Hochreiter & Schmidhuber, 1997). Stochastic gradient descent variants such as AdaGrad (Duchi et al., 2011) and Adam (Kingma & Ba, 2015) form the core of almost all successful optimization techniques for these models, using small subsets of the data to form the noisy gradient estimates.

[Figure 1: a Python program computing f(x1, x2) through the intermediates a = exp(x1), b = sin(x2), c = b*x2, d = a*c, returning a*d; its computational graph; and the corresponding linearized computational graph whose edge weights are the local partials exp(x1), cos(x2), x2, b, c, a, d, a. Panel (d) illustrates paths from inputs to the output.]

The goals and assumptions of automatic differentiation as performed in classical and modern systems are mismatched with those required by stochastic optimization. Traditional AD computes the derivative or Jacobian of a function accurately to numerical precision. This accuracy is required for many problems in applied mathematics which AD has served, e.g., solving systems of differential equations. But in stochastic optimization we can make do with inaccurate gradients, as long as our estimator is unbiased and has reasonable variance. We ask the same question that motivates mini-batch SGD: why compute an exact gradient if we can get noisy estimates cheaply? By thinking of this question in the context of AD, we can go beyond mini-batch SGD to more general schemes for developing cheap gradient estimators: in this paper, we focus on developing gradient estimators with low memory cost.
Although previous research has investigated approximations in the forward or reverse pass of neural networks to reduce computational requirements, here we replace deterministic AD with randomized automatic differentiation (RAD), trading off computation for variance inside AD routines when imprecise gradient estimates are tolerable, while retaining unbiasedness.

2 AUTOMATIC DIFFERENTIATION. Automatic (or algorithmic) differentiation is a family of techniques for taking a program that computes a differentiable function f : R^n → R^m, and producing another program that computes the associated derivatives; most often the Jacobian: J[f] = f′ : R^n → R^{m×n}. (For a comprehensive treatment of AD, see Griewank & Walther (2008); for an ML-focused review see Baydin et al. (2018).) In most machine learning applications, f is a loss function that produces a scalar output, i.e., m = 1, for which the gradient with respect to parameters is desired. AD techniques are contrasted with the method of finite differences, which approximates derivatives numerically using a small but non-zero step size, and also distinguished from symbolic differentiation, in which a mathematical expression is processed using standard rules to produce another mathematical expression, although Elliott (2018) argues that the distinction is simply whether or not it is the compiler that manipulates the symbols. There are a variety of approaches to AD: source-code transformation (e.g., Bischof et al. (1992); Hascoet & Pascual (2013); van Merrienboer et al. (2018)), execution tracing (e.g., Walther & Griewank (2009); Maclaurin et al.), manipulation of explicit computational graphs (e.g., Abadi et al. (2016); Bergstra et al. (2010)), and category-theoretic transformations (Elliott, 2018). AD implementations exist for many different host languages, although they vary in the extent to which they take advantage of native programming patterns, control flow, and language features. Regardless of whether it is constructed at compile-time, run-time, or via an embedded domain-specific language, all AD approaches can be understood as manipulating the linearized computational graph (LCG) to collapse out intermediate variables. Figure 1 shows the LCG for a simple example. These computational graphs are always directed acyclic graphs (DAGs) with vertices as variables. Let the outputs of f be y_j, the inputs θ_i, and the intermediates z_l. AD can be framed as the computation of a partial derivative as a sum over all paths through the LCG DAG (Bauer, 1974):

∂y_j/∂θ_i = J_θ[f]_{j,i} = Σ_{[i→j]} Π_{(k,l)∈[i→j]} ∂z_l/∂z_k    (1)

where [i→j] indexes paths from vertex i to vertex j and (k,l) ∈ [i→j] denotes the set of edges in that path. See Figure 1d for an illustration. Although general, this naïve sum over paths does not take advantage of the structure of the problem and so, as in other kinds of graph computations, dynamic programming (DP) provides a better approach. DP collapses substructures of the graph until it becomes bipartite and the remaining edges from inputs to outputs represent exactly the entries of the Jacobian matrix. This is referred to as the Jacobian accumulation problem (Naumann, 2004) and there are a variety of ways to manipulate the graph, including vertex, edge, and face elimination (Griewank & Naumann, 2002).
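To ground Equation (1), the sketch below enumerates the paths of the Figure 1 example and sums the products of edge partials, recovering the exact derivative. The edge set is transcribed from the figure's local partials, and the closed form used as a check follows from f = exp(2·x1)·sin(x2)·x2.

```python
import math

x1, x2 = 0.3, 1.1
a, b = math.exp(x1), math.sin(x2)
c, d = b * x2, a * c

# Edges of the linearized computational graph: (src, dst) -> local partial.
edges = {("x1", "a"): math.exp(x1), ("x2", "b"): math.cos(x2),
         ("x2", "c"): b, ("b", "c"): x2, ("a", "d"): c, ("c", "d"): a,
         ("a", "f"): d, ("d", "f"): a}

def paths(src, dst):
    # Yield the list of edge weights along every path src -> dst in the DAG.
    if src == dst:
        yield []
        return
    for (u, v), w in edges.items():
        if u == src:
            for rest in paths(v, dst):
                yield [w] + rest

df_dx1 = sum(math.prod(p) for p in paths("x1", "f"))    # Bauer's formula
print(df_dx1, 2 * math.exp(2 * x1) * math.sin(x2) * x2) # matches the closed form
```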
Forward-mode AD and reverse-mode AD (backpropagation) are special cases of more general dynamic programming strategies to perform this summation; determination of the optimal accumulation schedule is unfortunately NP-complete (Naumann, 2008). While the above formulation, in which each variable is a scalar, can represent any computational graph, it can lead to structures that are difficult to reason about. Often we prefer to manipulate vectors and matrices, and we can instead let each intermediate z_l represent a d_l-dimensional vector. In this case, ∂z_l/∂z_k ∈ R^{d_l×d_k} represents the intermediate Jacobian of the operation z_k → z_l. Note that Equation 1 now expresses the Jacobian of f as a sum over chained matrix products.

3 RANDOMIZING AUTOMATIC DIFFERENTIATION. We introduce techniques that could be used to decrease the resource requirements of AD when used for stochastic optimization. We focus on functions with a scalar output where we are interested in the gradient of the output with respect to some parameters, J_θ[f]. Reverse-mode AD efficiently calculates J_θ[f], but requires the full linearized computational graph to either be stored during the forward pass, or to be recomputed during the backward pass using intermediate variables recorded during the forward pass. For large computational graphs this could provide a large memory burden. The most common technique for reducing the memory requirements of AD is gradient checkpointing (Griewank & Walther, 2000; Chen et al., 2016), which saves memory by adding extra forward pass computations. Checkpointing is effective when the number of "layers" in a computation graph is much larger than the memory required at each layer. We take a different approach; we instead aim to save memory by increasing gradient variance, without extra forward computation. Our main idea is to consider an unbiased estimator Ĵ_θ[f] such that E[Ĵ_θ[f]] = J_θ[f], which allows us to save memory required for reverse-mode AD. Our approach is to determine a sparse (but random) linearized computational graph during the forward pass such that reverse-mode AD applied on the sparse graph yields an unbiased estimate of the true gradient. Note that the original computational graph is used for the forward pass, and randomization is used to determine an LCG to use for the backward pass in place of the original computation graph. We may then decrease memory costs by storing the sparse LCG directly or storing intermediate variables required to compute the sparse LCG. In this section we provide general recipes for randomizing AD by sparsifying the LCG. In Sections 4 and 5 we apply these recipes to develop specific algorithms for neural networks and linear PDEs which achieve concrete memory savings.

3.1 PATH SAMPLING. Observe that in Bauer's formula each Jacobian entry is expressed as a sum over paths in the LCG. A simple strategy is to sample paths uniformly at random from the computation graph, and form a Monte Carlo estimate of Equation 1. Naïvely this could take multiple passes through the graph. However, multiple paths can be sampled without significant computation overhead by performing a topological sort of the vertices and iterating through vertices, sampling multiple outgoing edges for each. We provide a proof and a detailed algorithm in the appendix. Dynamic programming methods such as reverse-mode automatic differentiation can then be applied to the sparsified LCG.
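A sketch of the path-sampling idea on the same toy graph from Figure 1: at each vertex one outgoing edge is chosen uniformly, and the product of local partials is reweighted by the fan-outs so the estimate is unbiased. Uniform single-edge choice is an illustrative sampling scheme; the paper's algorithm samples multiple edges per vertex after a topological sort.

```python
import math, random

x1, x2 = 0.3, 1.1
a, b = math.exp(x1), math.sin(x2)
c, d = b * x2, a * c
# Outgoing edges reachable from x1, with their local partials.
out_edges = {"x1": [("a", math.exp(x1))], "a": [("d", c), ("f", d)],
             "d": [("f", a)], "f": []}

def sample_path_estimate(src="x1"):
    # Walk one random path to the output, accumulating partials times
    # importance weights 1/p = fan-out at each vertex.
    prod, v = 1.0, src
    while out_edges[v]:
        nxt = random.choice(out_edges[v])
        prod *= nxt[1] * len(out_edges[v])
        v = nxt[0]
    return prod if v == "f" else 0.0

random.seed(0)
est = sum(sample_path_estimate() for _ in range(20000)) / 20000
print(est, 2 * math.exp(2 * x1) * math.sin(x2) * x2)   # noisy but unbiased
```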
3.2 RANDOM MATRIX INJECTION. In computation graphs consisting of vector operations, the vectorized computation graph is a more compact representation. We introduce an alternative view on sampling paths in this case. A single path in the vectorized computation graph represents many paths in the underlying scalar computation graph. As an example, Figure 2c is a vector representation of Figure 2b. For this example,

∂y/∂θ = (∂y/∂C)(∂C/∂B)(∂B/∂A)(∂A/∂θ)    (2)

where A, B, C are vectors with entries a_i, b_i, c_i; ∂C/∂B, ∂B/∂A are 3×3 Jacobian matrices for the intermediate operations, ∂y/∂C is 1×3, and ∂A/∂θ is 3×1. We now note that the contribution of the path p = θ → a_1 → b_2 → c_2 → y to the gradient is

(∂y/∂C) P_2 (∂C/∂B) P_2 (∂B/∂A) P_1 (∂A/∂θ)    (3)

where P_i = e_i e_i^T (the outer product of standard basis vectors). Sampling from {P_1, P_2, P_3} and right-multiplying a Jacobian is equivalent to sampling the paths passing through a vertex in the scalar graph. In general, if we have a transition B → C in a vectorized computational graph, where B ∈ R^d, C ∈ R^m, we can insert a random matrix P = (d/k) Σ_{s=1}^k P_s, where each P_s is sampled uniformly from {P_1, P_2, …, P_d}. With this construction, E[P] = I_d, so

E[(∂C/∂B) P] = ∂C/∂B.    (4)

If we have a matrix chain product, we can use the fact that the expectation of a product of independent random variables is equal to the product of their expectations, so drawing independent random matrices P_B, P_C would give

E[(∂y/∂C) P_C (∂C/∂B) P_B] = (∂y/∂C) E[P_C] (∂C/∂B) E[P_B] = (∂y/∂C)(∂C/∂B)    (5)

Right-multiplication by P may be achieved by sampling the intermediate Jacobian: one does not need to actually assemble and multiply the two matrices. For clarity we adopt the notation S_P[∂C/∂B] = (∂C/∂B)P. This is sampling (with replacement) k out of the d vertices represented by B, and only considering paths that pass from those vertices. The important properties of P that enable memory savings with an unbiased approximation are

E[P] = I_d and P = RR^T, R ∈ R^{d×k}, k < d.    (6)

We could therefore consider other matrices with the same properties. In our additional experiments in the appendix, we also let R be a random projection matrix of independent Rademacher random variables, a construction common in compressed sensing and randomized dimensionality reduction. In vectorized computational graphs, we can imagine a two-level sampling scheme. We can both sample paths from the computational graph, where each vertex on the path corresponds to a vector, and we can also sample within each vector path, with sampling performed via matrix injection as above. In many situations the full intermediate Jacobian for a vector operation is unreasonable to store. Consider the operation B → C where B, C ∈ R^d. The Jacobian is d×d. Thankfully many common operations are element-wise, leading to a diagonal Jacobian that can be stored as a d-vector. Another common operation is the matrix-vector product. Consider Ab = c; then ∂c/∂b = A. Although A has many more entries than c or b, in many applications A is either a parameter to be optimized or is easily recomputed. Therefore in our implementations, we do not directly construct and sparsify the Jacobians. We instead sparsify the input vectors or the compact version of the Jacobian in a way that has the same effect. Unfortunately, there are some practical operations, such as softmax, that do not have a compactly-representable Jacobian and for which this is not possible.
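The unbiasedness claims in Equations (4)-(6) are easy to check numerically. Below, P = (d/k)·Σ P_s is built from k coordinate projectors sampled with replacement, so E[P] = I_d and the sampled chain product matches the full product in expectation; the dimensions and the random Jacobians are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
d, k = 8, 2
J1, J2 = rng.normal(size=(1, d)), rng.normal(size=(d, d))   # dy/dC and dC/dB

def sample_P():
    # P = (d/k) * sum of k coordinate projectors e_s e_s^T, with replacement.
    idx = rng.integers(0, d, size=k)
    P = np.zeros((d, d))
    for s in idx:
        P[s, s] += d / k
    return P                                 # E[P] = I_d, and P = R R^T has rank <= k

est = np.mean([J1 @ sample_P() @ J2 for _ in range(200000)], axis=0)
print(np.abs(est - J1 @ J2).max())           # Monte Carlo error shrinks toward 0
```

The memory saving comes from the rank-k factorization P = RR^T in Equation (6): only the k sampled columns of the intermediate Jacobian ever need to be stored.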
In the context of deep learning, back-propagation is already stochastic at the sample level: minibatches are used to attain better efficiency than full-dataset gradient descent. The authors ask whether the gradient computation within each single minibatch or sample can be further randomized, with the goal of still achieving strong model accuracy. In modern deep learning, training memory consumption is high due to activation caching, so this randomized approach can help attain strong model accuracy under memory constraints.
SP:bf70c9e16933774746d621a5b8475843e723ac24
A Simple Unified Information Regularization Framework for Multi-Source Domain Adaptation
1 INTRODUCTION. Although a large number of studies have demonstrated the ability of deep neural networks to solve challenging tasks, the tasks solved by networks are mostly confined to a similar type or a single domain. One remaining challenge is the problem known as domain shift (Gretton et al. (2009)), where a direct transfer of information gleaned from a single source domain to unseen target domains may lead to significant performance impairment. Domain adaptation (DA) approaches aim to mitigate this problem by learning to map data of both domains onto a common feature space. Whereas several theoretical results (Ben-David et al. (2007); Blitzer et al. (2008); Zhao et al. (2019a)) and algorithms for DA (Long et al. (2015; 2017); Ganin et al. (2016)) have focused on the case in which only a single-source domain dataset is given, we consider a more challenging and generalized problem of knowledge transfer, referred to as multi-source unsupervised DA (MDA). Following a seminal theoretical result on MDA (Blitzer et al. (2008); Ben-David et al. (2010)), technical advances have been made, mainly on adversarial methods (Xu et al. (2018); Zhao et al. (2019c)). While most adversarial MDA methods use multiple independent domain discriminators (Xu et al. (2018); Zhao et al. (2018); Li et al. (2018); Zhao et al. (2019c;b)), the potential pitfalls of this setting have not been fully explored. The existing works do not provide a theoretical guarantee that the unnecessary domain-specific information is fully filtered out, because the domain-discriminative information is inevitably distributed across multiple discriminators. For example, the multiple domain discriminators focus only on estimating the domain shift between the source domains and the target, while the discrepancies between the source domains are neglected, making it hard to align all the given domains. This necessitates garnering the domain-discriminative information with a unified discriminator. Moreover, the multiple-domain-discriminator setting is not scalable in terms of computational resources, especially when a large number of source domains is given, e.g., medical reports from multiple patients. Finally, it may undermine the stability of training, as earlier works solve multiple independent adversarial minimax problems. To overcome such limitations, we propose a novel MDA method, called Multi-source Information-regularized Adaptation Networks (MIAN), that constrains the mutual information between latent representations and domain labels. First, we show that such mutual information regularization is closely related to the explicit optimization of the H-divergence between the source and target domains. This affords the theoretical insight that conventional adversarial DA can be translated into an information-theoretic regularization problem. Second, based on our findings, we propose a new optimization problem for MDA: minimizing the adversarial loss over multiple domains with a single domain discriminator. We show that the domain shift between each pair of source domains can be indirectly penalized, which is known to be beneficial in MDA (Li et al. (2018); Peng et al. (2019)), with a single domain discriminator. Moreover, by analyzing existing studies in terms of information regularization, we found that the variance of the stochastic gradients increases when using multiple discriminators.
Despite its structural simplicity , we found that MIAN works efficiently across a wide variety of MDA scenarios , including the DIGITS-Five ( Peng et al . ( 2019 ) ) , Office-31 ( Saenko et al . ( 2010 ) ) , and Office-Home datasets ( Venkateswara et al . ( 2017 ) ) . Intriguingly , MIAN reliably and significantly outperformed several state-of-the-art methods that either employ a domain discriminator separately for each source domain ( Xu et al . ( 2018 ) ) or align the moments of deep feature distribution for every pairwise domain ( Peng et al . ( 2019 ) ) . 2 RELATED WORKS . Several DA methods have been used in attempt to learn domain-invariant representations . Along with the increasing use of deep neural networks , contemporary work focuses on matching deep latent representations from the source domain with those from the target domain . Several measures have been introduced to handle domain shift , such as maximum mean discrepancy ( MMD ) ( Long et al . ( 2014 ; 2015 ) ) , correlation distance ( Sun et al . ( 2016 ) ; Sun & Saenko ( 2016 ) ) , and Wasserstein distance ( Courty et al . ( 2017 ) ) . Recently , adversarial DA methods ( Ganin et al . ( 2016 ) ; Tzeng et al . ( 2017 ) ; Hoffman et al . ( 2017 ) ; Saito et al . ( 2018 ; 2017 ) ) have become mainstream approaches owing to the development of generative adversarial networks ( Goodfellow et al . ( 2014 ) ) . However , the abovementioned single-source DA approaches inevitably sacrifice performance for the sake of multi-source DA . Some MDA studies ( Blitzer et al . ( 2008 ) ; Ben-David et al . ( 2010 ) ; Mansour et al . ( 2009 ) ; Hoffman et al . ( 2018 ) ) have provided the theoretical background for algorithm-level solutions . ( Blitzer et al . ( 2008 ) ; Ben-David et al . ( 2010 ) ) explore the extended upper bound of true risk on unlabeled samples from the target domain with respect to a weighted combination of multiple source domains . Following these theoretical studies , MDA studies with shallow models ( Duan et al . ( 2012b ; a ) ; Chattopadhyay et al . ( 2012 ) ) as well as with deep neural networks ( Mancini et al . ( 2018 ) ; Peng et al . ( 2019 ) ; Li et al . ( 2018 ) ) have been proposed . Recently , some adversarial MDA methods have also been proposed . Xu et al . ( 2018 ) implemented a k-way domain discriminator and classifier to battle both domain and category shifts . Zhao et al . ( 2018 ) also used multiple discriminators to optimize the average case generalization bounds . Zhao et al . ( 2019c ) chose relevant source training samples for the DA by minimizing the empirical Wasserstein distance between the source and target domains . Instead of using separate encoders , domain discriminators or classifiers for each source domain as in earlier works , our approach uses unified networks , thereby improving resource-efficiency and scalability . Several existing MDA works have proposed methods to estimate the source domain weights following ( Blitzer et al . ( 2008 ) ; Ben-David et al . ( 2010 ) ) . Mansour et al . ( 2009 ) assumed that the target hypothesis can be approximated by a convex combination of the source hypotheses . ( Peng et al . ( 2019 ) ; Zhao et al . ( 2018 ) ) suggested ad-hoc schemes for domain weights based on the empirical risk of each source domain . Li et al . ( 2018 ) computed a softmax-transformed weight vector using the empirical Wasserstein-like measure instead of the empirical risks . 
Compared to those proposed methods without robust theoretical justifications, our analysis does not require any assumption on, or estimation of, the domain coefficients. In our framework, the representations are distilled to be independent of the domain, thereby rendering the performance relatively insensitive to explicit weighting strategies.

3 THEORETICAL INSIGHTS. We first introduce the notations for the MDA problem in classification. A set of source domains and the target domain are denoted by {D_{S_i}}_{i=1}^N and D_T, respectively. Let X_{S_i} = {x^j_{S_i}}_{j=1}^m and Y_{S_i} = {y^j_{S_i}}_{j=1}^m be a set of m i.i.d. samples from D_{S_i}. Let X_T = {x^j_T}_{j=1}^m ∼ (D^X_T)^m be the set of m i.i.d. samples generated from the marginal distribution D^X_T. The domain label and its probability distribution are denoted by V and P_V(v), where v ∈ V and V is the set of domain labels. In line with prior works (Hoffman et al. (2012); Gong et al. (2013); Mancini et al. (2018); Gong et al. (2019)), the domain label can generally be treated as a stochastic latent random variable in our framework. However, for simplicity, we take the empirical version of the true distributions with given samples, assuming that the domain labels for all samples are known. The latent representation of the sample is given by Z, and the encoder is defined as F : X → Z, with X and Z representing the data space and latent space, respectively. Accordingly, Z_{S_i} and Z_T refer to the outputs of the encoder F(X_{S_i}) and F(X_T), respectively. For notational simplicity, we will omit the index i from D_{S_i}, X_{S_i} and Z_{S_i} when N = 1. A classifier is defined as C : Z → Y, where Y is the class label space.

3.1 PROBLEM FORMULATION. For comparison with our formulation, we recast single-source DA as a constrained optimization problem. The true risk ε_T(h) on unlabeled samples from the target domain is bounded above by the sum of three terms (Ben-David et al. (2010)): (1) the true risk ε_S(h) of hypothesis h on the source domain; (2) the H-divergence d_H(D_S, D_T) between the source and target domain distributions; and (3) the optimal joint risk λ*.

Theorem 1 (Ben-David et al. (2010)). Let the hypothesis class H be a set of binary classifiers h : X → {0, 1}. Then for the given domain distributions D_S and D_T, ∀h ∈ H,

ε_T(h) ≤ ε_S(h) + d_H(D_S, D_T) + λ*,    (1)

where d_H(D_S, D_T) = 2 sup_{h∈H} | E_{x∼D^X_S}[I(h(x) = 1)] − E_{x∼D^X_T}[I(h(x) = 1)] | and I(a) is an indicator function whose value is 1 if a is true, and 0 otherwise.

The empirical H-divergence d̂_H(X_S, X_T) can be computed as follows (Ben-David et al. (2010)):

Lemma 1.

d̂_H(X_S, X_T) = 2(1 − min_{h∈H} [ (1/m) Σ_{x∈X_S} I[h(x) = 1] + (1/m) Σ_{x∈X_T} I[h(x) = 0] ])    (2)

Following Lemma 1, a domain classifier h : Z → V can be used to compute the empirical H-divergence. Suppose the optimal joint risk λ* is sufficiently small, as assumed in most adversarial DA studies (Saito et al. (2017); Chen et al. (2019)).
Thus , one can obtain the ideal encoder and classifier minimizing the upper bound of $\epsilon_T(h)$ by solving the following min-max problem : $$F^* , C^* = \arg\min_{F , C} L(F , C) + \beta \hat{d}_{\mathcal{H}}(Z_S , Z_T) = \arg\min_{F , C} \max_{h \in \mathcal{H}} L(F , C) + \beta \frac{1}{m} \Big( \sum_{i : z_i \in Z_S} I [ h(z_i) = 1 ] + \sum_{j : z_j \in Z_T} I [ h(z_j) = 0 ] \Big) , \quad (3)$$ where $L(F , C)$ is the loss function on samples from the source domain , $\beta$ is a Lagrangian multiplier , $\mathcal{V} = \{0 , 1\}$ such that each source instance and target instance are labeled as 1 and 0 , respectively , and $h$ is the binary domain classifier . Note that the latter min-max problem is obtained by converting $-\min$ into $\max$ and removing the constant term from Lemma 1 .
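To make Eq . ( 3 ) concrete , here is a minimal sketch of how such a min-max objective is commonly optimized with alternating updates : the binary domain classifier $h$ is trained to separate source from target representations , and the encoder is then updated with inverted domain labels ( a common substitute for a gradient-reversal layer ) . The network shapes , the data dimensions , and the value of $\beta$ are illustrative assumptions , and this sketch implements the generic single-source objective rather than the MIAN architecture itself .

```python
import torch
import torch.nn as nn

# Hypothetical shapes: encoder F maps X -> Z, classifier C maps Z -> Y,
# and h is the binary domain classifier from Lemma 1.
F = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU())
C = nn.Linear(128, 10)
h = nn.Linear(128, 1)

opt_fc = torch.optim.Adam(list(F.parameters()) + list(C.parameters()), lr=1e-3)
opt_h = torch.optim.Adam(h.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
ce = nn.CrossEntropyLoss()
beta = 0.1  # Lagrangian multiplier in Eq. (3); an illustrative value

def train_step(x_s, y_s, x_t):
    z_s, z_t = F(x_s), F(x_t)
    ones, zeros = torch.ones(len(x_s), 1), torch.zeros(len(x_t), 1)

    # Inner max: train h to label source representations 1 and target 0,
    # which evaluates the empirical H-divergence term of Eq. (3).
    d_loss = bce(h(z_s.detach()), ones) + bce(h(z_t.detach()), zeros)
    opt_h.zero_grad(); d_loss.backward(); opt_h.step()

    # Outer min: source classification loss plus the adversarial term;
    # inverting the domain labels pushes F toward domain-invariant Z.
    adv = bce(h(z_s), zeros) + bce(h(z_t), ones)
    loss = ce(C(z_s), y_s) + beta * adv
    opt_fc.zero_grad(); loss.backward(); opt_fc.step()
    return loss.item()
```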
This paper studies the multi-source domain adaptation problem. The authors examine existing MDA solutions, i.e., using a domain discriminator for each source-target pair, and argue that these are likely to distribute the domain-discriminative information across multiple discriminators. Through a theoretical analysis from the information-regularization point of view, the authors present a simple yet powerful architecture called the multi-source information-regularized adaptation network, MIAN.
SP:5b707bffe506d9556ffedbe49425c57d0e21c9fa
Three Dimensional Reconstruction of Botanical Trees with Simulatable Geometry
1 INTRODUCTION . Human-inhabited outdoor environments typically contain ground surfaces such as grass and roads , transportation vehicles such as cars and bikes , buildings and structures , and humans themselves , but are also typically intentionally populated by a large number of trees and shrubbery ; most of the motion in such environments comes from humans , their vehicles , and wind-driven plants/trees . Tree reconstruction and simulation are obviously useful for AR/VR , architectural design and modeling , film special effects , etc . For example , when filming actors running through trees , one would like to create virtual versions of those trees with which a chasing dinosaur could interact . Other uses include studying roots and plants for agriculture ( Zheng et al. , 2011 ; Estrada et al. , 2015 ; Fuentes et al. , 2017 ) or assessing the health of trees especially in remote locations ( similar in spirit to Zuffi et al . ( 2018 ) ) . 2.5D data , i.e . 2D images with some depth information , is typically sufficient for robotic navigation , etc . ; however , there are many problems that require true 3D scene understanding to the extent one could 3D print objects and have accurate geodesics . Whereas navigating around objects might readily generalize into categories or strategies such as ‘ move left , ’ ‘ move right , ’ ‘ step up , ’ ‘ go under , ’ etc. , the 3D object understanding required for picking up a cup , knocking down a building , moving a stack of bricks or a pile of dirt , or simulating a tree moving in the wind requires significantly higher fidelity . As opposed to random trial and error , humans often use mental simulations to better complete a task , e.g . consider stacking a card tower , avoiding a falling object , or hitting a baseball ( visualization is quite important in sports ) ; thus , physical simulation can play an important role in end-to-end tasks , e.g . see Kloss et al . ( 2017 ) ; Peng et al . ( 2017 ) ; Jiang & Liu ( 2018 ) for examples of combining simulation and learning . Accurate 3D shape reconstruction is still quite challenging . Recently , Malik argued ( Jitendra Malik , Stanford cs231n guest lecture , 29 May 2018 ) that one should not apply general-purpose reconstruction algorithms to , say , a car and a tree and expect both reconstructions to be of high quality . Rather , he said that one should use domain-specific knowledge as he has done for example in Kanazawa et al . ( 2018 ) . Another example of this specialization strategy is to rely on the prior that many indoor surfaces are planar in order to reconstruct office spaces ( Huang et al. , 2017 ) or entire buildings ( Armeni et al. , 2016 ; 2017 ) . Along the same lines , Zuffi et al . ( 2018 ) uses a base animal shape as a prior for their reconstructions of wild animals . Thus , we similarly take a specialized approach using a generalized cylinder prior for both large and medium scale features . In Section 3 , we discuss our constraints on data collection as well as the logistics behind the choices we made for the hardware ( cameras and drones ) and software ( structure from motion , multi-view stereo , inverse rendering , etc . ) used to obtain our raw and processed data . Section 4 discusses our use of machine learning , and Section 5 presents a number of experimental results . In Appendices A , B , and C we describe how we create geometry from the data with enough efficacy for physical simulation . 2 PREVIOUS WORK .
Tree Modeling and Reconstruction : Researchers in computer graphics have been interested in modeling trees and plants for decades ( Lindenmayer , 1968 ; Bloomenthal , 1985 ; Weber & Penn , 1995 ; Prusinkiewicz et al. , 1997 ; Stava et al. , 2014 ) . SpeedTree ( https://speedtree.com ) is probably the most popular software utilized , and their group has begun to consider the incorporation of data-driven methods . Amongst the data-driven approaches , Tan et al . ( 2007 ) is most similar to ours , combining point cloud and image segmentation data to build coarse-scale details of a tree ; however , they generate fine-scale details procedurally using a self-similarity assumption and image-space growth constraints , whereas we aim to capture more accurate finer structures from the image data . Other data-driven approaches include Livny et al . ( 2010 ) , which automatically estimates the skeletal structure of trees from point cloud data , Xie et al . ( 2015 ) , which builds tree models by assembling pieces from a database of scanned tree parts , etc . Many of these specialized , data-driven approaches for trees are built upon more general techniques such as the traditional combination of structure from motion ( see e.g . Wu ( 2013 ) ) and multi-view stereo ( see e.g . Furukawa & Ponce ( 2010 ) ) . In the past , researchers studying 3D reconstruction have engineered general approaches to reconstruct fine details of small objects captured by sensors in highly controlled environments ( Seitz et al. , 2006 ) . At the other end of the spectrum , researchers have developed approaches for reconstructing building- or even city-scale objects using large amounts of image data available online ( Agarwal et al. , 2009 ) . Our goal is to obtain a 3D model of a tree with elements from both of these approaches : the scale of a large structure with the fine details of its many branches and twigs . However , unlike in general reconstruction approaches , we can not simply collect images online or capture data using a high-end camera . To address similar challenges in specialized cases , researchers take advantage of domain-specific prior knowledge . Zhou et al . ( 2008 ) uses a generalized cylinder prior ( similar to us ) for reconstructing tubular structures observed during medical procedures and illustrates that this approach performs better than simple structure from motion . The process of creating a mesh that faithfully reflects topology and subsequently refining its geometry is similar in spirit to Xu et al . ( 2018 ) , which poses a human model first via its skeleton and then by applying fine-scale deformations . Learning and Networks : So far , our use of networks is limited to segmentation tasks , where we rely on segmentation masks for semi-automated tree branch labeling . Due to difficulties in getting sharp details from convolutional networks , the study of network-based segmentation of thin structures is still an active field in itself ; there has been recent work on designing specialized multiscale architectures ( Ronneberger et al. , 2015 ; Lin et al. , 2017 ; Qu et al. , 2018 ) and also on incorporating perceptual losses ( Johnson et al. , 2016 ) during network training ( Mosinska et al. , 2018 ) . 3 RAW AND PROCESSED DATA . As a case study , we select a California oak ( quercus agrifolia ) as our subject for tree reconstruction and simulation ( see Figure 1 ) .
The mere size of this tree imposes a number of restrictions on our data capture : one has to deal with an outdoor , unconstrained environment , wind and branch motion will be an issue , it will be quite difficult to observe higher-up portions of the tree especially at close proximity , there will be an immense number of occluded regions because of the large number of branches that one can not see from any feasible viewpoint , etc . In an outdoor setting , commodity structured light sensors that use infrared light ( e.g . the Kinect ) fail to produce reliable depth maps as their projected pattern is washed out by sunlight ; thus , we opted to use standard RGB cameras . Because we want good coverage of the tree , we can not simply capture images from the ground ; instead , we mounted our cameras on a quadcopter drone that was piloted around the tree . The decision to use a drone introduces additional constraints : the cameras must be lightweight , the camera locations can not be known a priori , the drone creates its own air currents which can affect the tree ’ s motion , etc . Balancing the weight constraint with the benefits of using cameras with a global shutter and minimal distortion , we mounted a pair of Sony rx100 v cameras to a DJI Matrice 100 drone . We calibrated the stereo offset between the cameras before flight , and during flight each camera records a video with 4K resolution at 30 fps . Data captured in this manner is subject to a number of limitations . Compression artifacts in the recorded videos may make features harder to track than when captured in a RAW format . Because the drone must keep a safe distance from the tree , complete 360◦ coverage of a given branch is often infeasible . This lack of coverage is compounded by occlusions caused by other branches and leaves ( in seasons when the latter are present ) . Furthermore , the fact that the tree may be swaying slightly in the wind even on a calm day violates the rigidity assumption upon which many multi-view reconstruction algorithms rely . Since we know from the data collection phase that our data coverage will be incomplete , we will need to rely on procedural generation , inpainting , “ hallucinating ” structure , etc . in order to complete the model . After capturing the raw data , we augment it to begin to estimate the 3D structure of the environment . We subsample the videos at a sparse 1 or 2 fps and use the Agisoft PhotoScan tool ( http://www.agisoft.com/ ) to run structure from motion and multi-view stereo on those images , yielding a set of estimated camera frames and a dense point cloud . We align cameras and point clouds from separate structure from motion problems by performing a rigid fit on a sparse set of control points . This is a standard workflow also supported by open-source tools ( Wu , 2011 ; Schönberger & Frahm , 2016 ; Moulon et al. , 2016 ) . Some cameras may be poorly aligned ( or in some cases , so severely incorrect that they require manual correction ) . Once the cameras are relatively close , one can utilize an inverse rendering approach like that of Loper & Black ( 2014 ) , adjusting the misaligned cameras ’ parameters relative to the point cloud . In the case of more severely misaligned cameras , one may select correspondences between 3D points and points in the misaligned image and then find the camera ’ s extrinsics by solving a perspective-n-point problem ( Fischler & Bolles , 1981 ) .
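The final perspective-n-point step is easy to reproduce with off-the-shelf tools . Below is a minimal sketch using OpenCV ; the 3D-to-2D correspondences and the intrinsics matrix are hypothetical stand-ins ( in practice the 3D points come from the reconstructed cloud , the 2D points are hand-selected in the misaligned image , and the intrinsics come from the pre-flight calibration ) .

```python
import numpy as np
import cv2

# Hypothetical 3D points from the reconstructed cloud and their manually
# selected 2D correspondences in the misaligned image (at least 4 pairs).
object_points = np.array([[0.0, 0.0, 0.0],
                          [1.2, 0.1, 0.4],
                          [0.3, 2.1, 1.0],
                          [2.0, 1.5, 0.2],
                          [0.8, 0.9, 1.7]], dtype=np.float64)
image_points = np.array([[320.0, 240.0],
                         [410.5, 252.3],
                         [350.2, 130.8],
                         [500.1, 180.6],
                         [380.4, 200.9]], dtype=np.float64)

# Intrinsics assumed known from the pre-flight calibration; distortion
# is taken to be negligible for this sketch.
K = np.array([[2000.0, 0.0, 960.0],
              [0.0, 2000.0, 540.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

# Solve for the camera extrinsics (rotation and translation), using RANSAC
# to stay robust to a few bad correspondences.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(object_points, image_points, K, dist)
R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
print("camera rotation:\n", R, "\ncamera translation:", tvec.ravel())
```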
In the supplemental appendices , we describe our approach to constructing large-scale geometry using this processed data . Recovering “ medium ” scale structures that are not captured in the point cloud , however , is a problem that lends itself well to a learning-based treatment . 4 ANNOTATION AND LEARNING . Annotating images is a challenging task for human labelers and automated methods alike . Branches and twigs heavily occlude one another , connectivity can be difficult to infer , and the path of even a relatively large branch can often not be traced visually from a single view . Thus it is desirable to augment the image data during annotation to aid human labelers . One method for aiding the labeler is to automatically extract a “ flow field ” of vectors tracking the anisotropy of the branches in image space ( see Figure 6 ) . The flow field is overlaid on the image in the annotation tool , and the labeler may select endpoints to be automatically connected using the projection-advection scheme discussed in Section 5.3 . Section 5.3 also discusses how we generate the flow field itself , after first creating a segmentation mask . Note that segmentation ( i.e . discerning tree or not tree for each pixel in the image ) is a simpler problem than annotation ( i.e . discerning medial axes , topology , and thickness in image space ) . Obtaining segmentation masks is straightforward under certain conditions , e.g . in areas where branches and twigs are clearly silhouetted against the grass or sky , but segmentation can be difficult in visually dense regions of an image . Thus , we explore deep learning-based approaches for performing semantic segmentation on images from our dataset . In particular , we use UNet ( Ronneberger et al. , 2015 ) , a state-of-the-art fully convolutional architecture for segmentation ; the strength of this model lies in its many residual connections , which give the model the capacity to retain sharp edges despite its hourglass structure . See Section 5.2 for further discussion .
This paper tackles the problem of geometrical and topological 3D reconstruction of a (botanical) tree using a drone-mounted stereo vision system and deep learning-based/aided tree branch image annotation procedures. This is an interesting computer vision 3D reconstruction task, which has important practical applications (e.g., in AR/VR or for plant phenomics study), yet has not been extensively researched in the past. Part of the reason is the set of unique challenges that tree reconstruction faces, in particular, how to accurately recover structure under the complex visual occlusions caused by dense tree branches and leaves, and how to ensure the reconstructed topology is accurate.
SP:825132782872f2167abd5e45773bfdef83e4bb2e
Target Training: Tricking Adversarial Attacks to Fail
1 INTRODUCTION . Neural network classifiers are vulnerable to malicious adversarial samples that appear indistinguishable from original samples ( Szegedy et al. , 2013 ) ; for example , an adversarial attack can make a traffic stop sign appear like a speed limit sign ( Eykholt et al. , 2018 ) to a classifier . An adversarial sample created using one classifier can also fool other classifiers ( Szegedy et al. , 2013 ; Biggio et al. , 2013 ) , even ones with different structure and parameters ( Szegedy et al. , 2013 ; Goodfellow et al. , 2014 ; Papernot et al. , 2016b ; Tramèr et al. , 2017b ) . This transferability of adversarial attacks ( Papernot et al. , 2016b ) matters because it means that classifier access is not necessary for attacks . The increasing deployment of neural network classifiers in security- and safety-critical domains such as traffic ( Eykholt et al. , 2018 ) , autonomous driving ( Amodei et al. , 2016 ) , healthcare ( Faust et al. , 2018 ) , and malware detection ( Cui et al. , 2018 ) makes countering adversarial attacks important . Gradient-based attacks use the classifier gradient to generate adversarial samples from nonadversarial samples . Gradient-based attacks simultaneously minimize the classifier adversarial loss and the perturbation ( Szegedy et al. , 2013 ) , though attacks can relax this minimization to allow for bigger perturbations , for example in the Carlini & Wagner attack ( CW ) ( Carlini & Wagner , 2017c ) for κ > 0 , in the Projected Gradient Descent attack ( PGD ) ( Kurakin et al. , 2016 ; Madry et al. , 2017 ) , and in the Fast Gradient Sign Method ( FGSM ) ( Goodfellow et al. , 2014 ) . Other gradient-based adversarial attacks include DeepFool ( Moosavi-Dezfooli et al. , 2016 ) , Zeroth Order Optimization ( ZOO ) ( Chen et al. , 2017 ) , and the Universal Adversarial Perturbation ( UAP ) ( Moosavi-Dezfooli et al. , 2017 ) . Many recently proposed defenses have been broken ( Athalye et al. , 2018 ; Carlini & Wagner , 2016 ; 2017a ; b ; Tramer et al. , 2020 ) . They fall largely into these categories : ( 1 ) adversarial sample detection , ( 2 ) gradient masking and obfuscation , ( 3 ) ensemble , ( 4 ) customized loss . Detection defenses ( Meng & Chen , 2017 ; Ma et al. , 2018 ; Li et al. , 2019 ; Hu et al. , 2019 ) aim to detect , correct , or reject adversarial samples . Many detection defenses have been broken ( Carlini & Wagner , 2017b ; a ; Tramer et al. , 2020 ) . Gradient obfuscation is aimed at preventing gradient-based attacks from access to the gradient and can be achieved by shattering gradients ( Guo et al. , 2018 ; Verma & Swami , 2019 ; Sen et al. , 2020 ) , randomness ( Dhillon et al. , 2018 ; Li et al. , 2019 ) , or vanishing or exploding gradients ( Papernot et al. , 2016a ; Song et al. , 2018 ; Samangouei et al. , 2018 ) . Many gradient obfuscation methods have also been successfully defeated ( Carlini & Wagner , 2016 ; Athalye et al. , 2018 ; Tramer et al. , 2020 ) . Ensemble defenses ( Tramèr et al. , 2017a ; Verma & Swami , 2019 ; Pang et al. , 2019 ; Sen et al. , 2020 ) have also been broken ( Carlini & Wagner , 2016 ; Tramer et al. , 2020 ) , unable to even outperform their best-performing component . Customized attack losses defeat defenses ( Tramer et al. , 2020 ) with customized losses ( Pang et al. , 2020 ; Verma & Swami , 2019 ) but also , for example , ensembles ( Sen et al. , 2020 ) . Even though it has not been defeated , Adversarial Training ( Kurakin et al. , 2016 ; Szegedy et al. , 2013 ; Madry et al.
, 2017 ) assumes that the attack is known in advance and takes time to generate adversarial samples at every iteration . The inability of recent defenses to counter adversarial attacks calls for new kinds of defensive approaches . In this paper , we make the following major contributions : • We develop Target Training - a novel , white-box adversarial defense that converts untargeted gradient-based attacks into attacks targeted at designated target classes , from which correct classes are derived . Target Training is based on the minimization at the core of untargeted gradient-based adversarial attacks . • For all attacks that minimize perturbation , we eliminate the need to know the attack or to generate adversarial samples during training . • We show that Target Training withstands non-L∞ adversarial attacks without resorting to increased network capacity . With a default accuracy of 84.3 % on CIFAR10 , Target Training achieves 86.6 % against the DeepFool attack , and 83.2 % against the CW-L2 ( κ=0 ) attack without using adversarial samples and against an adaptive attack aware of our defense . Against an adaptive CW-L2 ( κ=40 ) attack , we achieve 75.6 % while using adversarial samples . Our choice of low-capacity classifiers makes Target Training not withstand L∞ adaptive attacks , except for CW-L∞ ( κ=0 ) in MNIST . • We conclude that Adversarial Training might not be defending by populating sparse areas with samples , but by minimizing the same minimization that Target Training minimizes . 2 BACKGROUND AND RELATED WORK . Here , we present the state of the art in adversarial attacks and defenses , as well as a summary . Notation A k-class neural network classifier that has $\theta$ parameters is denoted by a function $f(x)$ that takes input $x \in \mathbb{R}^d$ and outputs $y \in \mathbb{R}^k$ , where $d$ is the dimensionality and $k$ is the number of classes . An adversarial sample is denoted by $x_{adv}$ . Classifier output is $y$ , and $y_i$ is the probability that the input belongs to class $i$ . Norms are denoted as $L_0$ , $L_2$ , and $L_\infty$ . 2.1 ADVERSARIAL ATTACKS . Szegedy et al . ( 2013 ) were the first to formulate the generation of adversarial samples as a constrained minimization of the perturbation under an $L_p$ norm . Because this formulation can be hard to solve , Szegedy et al . ( 2013 ) reformulated the problem as a gradient-based , two-term minimization of the weighted sum of perturbation and classifier loss . For untargeted attacks , this minimization is : $$\text{minimize } c \cdot \lVert x_{adv} - x \rVert_2^2 + \mathrm{loss}_f(x_{adv}) \quad \text{subject to } x_{adv} \in [0 , 1]^n \quad \text{( Minimization 1 )}$$ where $f$ is the classifier , $\mathrm{loss}_f$ is the classifier loss on adversarial input , and $c$ is a constant value evaluated in the optimization . Term ( 1 ) is a norm that ensures a small adversarial perturbation . Term ( 2 ) utilizes the classifier gradient to find adversarial samples that minimize the classifier adversarial loss . Minimization 1 is the foundation for many gradient-based attacks , though many tweaks can and have been applied . Some attacks follow Minimization 1 implicitly ( Moosavi-Dezfooli et al. , 2016 ) , and others explicitly ( Carlini & Wagner , 2017c ) . The type of $L_p$ norm in term ( 1 ) of the minimization also varies . For example , the CW attack ( Carlini & Wagner , 2017c ) uses $L_0$ , $L_2$ , and $L_\infty$ , whereas DeepFool ( Moosavi-Dezfooli et al. , 2016 ) uses the $L_2$ norm . A special perturbation case is the Pixel attack by Su et al . ( 2019 ) , which changes exactly one pixel . Some attacks even exclude term ( 1 ) from Minimization 1 and introduce an external parameter to control perturbation .
The FGSM attack by Goodfellow et al . ( 2014 ) , for example , uses an $\epsilon$ parameter , while the CW attack ( Carlini & Wagner , 2017c ) uses a κ confidence parameter . The Fast Gradient Sign Method by Goodfellow et al . ( 2014 ) is a simple , $L_\infty$-bounded attack that constructs adversarial samples by perturbing each input dimension in the direction of the gradient by a magnitude of $\epsilon$ : $x_{adv} = x + \epsilon \cdot \mathrm{sign}(\nabla_x \mathrm{loss}(\theta , x , y))$ . The current strongest attack is CW ( Carlini & Wagner , 2017c ) . CW customizes Minimization 1 by passing $c$ to the second term , and using it to tune the relative importance of the terms . With a further change of variable , CW obtains an unconstrained minimization problem that allows it to optimize directly through back-propagation . In addition , CW has a κ parameter for controlling the confidence of the adversarial samples . For κ > 0 and up to 100 , the CW attack allows for more perturbation in the adversarial samples it generates . The DeepFool attack by Moosavi-Dezfooli et al . ( 2016 ) follows Minimization 1 implicitly . DeepFool ( Moosavi-Dezfooli et al. , 2016 ) looks at the smallest distance of a point from the classifier decision boundary as the minimum amount of perturbation needed to change its classification . DeepFool approximates the classifier with a linear one , estimates the distance from the linear boundary , and then takes steps in the direction of the closest boundary until an adversarial sample is found . Black-box attacks Black-box attacks assume no access to classifier gradients . Such attacks with access to output class probabilities are called score-based attacks , for example the ZOO attack ( Chen et al. , 2017 ) , a black-box variant of the CW attack ( Carlini & Wagner , 2017c ) . Attacks with access to only the final class label are decision-based attacks , for example the Boundary ( Brendel et al. , 2017 ) and the HopSkipJumpAttack ( Chen et al. , 2019 ) attacks . Multi-step attacks The PGD attack ( Kurakin et al. , 2016 ) is an iterative method with an α parameter that determines the step-size perturbation magnitude . PGD starts at a random point $x^{(0)}$ and projects the perturbation onto an $L_p$-ball $B$ at each iteration : $x^{(j+1)} = \mathrm{Proj}_B\big(x^{(j)} + \alpha \cdot \mathrm{sign}(\nabla_x \mathrm{loss}(\theta , x^{(j)} , y))\big)$ . The BIM attack ( Kurakin et al. , 2016 ) applies FGSM ( Goodfellow et al. , 2014 ) iteratively with an α step . To find a universal perturbation , UAP ( Moosavi-Dezfooli et al. , 2017 ) iterates over the images and aggregates perturbations calculated as in DeepFool . 2.2 ADVERSARIAL DEFENSES . Adversarial Training . Adversarial Training ( Szegedy et al. , 2013 ; Kurakin et al. , 2016 ; Madry et al. , 2017 ) is one of the first and few undefeated defenses . It defends by populating low-probability , so-called blind spots ( Szegedy et al. , 2013 ; Goodfellow et al. , 2014 ) with adversarial samples labelled correctly , redrawing boundaries . The drawback of Adversarial Training is that it needs to know the attack in advance , and it needs to generate adversarial samples during training . The Adversarial Training algorithm 2 in the Appendix is based on Kurakin et al . ( 2016 ) . Madry et al . ( 2017 ) formulate their defense as a robust optimization problem , and use adversarial samples to augment the training . Their solution , however , necessitates high-capacity classifiers - bigger models with more parameters . Detection defenses Such defenses detect adversarial samples implicitly or explicitly , then correct or reject them .
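As a reference point for the two update rules above , here is a minimal PyTorch sketch of FGSM and $L_\infty$ PGD . The $\epsilon$ , α , and step-count values are illustrative , and clamping to [ 0 , 1 ] assumes image-valued inputs .

```python
import torch

def fgsm(model, loss_fn, x, y, eps=0.03):
    """One-step FGSM: move each input dimension by eps in the gradient-sign
    direction, then clamp back to the valid pixel range."""
    x = x.clone().detach().requires_grad_(True)
    grad = torch.autograd.grad(loss_fn(model(x), y), x)[0]
    return (x + eps * grad.sign()).clamp(0.0, 1.0).detach()

def pgd(model, loss_fn, x, y, eps=0.03, alpha=0.01, steps=40):
    """PGD: start at a random point in the eps-ball, take alpha-sized
    gradient-sign steps, and project back onto the L-infinity ball."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad = torch.autograd.grad(loss_fn(model(x_adv), y), x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # projection step
            x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()
```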
So far , many detection defenses have been defeated . For example , ten diverse detection methods ( based on a secondary network , PCA , or statistical properties ) were defeated by attack-loss customization ( Carlini & Wagner , 2017a ) ; Tramer et al . ( 2020 ) used attack customization against ( Hu et al. , 2019 ) ; attack transferability ( Carlini & Wagner , 2017b ) was used against MagNet by Meng & Chen ( 2017 ) ; and deep feature adversaries ( Sabour et al. , 2016 ) were used against ( Roth et al. , 2019 ) . Gradient masking and obfuscation Many defenses that mask or obfuscate the classifier gradient have been defeated ( Carlini & Wagner , 2016 ; Athalye et al. , 2018 ) . Athalye et al . ( 2018 ) identify three types of gradient obfuscation : ( 1 ) Shattered gradients - incorrect gradients caused by nondifferentiable components or numerical instability , for example with multiple input transformations by Guo et al . ( 2018 ) . Athalye et al . ( 2018 ) counter such defenses with Backward Pass Differentiable Approximation . ( 2 ) Stochastic gradients in randomized defenses , which are overcome with Expectation Over Transformation ( Athalye et al. , 2017 ) . Examples are Stochastic Activation Pruning ( Dhillon et al. , 2018 ) , which drops layer neurons based on a weighted distribution , and ( Xie et al. , 2018 ) , which adds a randomized layer to the classifier input . ( 3 ) Vanishing or exploding gradients , used , for example , in Defensive Distillation ( DD ) ( Papernot et al. , 2016a ) , which reduces the amplitude of gradients of the loss function . Other examples are PixelDefend ( Song et al. , 2018 ) and Defense-GAN ( Samangouei et al. , 2018 ) . Vanishing or exploding gradients are broken with parameters that avoid vanishing or exploding gradients ( Carlini & Wagner , 2016 ) . Complex defenses Defenses combining several approaches , for example ( Li et al. , 2019 ) , which uses detection , randomization , multiple models , and losses , can be defeated by focusing on the main defense components ( Tramer et al. , 2020 ) . ( Verma & Swami , 2019 ; Pang et al. , 2019 ; Sen et al. , 2020 ) are defeated ensemble defenses combined with numerical instability ( Verma & Swami , 2019 ) , regularization ( Pang et al. , 2019 ) , or mixed precision on weights and activations ( Sen et al. , 2020 ) .
This paper addresses the task of adversarial defense, particularly against untargeted attacks. It starts from the observation that these attacks mostly minimize the perturbation and the classification loss, and proposes a new training strategy named Target Training. The method duplicates training examples with special ground-truth labels to fool adversarial attackers. Experiments are conducted on MNIST and CIFAR10 under several attacks.
SP:8e3a07ed19e7b0c677aae1106da801d246f5aa0c
Characterizing signal propagation to close the performance gap in unnormalized ResNets
Batch Normalization is a key component in almost all state-of-the-art image classifiers , but it also introduces practical challenges : it breaks the independence between training examples within a batch , can incur compute and memory overhead , and often results in unexpected bugs . Building on recent theoretical analyses of deep ResNets at initialization , we propose a simple set of analysis tools to characterize signal propagation on the forward pass , and leverage these tools to design highly performant ResNets without activation normalization layers . Crucial to our success is an adapted version of the recently proposed Weight Standardization . Our analysis tools show how this technique preserves the signal in networks with ReLU or Swish activation functions by ensuring that the per-channel activation means do not grow with depth . Across a range of FLOP budgets , our networks attain performance competitive with the state-of-the-art EfficientNets on ImageNet . Our code is available at http://dpmd.ai/nfnets . 1 INTRODUCTION . BatchNorm has become a core computational primitive in deep learning ( Ioffe & Szegedy , 2015 ) , and it is used in almost all state-of-the-art image classifiers ( Tan & Le , 2019 ; Wei et al. , 2020 ) . A number of different benefits of BatchNorm have been identified . It smooths the loss landscape ( Santurkar et al. , 2018 ) , which allows training with larger learning rates ( Bjorck et al. , 2018 ) , and the noise arising from the minibatch estimates of the batch statistics introduces implicit regularization ( Luo et al. , 2019 ) . Crucially , recent theoretical work ( Balduzzi et al. , 2017 ; De & Smith , 2020 ) has demonstrated that BatchNorm ensures good signal propagation at initialization in deep residual networks with identity skip connections ( He et al. , 2016b ; a ) , and this benefit has enabled practitioners to train deep ResNets with hundreds or even thousands of layers ( Zhang et al. , 2019 ) . However , BatchNorm also has many disadvantages . Its behavior is strongly dependent on the batch size , performing poorly when the per-device batch size is too small or too large ( Hoffer et al. , 2017 ) , and it introduces a discrepancy between the behaviour of the model during training and at inference time . BatchNorm also adds memory overhead ( Rota Bulò et al. , 2018 ) and is a common source of implementation errors ( Pham et al. , 2019 ) . In addition , it is often difficult to replicate batch-normalized models trained on different hardware . A number of alternative normalization layers have been proposed ( Ba et al. , 2016 ; Wu & He , 2018 ) , but typically these alternatives generalize poorly or introduce their own drawbacks , such as added compute costs at inference . Another line of work has sought to eliminate layers which normalize hidden activations entirely . A common trend is to initialize residual branches to output zeros ( Goyal et al. , 2017 ; Zhang et al. , 2019 ; De & Smith , 2020 ; Bachlechner et al. , 2020 ) , which ensures that the signal is dominated by the skip path early in training . However , while this strategy enables us to train deep ResNets with thousands of layers , it still degrades generalization when compared to well-tuned baselines ( De & Smith , 2020 ) . These simple initialization strategies are also not applicable to more complicated architectures like EfficientNets ( Tan & Le , 2019 ) , the current state of the art on ImageNet ( Russakovsky et al. , 2015 ) .
This work seeks to establish a general recipe for training deep ResNets without normalization layers , which achieve test accuracy competitive with the state of the art . Our contributions are as follows : • We introduce Signal Propagation Plots ( SPPs ) : a simple set of visualizations which help us inspect signal propagation at initialization on the forward pass in deep residual networks . Leveraging these SPPs , we show how to design unnormalized ResNets which are constrained to have signal propagation properties similar to batch-normalized ResNets . • We identify a key failure mode in unnormalized ResNets with ReLU or Swish activations and Gaussian weights . Because the mean output of these non-linearities is positive , the squared mean of the hidden activations on each channel grows rapidly as the network depth increases . To resolve this , we propose Scaled Weight Standardization , a minor modification of the recently proposed Weight Standardization ( Qiao et al. , 2019 ; Huang et al. , 2017b ) , which prevents the growth in the mean signal , leading to a substantial boost in performance . • We apply our normalization-free network structure in conjunction with Scaled Weight Standardization to ResNets on ImageNet , where we for the first time attain performance which is comparable to or better than batch-normalized ResNets on networks as deep as 288 layers . • Finally , we apply our normalization-free approach to the RegNet architecture ( Radosavovic et al. , 2020 ) . By combining this architecture with the compound scaling strategy proposed by Tan & Le ( 2019 ) , we develop a class of models without normalization layers which are competitive with the current ImageNet state of the art across a range of FLOP budgets . 2 BACKGROUND . Deep ResNets at initialization : The combination of BatchNorm ( Ioffe & Szegedy , 2015 ) and skip connections ( Srivastava et al. , 2015 ; He et al. , 2016a ) has allowed practitioners to train deep ResNets with hundreds or thousands of layers . To understand this effect , a number of papers have analyzed signal propagation in normalized ResNets at initialization ( Balduzzi et al. , 2017 ; Yang et al. , 2019 ) . In a recent work , De & Smith ( 2020 ) showed that in normalized ResNets with Gaussian initialization , the activations on the $\ell$-th residual branch are suppressed by a factor of $O(\sqrt{\ell})$ , relative to the scale of the activations on the skip path . This biases the residual blocks in deep ResNets towards the identity function at initialization , ensuring well-behaved gradients . In unnormalized networks , one can preserve this benefit by introducing a learnable scalar at the end of each residual branch , initialized to zero ( Zhang et al. , 2019 ; De & Smith , 2020 ; Bachlechner et al. , 2020 ) . This simple change is sufficient to train deep ResNets with thousands of layers without normalization . However , while this method is easy to implement and achieves excellent convergence on the training set , it still achieves lower test accuracies than normalized networks when compared to well-tuned baselines . These insights from studies of batch-normalized ResNets are also supported by theoretical analyses of unnormalized networks ( Taki , 2017 ; Yang & Schoenholz , 2017 ; Hanin & Rolnick , 2018 ; Qi et al. , 2020 ) . These works suggest that , in ResNets with identity skip connections , if the signal does not explode on the forward pass , the gradients will neither explode nor vanish on the backward pass .
Hanin & Rolnick ( 2018 ) conclude that multiplying the hidden activations on the residual branch by a factor of $O(1/d)$ or less , where $d$ denotes the network depth , is sufficient to ensure trainability at initialization . Alternate normalizers : To counteract the limitations of BatchNorm in different situations , a range of alternative normalization schemes have been proposed , each operating on different components of the hidden activations . These include LayerNorm ( Ba et al. , 2016 ) , InstanceNorm ( Ulyanov et al. , 2016 ) , GroupNorm ( Wu & He , 2018 ) , and many more ( Huang et al. , 2020 ) . While these alternatives remove the dependency on the batch size and typically work better than BatchNorm for very small batch sizes , they also introduce limitations of their own , such as additional computational costs at inference time . Furthermore , for image classification , these alternatives still tend to achieve lower test accuracies than well-tuned BatchNorm baselines . As one exception , we note that the combination of GroupNorm with Weight Standardization ( Qiao et al. , 2019 ) was recently identified as a promising alternative to BatchNorm in ResNet-50 ( Kolesnikov et al. , 2019 ) . 3 SIGNAL PROPAGATION PLOTS . Although papers have recently theoretically analyzed signal propagation in ResNets ( see Section 2 ) , practitioners rarely empirically evaluate the scales of the hidden activations at different depths inside a specific deep network when designing new models or proposing modifications to existing architectures . By contrast , we have found that plotting the statistics of the hidden activations at different points inside a network , when conditioned on a batch of either random Gaussian inputs or real training examples , can be extremely beneficial . This practice both enables us to immediately detect hidden bugs in our implementation before launching an expensive training run destined to fail , and also allows us to identify surprising phenomena which might be challenging to derive from scratch . We therefore propose to formalize this good practice by introducing Signal Propagation Plots ( SPPs ) , a simple graphical method for visualizing signal propagation on the forward pass in deep ResNets . We assume identity residual blocks of the form $x_{\ell+1} = f_\ell(x_\ell) + x_\ell$ , where $x_\ell$ denotes the input to the $\ell$-th block and $f_\ell$ denotes the function computed by the $\ell$-th residual branch . We consider 4-dimensional input and output tensors with dimensions denoted by NHWC , where N denotes the batch dimension , C denotes the channels , and H and W denote the two spatial dimensions . To generate SPPs , we initialize a single set of weights according to the network initialization scheme , and then provide the network with a batch of input examples sampled from a unit Gaussian distribution . Then , we plot the following hidden activation statistics at the output of each residual block : • Average Channel Squared Mean , computed as the square of the mean across the NHW axes , and then averaged across the C axis . In a network with good signal propagation , we would expect the mean activations on each channel , averaged across a batch of examples , to be close to zero . Importantly , we note that it is necessary to measure the averaged squared value of the mean , since the means of different channels may have opposite signs . • Average Channel Variance , computed by taking the channel variance across the NHW axes , and then averaging across the C axis .
We generally find this to be the most informative measure of the signal magnitude , and to clearly show signal explosion or attenuation . • Average Channel Variance on the end of the residual branch , before merging with the skip path . This helps assess whether the layers on the residual branch are correctly initialized . We explore several other possible choices of statistics one could measure in Appendix G , but we have found these three to be the most informative ( see the sketch after this section for how they can be computed ) . We also experiment with feeding the network real data samples instead of random noise , but find that this step does not meaningfully affect the key trends . We emphasize that SPPs do not capture every property of signal propagation , and they only consider the statistics of the forward pass . Despite this simplicity , SPPs are surprisingly useful for analyzing deep ResNets in practice . We speculate that this may be because in ResNets , as discussed in Section 2 ( Taki , 2017 ; Yang & Schoenholz , 2017 ; Hanin & Rolnick , 2018 ) , the backward pass will typically neither explode nor vanish so long as the signal on the forward pass is well behaved . As an example , in Figure 1 we present the SPP for a 600-layer pre-activation ResNet ( He et al. , 2016a ) with BatchNorm , ReLU activations , and He initialization ( He et al. , 2015 ) ; see Appendix E for an overview of ResNet blocks and their order of operations . We compare the standard BN-ReLU-Conv ordering to the less common ReLU-BN-Conv ordering . Immediately , several key patterns emerge . First , we note that the Average Channel Variance grows linearly with the depth in a given stage , and resets at each transition block to a fixed value close to 1 . The linear growth arises because , at initialization , the variance of the activations satisfies $\mathrm{Var}(x_{\ell+1}) = \mathrm{Var}(x_\ell) + \mathrm{Var}(f_\ell(x_\ell))$ , while BatchNorm ensures that the variance of the activations at the end of each residual branch is independent of depth ( De & Smith , 2020 ) . The variance is reset at each transition block because in these blocks the skip connection is replaced by a convolution operating on a normalized input , undoing any signal growth on the skip path in the preceding blocks . With the BN-ReLU-Conv ordering , the Average Squared Channel Means display similar behavior , growing linearly with depth between transition blocks . This may seem surprising , since we expect BatchNorm to center the activations . However , with this ordering the final convolution on a residual branch receives a rectified input with positive mean . As we show in the following section , this causes the outputs of the branch on any single channel to also have non-zero mean , and explains why $\mathrm{Var}(f_\ell(x_\ell)) \approx 0.68$ for all depths $\ell$ . Although this “ mean-shift ” is explicitly counteracted by the normalization layers in subsequent residual branches , it will have serious consequences when attempting to remove normalization layers , as discussed below . In contrast , the ReLU-BN-Conv ordering trains equally stably while avoiding this mean-shift issue , with $\mathrm{Var}(f_\ell(x_\ell)) \approx 1$ for all $\ell$ .
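As a rough illustration of how these statistics can be gathered in practice , the sketch below applies a sequence of residual blocks to a batch of random Gaussian inputs and records the first two channel statistics defined above after each block . The NCHW tensor layout ( PyTorch 's default ) and the module interface are assumptions made for the sketch ; the paper describes tensors in NHWC .

```python
import torch

@torch.no_grad()
def spp_stats(blocks, x):
    """Collect SPP statistics after each residual block for one forward pass.

    `blocks` is an iterable of residual-block modules applied in sequence;
    tensors are assumed NCHW, so channel statistics reduce over dims (0, 2, 3).
    """
    sq_means, variances = [], []
    for block in blocks:
        x = block(x)
        # Average Channel Squared Mean: per-channel mean over N, H, W,
        # squared, then averaged over channels.
        mu = x.mean(dim=(0, 2, 3))
        sq_means.append((mu ** 2).mean().item())
        # Average Channel Variance: per-channel variance over N, H, W,
        # averaged over channels.
        variances.append(x.var(dim=(0, 2, 3)).mean().item())
    return sq_means, variances

# Usage with random Gaussian inputs, as in the paper:
# sq, var = spp_stats(model.residual_blocks, torch.randn(64, 3, 224, 224))
```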
This paper proposes the Signal Propagation Plot (SPP), a tool for analyzing residual networks, and uses it to analyze ResNets with and without BN. Based on this investigation, the authors first provide results for ResNets without normalization using the proposed Scaled Weight Standardization. Furthermore, the authors provide a family of models, based on RegNetY-400MF, that are competitive with EfficientNets and appear to be highly tuned in terms of architecture design.
SP:8e2ac7405015f9d2d59c4a511df83d796ac00a9e
Geometry-aware Instance-reweighted Adversarial Training
1 INTRODUCTION . Crafted adversarial data can easily fool standard-trained deep models by adding human-imperceptible noise to the natural data , which leads to security issues in applications such as medicine , finance , and autonomous driving ( Szegedy et al. , 2014 ; Nguyen et al. , 2015 ) . To mitigate this issue , many adversarial training methods employ the most adversarial data , i.e. , the data maximizing the loss , for updating the current model , such as standard adversarial training ( AT ) ( Madry et al. , 2018 ) , TRADES ( Zhang et al. , 2019 ) , robust self-training ( RST ) ( Carmon et al. , 2019 ) , and MART ( Wang et al. , 2020b ) . The adversarial training methods seek to train an adversarially robust deep model whose predictions are locally invariant to a small neighborhood of its inputs ( Papernot et al. , 2016 ) . By leveraging adversarial data to smooth the small neighborhoods , the adversarial training methods acquire adversarial robustness against adversarial data but often lead to an undesirable degradation of standard accuracy on natural data ( Madry et al. , 2018 ; Zhang et al. , 2019 ) . Thus , there have been debates on whether there exists a trade-off between robustness and accuracy . For example , some argued for an inevitable trade-off : Tsipras et al . ( 2019 ) showed fundamentally different representations learned by a standard-trained model and an adversarially-trained model ; Zhang et al . ( 2019 ) and Wang et al . ( 2020a ) proposed adversarial training methods that can trade off standard accuracy for adversarial robustness . On the other hand , some argued that there is no such trade-off : Raghunathan et al . ( 2020 ) showed infinite data could eliminate this trade-off ; Yang et al . ( 2020 ) showed benchmark image datasets are class-separated . Recently , emerging adversarial training methods have empirically challenged this trade-off . For example , Zhang et al . ( 2020b ) proposed the friendly adversarial training method ( FAT ) , employing friendly adversarial data that minimize the loss given that some wrongly-predicted adversarial data have been found . Yang et al . ( 2020 ) introduced dropout ( Srivastava et al. , 2014 ) into the existing AT , RST , and TRADES methods . Both methods can improve the accuracy while maintaining the robustness . However , the other direction , i.e. , whether we can improve the robustness while keeping the accuracy , remains unsolved and is more interesting . In this paper , we show this direction is also achievable . Firstly , we show that over-parameterized deep networks may still have insufficient model capacity , because adversarial training has an overwhelming smoothing effect . Fitting adversarial data demands a tremendous model capacity : it requires a large number of trainable parameters or long-enough training epochs to reach near-zero error on the adversarial training data ( see Figure 2 ) . The over-parameterized models that fit natural data entirely in standard training ( Zhang et al. , 2017 ) are still far from sufficient for fitting adversarial data . Compared with standard training , which fits the natural data points , adversarial training smooths the neighborhoods of natural data , so that adversarial data consume significantly more model capacity than natural data . Thus , adversarial training methods should carefully utilize the limited model capacity to fit the neighborhoods of the important data that aid in fine-tuning the decision boundary . Therefore , it may be unwise to give equal weights to all adversarial data .
Secondly , data along with their adversarial variants are not equally important . Some data are geometrically far away from the class boundary . They are relatively guarded . Their adversarial variants are hard to misclassify . On the other hand , some data are close to the class boundary . They are relatively attackable . Their adversarial variants are easily misclassified ( see Figure 3 ) . As the adversarial training progresses , the adversarially robust model engenders an increasing number of guarded training data and a decreasing number of attackable training data . Given limited model capacity , treating all data equally may cause the vast number of adversarial variants of the guarded data to overwhelm the model , leading to undesirable robust overfitting ( Rice et al. , 2020 ) . Thus , it may be pessimistic to treat all data equally in adversarial training . To ameliorate this pessimism , we propose a heuristic method , i.e. , geometry-aware instance-reweighted adversarial training ( GAIRAT ) . As shown in Figure 1 , GAIRAT treats data differently . Specifically , for updating the current model , GAIRAT gives a larger/smaller weight to the loss of an adversarial variant of an attackable/guarded data point , which is more/less important in fine-tuning the decision boundary . An attackable/guarded data point has a small/large geometric distance , i.e. , its distance from the decision boundary . We approximate its geometric distance by the least number of iterations κ that the projected gradient descent method ( Madry et al. , 2018 ) requires to generate a misclassified adversarial variant ( see the details in Section 3.3 ) . GAIRAT explicitly assigns an instance-dependent weight to the loss of its adversarial variant based on the least iteration number κ . Our contributions are as follows . ( a ) In adversarial training , we identify the pessimism in treating all data equally , which is due to the insufficient model capacity and the unequal nature of different data ( in Section 3.1 ) . ( b ) We propose a new adversarial training method , i.e. , GAIRAT ( its learning objective in Section 3.2 and its realization in Section 3.3 ) . GAIRAT is a general method : besides standard AT ( Madry et al. , 2018 ) , existing adversarial training methods such as FAT ( Zhang et al. , 2020b ) and TRADES ( Zhang et al. , 2019 ) can be modified to GAIR-FAT and GAIR-TRADES ( in Appendices B.1 and B.2 , respectively ) . ( c ) Empirically , our GAIRAT can relieve the issue of robust overfitting ( Rice et al. , 2020 ) , meanwhile leading to improved robustness with zero or little degradation of accuracy ( in Section 4.1 and Appendix C.1 ) . Besides , we use Wide ResNets ( Zagoruyko & Komodakis , 2016 ) to corroborate the efficacy of our geometry-aware instance-reweighted methods : our GAIRAT significantly boosts the robustness of standard AT ; combined with FAT , our GAIR-FAT improves both the robustness and accuracy of standard AT ( in Section 4.2 ) . Consequently , we conjecture there is no inevitable trade-off between robustness and accuracy . 2 ADVERSARIAL TRAINING . In this section , we review adversarial training methods ( Madry et al. , 2018 ; Zhang et al. , 2020b ) . 2.1 LEARNING OBJECTIVE . Let $(\mathcal{X} , d_\infty)$ denote the input feature space $\mathcal{X}$ with the infinity distance metric $d_\infty(x , x') = \lVert x - x' \rVert_\infty$ , and let $\mathcal{B}_\epsilon[x] = \{ x' \in \mathcal{X} \mid d_\infty(x , x') \le \epsilon \}$ be the closed ball of radius $\epsilon > 0$ centered at $x$ in $\mathcal{X}$ . The dataset is $S = \{ (x_i , y_i) \}_{i=1}^{n}$ , where $x_i \in \mathcal{X}$ and $y_i \in \mathcal{Y} = \{ 0 , 1 , \dots , C - 1 \}$ .
The objective function of standard adversarial training ( AT ) ( Madry et al. , 2018 ) is $$\min_{f_\theta \in \mathcal{F}} \frac{1}{n} \sum_{i=1}^{n} \ell(f_\theta(\tilde{x}_i) , y_i) , \quad (1)$$ where $$\tilde{x}_i = \arg\max_{\tilde{x} \in \mathcal{B}_\epsilon[x_i]} \ell(f_\theta(\tilde{x}) , y_i) , \quad (2)$$ where $\tilde{x}$ is the most adversarial data within the $\epsilon$-ball centered at $x$ , $f_\theta(\cdot) : \mathcal{X} \to \mathbb{R}^C$ is a score function , and the loss function $\ell : \mathbb{R}^C \times \mathcal{Y} \to \mathbb{R}$ is a composition of a base loss $\ell_B : \Delta^{C-1} \times \mathcal{Y} \to \mathbb{R}$ ( e.g. , the cross-entropy loss ) and an inverse link function $\ell_L : \mathbb{R}^C \to \Delta^{C-1}$ ( e.g. , the soft-max activation ) , in which $\Delta^{C-1}$ is the corresponding probability simplex ; in other words , $\ell(f_\theta(\cdot) , y) = \ell_B(\ell_L(f_\theta(\cdot)) , y)$ . AT employs the most adversarial data generated according to Eq . ( 2 ) for updating the current model . The objective function of friendly adversarial training ( FAT ) ( Zhang et al. , 2020b ) is $$\tilde{x}_i = \arg\min_{\tilde{x} \in \mathcal{B}_\epsilon[x_i]} \ell(f_\theta(\tilde{x}) , y_i) \quad \text{s.t.} \quad \ell(f_\theta(\tilde{x}) , y_i) - \min_{y \in \mathcal{Y}} \ell(f_\theta(\tilde{x}) , y) \ge \rho . \quad (3)$$ Note that the outer minimization remains the same as Eq . ( 1 ) , and the operator $\arg\max$ is replaced by $\arg\min$ . $\rho$ is a margin of loss values ( i.e. , the misclassification confidence ) . The constraint of Eq . ( 3 ) firstly ensures $\tilde{x}$ is misclassified , and secondly ensures that for $\tilde{x}$ the wrong prediction is better than the desired prediction $y_i$ by at least $\rho$ in terms of the loss value . Among all such $\tilde{x}$ satisfying the constraint , Eq . ( 3 ) selects the one minimizing $\ell(f_\theta(\tilde{x}) , y_i)$ by a violation of the value $\rho$ . There are no constraints on $\tilde{x}_i$ if $\tilde{x}_i$ is correctly classified . FAT employs the friendly adversarial data generated according to Eq . ( 3 ) for updating the current model . 2.2 REALIZATIONS . AT and FAT ’ s objective functions imply the optimization of adversarially robust networks , with one step generating adversarial data and one step minimizing loss on the generated adversarial data w.r.t . the model parameters $\theta$ . The projected gradient descent method ( PGD ) ( Madry et al. , 2018 ) is the most common approximation method for searching for adversarial data . Given a starting point $x^{(0)} \in \mathcal{X}$ and step size $\alpha > 0$ , PGD works as follows : $$x^{(t+1)} = \Pi_{\mathcal{B}_\epsilon[x^{(0)}]}\big( x^{(t)} + \alpha \,\mathrm{sign}( \nabla_{x^{(t)}} \ell(f_\theta(x^{(t)}) , y) ) \big) , \quad t \in \mathbb{N} , \quad (4)$$ until a certain stopping criterion is satisfied . $\ell$ is the loss function ; $x^{(0)}$ refers to natural data or natural data perturbed by a small Gaussian or uniformly random noise ; $y$ is the corresponding label for natural data ; $x^{(t)}$ is adversarial data at step $t$ ; and $\Pi_{\mathcal{B}_\epsilon[x^{(0)}]}(\cdot)$ is the projection function that projects the adversarial data back into the $\epsilon$-ball centered at $x^{(0)}$ if necessary . There are different stopping criteria between AT and FAT . AT employs a fixed number of iterations $K$ , namely , the PGD-K algorithm ( Madry et al. , 2018 ) , which is commonly used in many adversarial training methods such as CAT ( Cai et al. , 2018 ) , DAT ( Wang et al. , 2019 ) , TRADES ( Zhang et al. , 2019 ) , and MART ( Wang et al. , 2020b ) . On the other hand , FAT employs a misclassification-aware criterion . For example , Zhang et al . ( 2020b ) proposed the early-stopped PGD-K-τ algorithm ( τ ≤ K ; K is the fixed and maximally allowed iteration number ) : once PGD-K-τ finds the current model misclassifying the adversarial data , it stops the iterations immediately ( τ = 0 ) or slides a few more steps ( τ > 0 ) . This misclassification-aware criterion is used in the emerging adversarial training methods such as MMA ( Ding et al. , 2020 ) , FAT ( Zhang et al. , 2020b ) , ATES ( Sitawarin et al .
, 2020 ) , and Customized AT ( Cheng et al. , 2020 ) . AT can enhance the robustness against adversarial data but , unfortunately , degrades the standard accuracy on the natural data significantly ( Madry et al. , 2018 ) . On the other hand , FAT has better standard accuracy with near-zero or little degradation of robustness ( Zhang et al. , 2020b ) . Nevertheless , both AT and FAT treat the generated adversarial data equally for updating the model parameters , which is not necessary and sometimes even pessimistic . In the next sections , we introduce our method GAIRAT , which is compatible with existing methods such as AT , FAT , and TRADES . Consequently , GAIRAT can significantly enhance robustness with little or even zero degradation of standard accuracy .
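To make the geometry-aware reweighting concrete , the sketch below tracks , for each example , how many PGD steps it survives before being misclassified ( an approximation of the least iteration number κ ) and then weights the per-example adversarial loss with a function that decreases in κ . The tanh-shaped weight , the λ value , and the hyperparameters are assumptions for illustration ; they follow the spirit of GAIRAT rather than reproducing its exact released implementation .

```python
import torch
import torch.nn.functional as F

def pgd_with_kappa(model, x, y, eps=8/255, alpha=2/255, K=10):
    """PGD-K that also returns kappa: how many of the K steps each example
    remained correctly classified while being attacked."""
    x_adv = x.clone().detach()
    kappa = torch.zeros(len(x), device=x.device)
    for _ in range(K):
        x_adv.requires_grad_(True)
        logits = model(x_adv)
        kappa += (logits.argmax(1) == y).float()  # still correct -> more guarded
        grad = torch.autograd.grad(F.cross_entropy(logits, y), x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # L-infinity projection
            x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach(), kappa

def gair_weight(kappa, K, lam=-1.0):
    """Weight decreasing in kappa: attackable data (small kappa) get larger
    weights; the tanh shape and lam are illustrative assumptions."""
    w = (1 + torch.tanh(lam + 5 * (1 - 2 * kappa / K))) / 2
    return w / w.sum()  # normalize the weights within the batch

def gairat_step(model, optimizer, x, y, K=10):
    x_adv, kappa = pgd_with_kappa(model, x, y, K=K)
    w = gair_weight(kappa, K)
    losses = F.cross_entropy(model(x_adv), y, reduction='none')
    loss = (w * losses).sum()
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```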
The paper focuses on sample importance in adversarial training. The authors first reveal that deep models that are over-parameterized for natural data may still have insufficient capacity for adversarial data, because the training loss is hard to drive to zero during adversarial training. They then argue that the limited capacity should be spent on the important samples, that is, we should not treat all samples as equally important. They use the distance to the decision boundary to identify important samples and propose geometry-aware instance-reweighted adversarial training. Experiments show its superiority over baselines.
SP:206600e5bfcc9ccd494b82995a7898ae81a4e0bf
Continual Lifelong Causal Effect Inference with Real World Evidence
1 INTRODUCTION . Causal effect inference is a critical research topic across many domains , such as statistics , computer science , public policy , and economics . Randomized controlled trials ( RCT ) are usually considered the gold standard for causal effect inference ; they randomly assign participants into a treatment or control group . As the RCT is conducted , the only expected difference between the treatment and control groups is the outcome variable being studied . However , in reality , randomized controlled trials are time-consuming and expensive , and thus the study can not involve many subjects , who may not be representative of the real-world population the intervention would eventually target . Nowadays , estimating causal effects from observational data has become an appealing research direction owing to the large amount of available data and low budget requirements , compared with RCT ( Yao et al. , 2020 ) . Researchers have developed various strategies for causal effect inference with observational data , such as tree-based methods ( Chipman et al. , 2010 ; Wager & Athey , 2018 ) , representation learning methods ( Johansson et al. , 2016 ; Li & Fu , 2017 ; Shalit et al. , 2017 ; Chu et al. , 2020 ) , adapted Bayesian algorithms ( Alaa & van der Schaar , 2017 ) , generative adversarial nets ( Yoon et al. , 2018 ) , variational autoencoders ( Louizos et al. , 2017 ) , and so on . Although significant advances have been made to overcome the challenges in causal effect estimation with observational data , such as missing counterfactual outcomes and selection bias between treatment and control groups , the existing methods only focus on source-specific and stationary observational data . Such learning strategies assume that all observational data are already available during the training phase and come from only one source . This assumption is unsubstantial in practice for two reasons . The first is based on the characteristics of observational data , which are incrementally available from non-stationary data distributions . For instance , the number of electronic medical records in one hospital grows every day , and the electronic medical records for one disease may come from different hospitals or even different countries . This characteristic implies that one can not have access to all observational data at one time point and from one single source . The second reason is based on the realistic consideration of accessibility . For example , when new observational data become available , if we want to refine the model previously trained on the original data , the original training data may no longer be accessible for a variety of reasons , e.g. , legacy data may be unrecorded , proprietary , too large to store , or subject to privacy constraints ( Zhang et al. , 2020 ) . This practical concern about accessibility is ubiquitous in various academic and industrial applications . In short : in the era of big data , we face new challenges in causal inference with observational data : the extensibility for incrementally available observational data , the adaptability for the extra domain adaptation problem beyond the imbalance between treatment and control groups in one source , and the accessibility for a huge amount of data . Existing causal effect inference methods , however , are unable to deal with the aforementioned new challenges , i.e. , extensibility , adaptability , and accessibility .
Although it is possible to adapt existing causal inference methods to these new challenges, such adaptations still have inevitable defects. Three straightforward adaptation strategies are as follows. (1) If we directly apply a model trained on the original data to new observational data, performance on the new task will be poor due to domain shift between data sources. (2) If we use the newly available data to re-train the previously learned model so that it adapts to changes in the data distribution, old knowledge will be completely or partially overwritten by the new, which can result in severe performance degradation on old tasks; this is the well-known catastrophic forgetting problem (McCloskey & Cohen, 1989; French, 1999). (3) To overcome catastrophic forgetting, we could store the old data, combine old and new data, and re-train the model from scratch. However, this strategy is memory-inefficient and time-consuming, and it raises practical concerns such as copyright or privacy issues when data are stored for a long time (Samet et al., 2013). Our empirical evaluations in Section 4 demonstrate that each of these three strategies, combined with existing causal effect inference methods, is deficient. To address the above issues, we propose a Continual Causal Effect Representation Learning method (CERL) for estimating causal effects from incrementally available observational data. Instead of requiring access to all previous observational data, we store only a limited subset of feature representations learned from previous data. By combining selective and balanced representation learning, feature representation distillation, and feature transformation, our method preserves the knowledge learned from previous data and updates it with new data, so that it achieves continual causal effect estimation for new data without compromising the estimation capability for previous data. To summarize, our main contributions include:
• Our work is the first to introduce the continual lifelong causal effect inference problem for incrementally available observational data, together with three corresponding evaluation criteria, i.e., extensibility, adaptability, and accessibility.
• We propose a new framework for continual lifelong causal effect inference based on deep representation learning and continual learning.
• Extensive experiments demonstrate the deficiency of existing methods when facing incrementally available observational data and our model's outstanding performance.
2 BACKGROUND AND PROBLEM STATEMENT . Suppose the observational data contain $n$ units collected from $d$ different domains, and the $d$-th dataset $D_d$ contains the $n_d$ units $\{(x, y, t) \mid x \in X, y \in Y, t \in T\}$ collected from the $d$-th domain. Let $X$ denote the observed variables, $Y$ the outcomes, and $T$ a binary treatment variable. Let $D_{1:d} = \{D_1, D_2, \ldots, D_d\}$ be the collection of $d$ datasets, separately collected from $d$ different domains. The $d$ datasets $\{D_1, D_2, \ldots, D_d\}$ share the same observed variables but, because they are collected from different domains, have different distributions over $X$, $Y$, and $T$. Each unit in the observational data received one of two treatments. Let $t_i$ denote the treatment assignment for unit $i$, $i = 1, \ldots, n$.
For binary treatments, $t_i = 1$ indicates the treatment group and $t_i = 0$ the control group. The outcome for unit $i$ under treatment $t$ is denoted $y_t^i$; that is, $y_1^i$ is the potential outcome of unit $i$ in the treatment group and $y_0^i$ is the potential outcome of unit $i$ in the control group. In observational data, only one of the potential outcomes is observed; the observed outcome is called the factual outcome and the remaining unobserved potential outcome is called the counterfactual outcome. In this paper, we follow the potential outcome framework for estimating treatment effects (Rubin, 1974; Splawa-Neyman et al., 1990). The individual treatment effect (ITE) for unit $i$ is the difference between the potential treated and control outcomes, $\mathrm{ITE}_i = y_1^i - y_0^i$. The average treatment effect (ATE) is the difference between the mean potential treated and control outcomes, $\mathrm{ATE} = \frac{1}{n}\sum_{i=1}^{n} (y_1^i - y_0^i)$. The success of the potential outcome framework rests on the following assumptions (Imbens & Rubin, 2015), which ensure that the treatment effect is identifiable. Stable Unit Treatment Value Assumption (SUTVA): the potential outcomes of any unit do not vary with the treatments assigned to other units, and, for each unit, there are no different forms or versions of each treatment level that would lead to different potential outcomes. Consistency: the potential outcome of treatment $t$ equals the observed outcome if the treatment actually received is $t$. Positivity: for any value of $x$, treatment assignment is not deterministic, i.e., $P(T = t \mid X = x) > 0$ for all $t$ and $x$. Ignorability: given covariates, treatment assignment is independent of the potential outcomes, i.e., $(y_1, y_0) \perp t \mid x$. The goal of our work is to develop a novel continual causal inference framework that, given newly available observational data $D_d$, estimates causal effects for $D_d$ as well as for the previous data $D_{1:(d-1)}$ without access to the previous training data in $D_{1:(d-1)}$.
3 THE PROPOSED FRAMEWORK . The availability of "real world evidence" is expected to facilitate the development of causal effect inference models for various academic and industrial applications. How to learn continually from observational data that arrive incrementally from non-stationary domains is a new direction in causal effect inference. Rather than focusing only on the selection bias problem, we must also comprehensively consider three aspects of the model: extensibility for incrementally available observational data, adaptability for various data sources, and accessibility for huge amounts of data. We propose the Continual Causal Effect Representation Learning method (CERL) for estimating causal effects from incrementally available observational data. Building on selective and balanced representation learning for treatment effect estimation, CERL incorporates feature representation distillation to preserve the knowledge learned from previous observational data.
In addition, to adapt the updated model to both the original and new data without access to the original data, and to address the selection bias between treatment and control groups, we propose a representation transformation function that maps part of the original feature representations into the new feature representation space and keeps the global feature representation space balanced with respect to the treatment and control groups. CERL can therefore achieve continual causal effect estimation for new data while preserving the estimation capability for previous data, without the aid of the original data. 3.1 MODEL ARCHITECTURE . To estimate effects from incrementally available observational data, the CERL framework is composed of two main components: (1) a baseline causal effect learning model for the first available observational dataset, for which the domain shift issue among data sources does not yet arise; this component is equivalent to the traditional causal effect estimation problem; and (2) a continual causal effect learning model for the subsequently available observational data, where we must handle more complex issues such as knowledge transfer, catastrophic forgetting, global representation balance, and memory constraints. We present the details of each component as follows.
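To make the combined objective concrete, the following is a minimal PyTorch sketch of a CERL-style training loss, under our own assumptions (the names and exact loss forms are ours, not the paper's): a two-headed outcome model in the TARNet style, a squared-error feature-distillation term against the frozen previous model, a stored memory of old-task representations mapped through a learnable transformation, and a simple mean-difference balance regularizer standing in for an IPM such as MMD or Wasserstein.

```python
import torch
import torch.nn as nn

class CausalNet(nn.Module):
    """Shared encoder phi with two outcome heads (one per treatment arm)."""
    def __init__(self, d_in, d_rep=64):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(d_in, d_rep), nn.ReLU(),
                                 nn.Linear(d_rep, d_rep), nn.ReLU())
        self.head0 = nn.Linear(d_rep, 1)  # potential outcome head for t = 0
        self.head1 = nn.Linear(d_rep, 1)  # potential outcome head for t = 1

    def forward(self, x):
        r = self.phi(x)
        return r, self.head0(r).squeeze(-1), self.head1(r).squeeze(-1)

def cerl_loss(model, prev_model, transform, x, y, t, rep_mem, t_mem,
              lam_distill=1.0, lam_bal=0.1):
    """Factual loss + feature distillation + global balance (our sketch)."""
    r, y0_hat, y1_hat = model(x)
    y_hat = torch.where(t.bool(), y1_hat, y0_hat)
    factual = ((y_hat - y) ** 2).mean()                  # factual outcome loss

    # Feature-representation distillation: keep the new encoder close to the
    # frozen previous encoder on the current batch, to avoid drifting features.
    with torch.no_grad():
        r_prev = prev_model.phi(x)
    distill = ((r - r_prev) ** 2).mean()

    # Map stored old-task representations into the new space, then balance the
    # *global* representation space between treatment and control groups.
    r_all = torch.cat([r, transform(rep_mem)], dim=0)
    t_all = torch.cat([t, t_mem], dim=0).bool()
    bal = (r_all[t_all].mean(0) - r_all[~t_all].mean(0)).pow(2).sum()
    return factual + lam_distill * distill + lam_bal * bal
```

In this sketch, `transform` could be a single `nn.Linear(d_rep, d_rep)` trained jointly with the model, and `rep_mem` could be populated by a herding-style selection over the previous task's representations; both choices are assumptions for illustration.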
This paper applies continual learning to the problem of causal effect estimation. It combines methods for storing feature representations and representative samples (a herding algorithm), preventing feature-representation drift as new data are learned (feature representation distillation), and balancing representations via regularization. The resulting system unifies these existing components in a single loss function (a sum of losses and regularization terms).
SP:d729aacc2cd3f97011a04360a252ca7cb0489354
Global Node Attentions via Adaptive Spectral Filters
1 INTRODUCTION . Graph neural networks (GNNs) have recently demonstrated great power in graph-related learning tasks, such as node classification (Kipf & Welling, 2017), link prediction (Zhang & Chen, 2018), and graph classification (Lee et al., 2018). Most GNNs follow a message-passing architecture in which, at each GNN layer, a node aggregates information from its direct neighbors indifferently. In this architecture, information from long-distance nodes is propagated and aggregated by stacking multiple GNN layers together (Kipf & Welling, 2017; Velickovic et al., 2018; Defferrard et al., 2016). However, this architecture rests on the assumption of local homophily, i.e., the proximity of similar nodes. While this assumption seems reasonable and helps achieve good prediction results on graphs with strong local homophily, such as citation networks and community networks (Pei et al., 2020), it limits the generalizability of GNNs. In particular, determining whether a graph has strong local homophily is a challenge in itself. Furthermore, strong and weak local homophily can coexist in different parts of the same graph, which makes a learning task even more challenging. Pei et al. (2020) proposed a metric that measures local node homophily based on how many neighbors of a node are from the same class. Using this metric, they categorized graphs as assortative (strong local homophily) or disassortative (weak local homophily), and showed that classical GNNs such as GCN (Kipf & Welling, 2017) and GAT (Velickovic et al., 2018) perform poorly on disassortative graphs. Liu et al. (2020) further showed that GCN and GAT are outperformed by a simple multilayer perceptron (MLP) in node classification tasks on disassortative graphs, because the naive local aggregation of homophilic models brings in more noise than useful information for such graphs. These findings indicate that these GNN models perform sub-optimally when the fundamental assumption of local homophily does not hold. Based on the above observations, we argue that a well-generalized GNN should perform well on graphs regardless of their local homophily. Furthermore, since a real-world graph can exhibit both strong and weak homophily in different node neighborhoods, a powerful GNN model should be able to aggregate node features using different strategies accordingly. For instance, on disassortative graphs where a node shares no similarity with any of its direct neighbors, such a GNN model should be able to ignore the direct neighbors and reach farther to find similar nodes, or at least resort to the node's own attributes to make a prediction. Since the validity of the local homophily assumption is often unknown, such aggregation strategies should be learned from data rather than decided upfront. To this end, we propose a novel GNN model with a global self-attention mechanism, called GNAN. Most existing attention-based aggregation architectures apply self-attention only to the local neighborhood of a node (Velickovic et al., 2018), which may introduce local noise into the aggregation. Unlike these works, we aim to design an aggregation method that can gather informative features from both close and far-distant nodes. To achieve this, we employ graph wavelets under a relaxed condition of localization, which enables us to learn attention weights for nodes in the spectral domain.
In doing so, the model can effectively capture not only local information but also global structure in node representations. To further improve the generalizability of our model, instead of using predefined spectral kernels, we propose to use multi-layer perceptrons (MLPs) to learn the desired spectral filters without restricting their shapes. Existing works on graph wavelet transforms choose wavelet filters heuristically, such as the heat kernel, wave kernel, and personalized PageRank kernel (Klicpera et al., 2019b; Xu et al., 2019; Klicpera et al., 2019a). These are mostly low-pass filters, which means that such models implicitly treat high-frequency components as "noise" and discard them (Shuman et al., 2013; Hammond et al., 2011; Chang et al., 2020). However, this may hinder the generalizability of models, since high-frequency components can carry meaningful information about local discontinuities, as analyzed in (Shuman et al., 2013). Our model overcomes these limitations by incorporating fully learnable spectral filters into the proposed global self-attention mechanism. From a computational perspective, learning global self-attention may impose high computational overhead, particularly on large graphs. We alleviate this problem in two ways. First, we sparsify nodes according to their wavelet coefficients, which lets attention weights be distributed sparsely across the graph. Second, we observed that spectral filters learned by different MLPs tend to converge to similar shapes; thus, we use a single MLP to reduce redundancy among filters, where each dimension of the output corresponds to one learnable spectral filter. In addition, following (Xu et al., 2019; Klicpera et al., 2019b), we use a fast algorithm to efficiently approximate the graph wavelet transform, with computational complexity $O(p \times |E|)$, where $p$ is the order of the Chebyshev polynomials and $|E|$ is the number of edges in the graph. To summarize, the main contributions of this work are as follows: 1. We propose a generalized GNN model that performs well on both assortative and disassortative graphs, regardless of local node homophily. 2. We show that a GNN's aggregation strategy can be trained via fully learnable spectral filters, thereby enabling feature aggregation from both close and far nodes. 3. We show that, contrary to common understanding, higher-frequency components on disassortative graphs provide meaningful information that helps improve prediction performance. We conduct extensive experiments comparing GNAN with well-known baselines on node classification tasks. The experimental results show that GNAN significantly outperforms the state-of-the-art methods on disassortative graphs, where local node homophily is weak, and performs comparably with the state-of-the-art methods on assortative graphs, where local node homophily is strong. This empirically verifies that GNAN is a general model for learning on different types of graphs.
2 PRELIMINARIES . Let $G = (V, E, A, x)$ be an undirected graph with $N$ nodes, where $V$, $E$, and $A$ are the node set, edge set, and adjacency matrix of $G$, respectively, and $x : V \to \mathbb{R}^m$ is a graph signal function that associates each node with a feature vector. The normalized Laplacian matrix of $G$ is defined as $L = I - D^{-1/2} A D^{-1/2}$, where $D \in \mathbb{R}^{N \times N}$ is the diagonal degree matrix of $G$. In spectral graph theory, the eigenvalues $\Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_N)$ and eigenvectors $U$ of $L = U \Lambda U^H$ are known as the graph's spectrum and spectral basis, respectively, where $U^H$ is the Hermitian transpose of $U$.
The graph Fourier transform of $x$ is $\hat{x} = U^H x$ and its inverse is $x = U \hat{x}$. The spectrum and spectral basis carry important information about the connectivity of a graph (Shuman et al., 2013). Intuitively, lower frequencies correspond to global and smooth information on the graph, while higher frequencies correspond to local information, discontinuities, and possible noise (Shuman et al., 2013). One can apply a spectral filter $g$ as in Equation 1 and use the graph Fourier transform to manipulate signals on a graph in various ways, such as smoothing and denoising (Schaub & Segarra, 2018), anomaly detection (Miller et al., 2011), and clustering (Wai et al., 2018). Spectral convolution on graphs is defined as the multiplication of a signal $x$ with a filter $g(\Lambda)$ in the Fourier domain, i.e., $$g(L)x = g(U \Lambda U^H)x = U g(\Lambda) U^H x = U g(\Lambda) \hat{x}. \quad (1)$$ When a spectral filter is parameterized by a scale factor, which controls the radius of neighbourhood aggregation, Equation 1 is also known as the Spectral Graph Wavelet Transform (SGWT) (Hammond et al., 2011; Shuman et al., 2013). For example, Xu et al. (2019) use a small scale parameter $s < 2$ for a heat kernel, $g(s\lambda) = e^{-\lambda s}$, to localize the wavelet at a node.
3 PROPOSED APPROACH . Graph neural networks (GNNs) learn lower-dimensional embeddings of nodes from graph-structured data. In general, given a node, a GNN iteratively aggregates information from its neighbor nodes and then combines the aggregated information with the node's own information. The embedding of node $v$ at the $k$-th GNN layer is typically formulated as $$m_v = \mathrm{aggregate}(\{h_u^{(k-1)} \mid u \in N_v\}), \qquad h_v^{(k)} = \mathrm{combine}(h_v^{(k-1)}, m_v),$$ where $N_v$ is the set of neighbor nodes of node $v$, $m_v$ is the information aggregated from the neighbors, and $h_v^{(k)}$ is the embedding of node $v$ at the $k$-th layer ($h_v^{(0)} = x_v$). The embedding $h_v^{(n)}$ of node $v$ at the final layer is then used for prediction tasks. In most GNNs, $N_v$ is restricted to the one-hop neighbors of node $v$; therefore, one must stack multiple aggregation layers in order to collect information from beyond the one-hop neighborhood within this architecture. Adaptive spectral filters . Instead of stacking multiple aggregation layers, we introduce a spectral attention layer that rewires a graph based on spectral graph wavelets. A spectral graph wavelet $\psi_v$ at node $v$ is a modulation in the spectral domain of signals centered around node $v$, given by the $N$-dimensional vector $$\psi_v = U g(\Lambda) U^H \delta_v, \quad (2)$$ where $g(\cdot)$ is a spectral filter and $\delta_v$ is a one-hot vector for node $v$. The common choice of spectral filter is the heat kernel. A wavelet coefficient $\psi_{vu}$ computed from a heat kernel can be interpreted as the amount of energy that node $v$ has received from node $u$ in its local neighborhood. In this work, instead of using pre-defined localized kernels, we use multilayer perceptrons (MLPs) to learn spectral filters. With learnable spectral kernels, we obtain the wavelet coefficients $$\psi_v = U \mathrm{diag}(\mathrm{MLP}(\Lambda))\, U^H \delta_v. \quad (3)$$ As with a heat kernel, the wavelet coefficient $\psi_{vu}$ under a learnable spectral filter can be understood as the amount of energy distributed from node $v$ to node $u$, under the conditions regulated by the spectral filter.
Note that we use the terms wavelet and spectral filter interchangeably, as we have relaxed the wavelet definition of (Hammond et al., 2011) so that the learnable spectral filters in our work are not necessarily localized in the spectral or spatial domain. Equation 3 requires the eigendecomposition of the Laplacian matrix, which is expensive and infeasible for large graphs. We follow Xu et al. (2019) and Klicpera et al. (2019b) in approximating the graph wavelet transform using Chebyshev polynomials (Shuman et al., 2013) (see Appendix A for details). Global self-attention . Unlike previous work (Xu et al., 2019), where wavelet coefficients are used directly to compute node embeddings, we normalize the wavelet coefficients through a softmax layer, $a_v = \mathrm{softmax}(\psi_v)$, where $a_v \in \mathbb{R}^N$ is an attention weight vector. With these attention weights, an update layer is then formalized as $$h_v^{(k)} = \sigma\Big(\sum_{u=1}^{N} a_{vu}\, h_u^{(k-1)} W^{(k)}\Big), \quad (4)$$ where $W^{(k)}$ is a weight matrix shared across all nodes in the $k$-th layer and $\sigma$ is the ELU nonlinear activation. Unlike with a heat kernel, the wavelet coefficients of a learnable spectral kernel are not localized; hence, our model can actively aggregate information from far-distant nodes. Note that the update layer is not divided into aggregate and combine steps in our work; instead, we compute the self-attention $a_{vv}$ directly from a spectral filter. Sparsified node attentions . With predefined localized spectral filters such as a heat kernel, most wavelet coefficients are zero due to locality. In our work, spectral filters are fully learned from data, and consequently the attention weights obtained from learnable spectral filters do not impose any sparsity. This means that an aggregation operation must retrieve all nodes in a graph, which is time-consuming for large graphs. From our experiments, we observe that most attention weights are negligible after the softmax. We therefore consider two sparsification techniques: 1. Discard the entries of the wavelet bases that fall below a threshold $t$, i.e., $$\bar{\psi}_{vu} = \begin{cases} \psi_{vu} & \text{if } \psi_{vu} > t \\ -\infty & \text{otherwise.} \end{cases} \quad (5)$$ The threshold $t$ is easy to apply to all entries of the wavelet bases; however, it offers little guarantee of attention sparsity, since attention weights can vary with the learning process of the spectral filters and the characteristics of different datasets, as discussed further in Section 4.2. 2. Keep only the largest $k$ entries of the wavelet bases for each node, i.e., $$\bar{\psi}_{vu} = \begin{cases} \psi_{vu} & \text{if } \psi_{vu} \in \mathrm{topK}(\{\psi_{v0}, \ldots, \psi_{vN}\}, k) \\ -\infty & \text{otherwise,} \end{cases} \quad (6)$$ where $\mathrm{topK}$ is a partial sorting function that returns the largest $k$ entries from the set of wavelet bases $\{\psi_{v0}, \ldots, \psi_{vN}\}$. This technique guarantees attention sparsity, so that the embedding of each node is aggregated from at most $k$ other nodes. However, it incurs extra computational overhead for sorting, since $\mathrm{topK}$ has time complexity $O(N + k \log N)$. The resulting $\bar{\psi}$ from either technique is then fed into the softmax layer to compute the attention weights. Experiments comparing these techniques are discussed in Section 4.2. We adopt multi-head attention to model multiple spectral filters: each attention head aggregates node information with a different spectral filter, and the aggregated embeddings are concatenated before being sent to the next layer.
We could allocate an independent MLP to each attention head; however, we found that independent MLPs tend to learn spectral filters of similar shapes. Hence, we adopt a single MLP, $\mathrm{MLP} : \mathbb{R}^N \to \mathbb{R}^{N \times M}$, where $M$ is the number of attention heads and each column of the output corresponds to one adaptive spectral filter. We name this multi-head spectral attention architecture the global node attention network (GNAN). The design of GNAN generalizes easily, and many existing GNNs can be expressed as special cases of GNAN (see Appendix D). Figure 1 illustrates how GNAN works with two attention heads learned from the CITESEER dataset. As the illustration shows, the MLP learns adaptive filters such as low-band-pass and high-band-pass filters. A low-band-pass filter assigns high attention weights within local neighborhoods, while a high-band-pass filter assigns high attention weights to far-distant nodes, which cannot be captured by a one-hop aggregation scheme.
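As an illustration of the full pipeline (Eqs. 3, 4, and 6), here is a hedged PyTorch sketch of a single GNAN-style layer on a small graph where the exact eigendecomposition is affordable; the paper instead uses a Chebyshev-polynomial approximation of the wavelet transform, and the MLP shape and layer sizes below are our own choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GNANLayer(nn.Module):
    """Learned spectral filters -> wavelet coefficients -> top-k sparsified
    softmax attention -> feature update, with one filter per attention head."""
    def __init__(self, d_in, d_out, n_heads=2, k=16):
        super().__init__()
        # A single MLP outputs one learnable filter value per head (Sec. 3).
        self.filter_mlp = nn.Sequential(nn.Linear(1, 32), nn.ReLU(),
                                        nn.Linear(32, n_heads))
        self.W = nn.ModuleList(nn.Linear(d_in, d_out, bias=False)
                               for _ in range(n_heads))
        self.k = k

    def forward(self, L, H):
        # L: (N, N) normalized graph Laplacian; H: (N, d_in) node features.
        lam, U = torch.linalg.eigh(L)               # spectrum and spectral basis
        g = self.filter_mlp(lam.unsqueeze(-1))      # (N, n_heads) filter values
        heads = []
        for m, lin in enumerate(self.W):
            psi = U @ torch.diag(g[:, m]) @ U.T     # wavelet coefficients (Eq. 3)
            vals, idx = psi.topk(self.k, dim=-1)    # largest k per node (Eq. 6)
            sparse = torch.full_like(psi, float('-inf')).scatter_(-1, idx, vals)
            att = torch.softmax(sparse, dim=-1)     # sparsified global attention
            heads.append(F.elu(att @ lin(H)))       # update layer (Eq. 4)
        return torch.cat(heads, dim=-1)             # concatenate attention heads
```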
In this paper, the authors study the problem of GCNs on disassortative graphs. They propose the GNAN method, which allows attention over distant nodes instead of limiting it to local neighbors. The authors generalize the idea of graph wavelets with an MLP to generate attention scores and use it to build multiple attention heads. They carry out experiments on several real-world networks (4 assortative and 3 disassortative) in comparison with several state-of-the-art GCN methods.
SP:864d98472c237daf2b227692c4765af9a89886cd
Calibration of Neural Networks using Splines
1 INTRODUCTION . Despite their success, modern neural networks are known to be poorly calibrated (Guo et al. (2017)), which has led to growing interest in the calibration of neural networks over the past few years (Kull et al. (2019); Kumar et al. (2019; 2018); Müller et al. (2019)). In classification problems, a classifier is said to be calibrated if the probability values it associates with the class labels match the true probabilities of correct class assignment. For instance, if an image classifier outputs probability 0.2 for the "horse" label on 100 test images, then approximately 20 of those images should actually be classified as horse. It is important to ensure calibration when using classifiers in safety-critical applications such as medical image analysis and autonomous driving, where downstream decision making depends on the predicted probabilities. An important aspect of machine learning research is the measure used to evaluate model performance; in the context of calibration, this amounts to measuring the difference between two empirical probability distributions. To this end, the popular Expected Calibration Error (ECE) metric (Naeini et al. (2015)) approximates the classwise probability distributions using histograms and takes an expected difference. This histogram approximation has the weakness that the resulting calibration error depends on the binning scheme (the number of bins and the bin divisions). Even though the drawbacks of ECE have been pointed out and some improvements proposed (Kumar et al. (2019); Nixon et al. (2019)), the histogram approximation has not been eliminated (we consider metrics that measure classwise, top-r, calibration error (Kull et al. (2019)); see Section 2 for details). In this paper, we first introduce a simple, binning-free calibration measure inspired by the classical Kolmogorov-Smirnov (KS) statistical test (Kolmogorov (1933); Smirnov (1939)), which also provides an effective visualization of the degree of miscalibration, similar to the reliability diagram (Niculescu-Mizil & Caruana (2005)). The main idea of the KS test is to compare the respective classwise cumulative (empirical) distributions. Furthermore, by approximating the empirical cumulative distribution with a differentiable function via splines (McKinley & Levine (1998)), we obtain an analytical recalibration function (an open-source implementation is available at https://github.com/kartikgupta-at-anu/spline-calibration) that maps the given network outputs to the actual class assignment probabilities. Such a direct mapping was previously unavailable; the problem has instead been approached indirectly via learning, for example by optimizing a (modified) cross-entropy loss (Guo et al. (2017); Mukhoti et al. (2020); Müller et al. (2019)). As in existing methods (Guo et al. (2017); Kull et al. (2019)), the spline fitting is performed using a held-out calibration set and the obtained recalibration function is evaluated on an unseen test set. We evaluated our method against existing calibration approaches on various image classification datasets, and our spline-based recalibration approach consistently outperforms existing methods on the KS error, ECE, and other commonly used calibration measures. Our approach does not update the model parameters, which allows it to be applied to any trained network, and it retains the original classification accuracy in all tested cases.
2 NOTATION AND PRELIMINARIES . We abstract the network as a function $f_\theta : D \to [0,1]^K$, where $D \subset \mathbb{R}^d$, and write $f_\theta(x) = z$. Here, $x$ may be an image or other input datum, and $z$ is a vector, sometimes known as the vector of logits. In this paper, the parameters $\theta$ will not be considered, and we write simply $f$ to represent the network function. We often refer to this function as a classifier, and in theory it could be of some type other than a neural network. In a classification problem, $K$ is the number of classes to be distinguished, and we call the value $z_k$ (the $k$-th component of vector $z$) the score for class $k$. If the final layer of a network is a softmax layer, then the values $z_k$ satisfy $\sum_{k=1}^{K} z_k = 1$ and $z_k \geq 0$. Hence the $z_k$ are pseudo-probabilities, though they do not necessarily have anything to do with the real probabilities of correct class assignment. Typically, the value $y^* = \mathrm{argmax}_k\, z_k$ is taken as the (top-1) prediction of the network, and the corresponding score, $\max_k z_k$, is called the confidence of the prediction. However, the term confidence has no mathematical meaning in this context and we deprecate its use. We assume we are given a set of training data $(x_i, y_i)_{i=1}^{n}$, where $x_i \in D$ is an input data element, which for simplicity we call an image, and $y_i \in \mathcal{K} = \{1, \ldots, K\}$ is the so-called ground-truth label. Our method also uses two other sets of data, called calibration data and test data. It would be desirable if the numbers $z_k$ output by a network represented true probabilities. For this to make sense, we posit the existence of joint random variables $(X, Y)$, where $X$ takes values in a domain $D \subset \mathbb{R}^d$ and $Y$ takes values in $\mathcal{K}$. Further, let $Z = f(X)$, another random variable, and let $Z_k = f_k(X)$ be its $k$-th component. Note that in this formulation $X$ and $Y$ are joint random variables, and the probability $P(Y \mid X)$ is not assumed to be 1 for a single class and 0 for the others. A network is said to be calibrated if for every class $k$, $$P(Y = k \mid Z = z) = z_k. \quad (1)$$ This can be written briefly as $P(k \mid f(x)) = f_k(x) = z_k$. Thus, if the network takes input $x$ and outputs $z = f(x)$, then $z_k$ represents the probability (given $f(x)$) that image $x$ belongs to class $k$. The probability $P(k \mid z)$ is difficult to evaluate, even empirically, and most metrics (such as ECE) use or measure a different notion called classwise calibration (Kull et al. (2019); Zadrozny & Elkan (2002)), defined as $$P(Y = k \mid Z_k = z_k) = z_k. \quad (2)$$ This paper uses definition (2) of calibration in the proposed KS metric. Calibration and accuracy of a network are different concepts. For instance, one may consider a classifier that simply outputs the class prior probabilities, ignoring the input $x$. If $f_k(x) = z_k = P(Y = k)$, this classifier $f$ is calibrated, but its accuracy is no better than a random predictor. Therefore, when calibrating a classifier, it is important that this is not done at the expense of classification (for instance, top-1) accuracy. The top-r prediction . The classifier $f$ being calibrated means that $f_k(x)$ is calibrated for each class $k$, not only for the top class. This means that the scores $z_k$ for all classes $k$ give a meaningful estimate of the probability of the sample belonging to class $k$.
This is particularly important in medical diagnosis, where one may wish to have a reliable estimate of the probability of certain unlikely diagnoses. Frequently, however, one is most interested in the probability of the top-scoring class: the top-1 prediction, or in general the top-r prediction. Suppose a classifier $f$ with values in $[0,1]^K$ is given and let $y$ be the ground-truth label. Let us use $f^{(-r)}$ to denote the $r$-th top score (so $f^{(-1)}$ denotes the top score; the notation follows Python semantics, in which $A[-1]$ represents the last element of array $A$). Similarly we define $\max^{(-r)}$ for the $r$-th largest value. Let $f^{(-r)} : D \to [0,1]$ be defined as $f^{(-r)}(x) = \max^{(-r)}_k f_k(x)$, and $$y^{(-r)} = \begin{cases} 1 & \text{if } y = \mathrm{argmax}^{(-r)}_k f_k(x) \\ 0 & \text{otherwise.} \end{cases} \quad (3)$$ In words, $y^{(-r)}$ is 1 if the $r$-th top predicted class is the correct (ground-truth) choice. The network is calibrated for the top-r prediction if for all scores $\sigma$, $$P(y^{(-r)} = 1 \mid f^{(-r)}(x) = \sigma) = \sigma. \quad (4)$$ In words, the conditional probability that the network's $r$-th top choice is the correct choice equals the $r$-th top score. Similarly, one may consider the probability that a datum belongs to one of the top-r scoring classes. The classifier is calibrated for being within the top r classes if $$P\Big(\textstyle\sum_{s=1}^{r} y^{(-s)} = 1 \,\Big|\, \textstyle\sum_{s=1}^{r} f^{(-s)}(x) = \sigma\Big) = \sigma. \quad (5)$$ Here, the sum on the left is 1 if the ground-truth label is among the top r choices and 0 otherwise, and the sum on the right is the sum of the top r scores.
3 KOLMOGOROV-SMIRNOV CALIBRATION ERROR . We now consider a way to measure whether a classifier is classwise calibrated, including top-r and within-top-r calibration. The test is closely related to the Kolmogorov-Smirnov test (Kolmogorov (1933); Smirnov (1939)) for the equality of two probability distributions, which may be applied when the probability distributions are represented by samples. We start from the definition of classwise calibration, $$P(Y = k \mid f_k(X) = z_k) = z_k, \quad (6)$$ and hence, by Bayes' rule, $P(Y = k, f_k(X) = z_k) = z_k\, P(f_k(X) = z_k)$. This may be written more simply, with a less precise notation, as $P(z_k, k) = z_k\, P(z_k)$. Motivation of the KS test . One is motivated to test the equality of (or difference between) two distributions defined on the interval $[0,1]$; however, instead of a functional form of these distributions, one has only samples from them. Given samples $(x_i, y_i)$, it is not straightforward to estimate $P(z_k)$ or $P(z_k \mid k)$, since any given value $z_k$ is likely to occur only once, or not at all, in a finite sample set. One possibility is to use histograms of these distributions. However, this requires selecting the bin size and the bin divisions, and the result depends on these parameters; for this reason, we believe it is an inadequate solution. The approach suggested by the Kolmogorov-Smirnov test is to compare the cumulative distributions. Thus, with $k$ given, one tests the equality $$\int_0^\sigma P(z_k, k)\, dz_k = \int_0^\sigma z_k\, P(z_k)\, dz_k. \quad (7)$$ Writing $\phi_1(\sigma)$ and $\phi_2(\sigma)$ for the two sides of this equation, the KS distance between these two distributions is defined as $KS = \max_\sigma |\phi_1(\sigma) - \phi_2(\sigma)|$. The fact that simply the maximum is used here may suggest a lack of robustness, but this is the maximum difference between two integrals, so it reflects an accumulated difference between the two distributions.
To provide more insight into the KS distance, consider the case where $z_k$ consistently over- or under-estimates $P(k \mid z_k)$ (which is usually the case, at least for top-1 classification (Guo et al. (2017))). Then $P(k \mid z_k) - z_k$ has constant sign for all values of $z_k$. It follows that $P(z_k, k) - z_k P(z_k)$ has constant sign, and so the maximum in the KS distance is achieved at $\sigma = 1$. In this case, $$KS = \int_0^1 \big|P(z_k, k) - z_k P(z_k)\big|\, dz_k = \int_0^1 \big|P(k \mid z_k) - z_k\big|\, P(z_k)\, dz_k, \quad (8)$$ which is the expected difference between $z_k$ and $P(k \mid z_k)$; this can equivalently be referred to as the expected calibration error for class $k$. Sampled distributions . Given samples $(x_i, y_i)_{i=1}^{N}$ and a fixed $k$, one can estimate these cumulative distributions by $$\int_0^\sigma P(z_k, k)\, dz_k \approx \frac{1}{N} \sum_{i=1}^{N} \mathbf{1}(f_k(x_i) \leq \sigma) \times \mathbf{1}(y_i = k), \quad (9)$$ where $\mathbf{1} : B \to \{0, 1\}$ is the function that returns 1 if the Boolean expression is true and 0 otherwise. Thus, the sum is simply a count of the samples for which $y_i = k$ and $f_k(x_i) \leq \sigma$, and so the integral represents the proportion of the data satisfying this condition. Similarly, $$\int_0^\sigma z_k\, P(z_k)\, dz_k \approx \frac{1}{N} \sum_{i=1}^{N} \mathbf{1}(f_k(x_i) \leq \sigma)\, f_k(x_i). \quad (10)$$ These sums can be computed quickly by sorting the data according to the values $f_k(x_i)$ and then defining two sequences as follows: $$\tilde{h}_0 = h_0 = 0, \qquad h_i = h_{i-1} + \mathbf{1}(y_i = k)/N, \qquad \tilde{h}_i = \tilde{h}_{i-1} + f_k(x_i)/N. \quad (11)$$ The two sequences should be the same, and the metric $$KS(f_k) = \max_i |h_i - \tilde{h}_i| \quad (12)$$ gives a numerical estimate of their similarity, and hence a measure of the degree of calibration of $f_k$. This is essentially a version of the Kolmogorov-Smirnov test for the equality of two distributions. Remark . All of this discussion also holds for $k < 0$, i.e., for top-r and within-top-r predictions as discussed in Section 2. In (11), for instance, $f_{-1}(x_i)$ means the top score, $f_{-1}(x_i) = \max_k f_k(x_i)$, and more generally $f_{-r}(x_i)$ means the $r$-th top score. Similarly, the expression $y_i = -r$ means that $y_i$ is the class with the $r$-th top score. Note that when calibrating the top-1 score, our method is applied after identifying the top-1 score; hence, it does not alter the classification accuracy.
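Equations (9)-(12) translate directly into a few lines of code. The following is our own NumPy transcription of the binning-free KS calibration error for a fixed class $k$; the synthetic check at the end simulates a perfectly calibrated predictor, for which the error should be near zero.

```python
import numpy as np

def ks_error(scores, correct):
    """KS calibration error of Eqs. (9)-(12).
    scores:  (N,) predicted probabilities f_k(x_i) for a fixed class k
             (or the top-1 scores f_{-1}(x_i)).
    correct: (N,) 1 if y_i = k (or if the top-1 prediction is correct)."""
    order = np.argsort(scores)                  # sort by f_k(x_i)
    scores, correct = scores[order], correct[order]
    n = len(scores)
    h = np.cumsum(correct) / n                  # h_i of Eq. (11)
    h_tilde = np.cumsum(scores) / n             # h~_i of Eq. (11)
    return np.max(np.abs(h - h_tilde))          # KS distance, Eq. (12)

# Sanity check: a perfectly calibrated predictor has P(correct | z) = z.
rng = np.random.default_rng(0)
z = rng.uniform(size=100_000)
y = (rng.uniform(size=100_000) < z).astype(float)
print(ks_error(z, y))   # close to 0 (sampling noise only)
```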
The paper presents a post-hoc calibration method for deep neural network classification. The method first reduces the well-known ECE score to a special case of the Kolmogorov-Smirnov (KS) test, thereby removing ECE's dependence on a limiting binning assumption. It then recalibrates the classification probabilities by fitting a cubic spline to the KS test score.
SP:28a5570540fa769396ee73c14c25ada9669dd95f
ProxylessKD: Direct Knowledge Distillation with Inherited Classifier for Face Recognition
1 INTRODUCTION . Knowledge Distillation (KD) is the process of transferring knowledge from a large model to a smaller one. This technique is widely used to enhance model performance in many machine learning tasks such as image classification (Hinton et al., 2015), object detection (Chen et al., 2017b), and speech translation (Liu et al., 2019c). When applied to face recognition, the embeddings of a gallery are usually extracted by a larger teacher model, while the embeddings of the query images are extracted by a smaller student model. The student model is encouraged to align its embedding space with that of the teacher, so as to improve its recognition capability. Previous KD works promote consistency in the final predictions (Hinton et al., 2015) or in the activations of hidden layers between student and teacher (Romero et al., 2014; Zagoruyko & Komodakis, 2016). Optimizing only the consistency of predictions or activations brings a limited performance boost, since the student is often a small model with weaker capacity than the teacher. Later, Park et al. (2019) and Peng et al. (2019) proposed exploiting the correlation between instances to guide the student to mimic the teacher's feature relationships over a batch of input data, which achieves better performance. However, the above works all aim at guiding the student to mimic the behavior of the teacher, which is not suitable for practical face recognition. In reality, it is very important to directly align the embedding spaces of student and teacher, which enables models on different devices to share the same embedding space for feasible similarity comparison. A simple and direct method is to minimize the L2 distance between the embeddings extracted by student and teacher. However, this method (which we call L2KD) only considers minimizing the intra-class distance and ignores maximizing the inter-class distance, and it is unable to benefit from powerful loss functions with large-margin constraints (e.g., the CosFace loss (Wang et al., 2018a) or the ArcFace loss (Deng et al., 2019a)) to further improve performance.
[Figure 1: The embedding distributions extracted by (a) L2KD and (b) ProxylessKD.]
In this work, we propose an effective knowledge distillation method named ProxylessKD. According to Ranjan et al. (2017), the classifier neurons in a recognition model can be viewed as the approximate embedding centers of each class. They can thus be used to guide the embedding learning, since the classifier encourages each embedding to align with the approximate embedding center corresponding to the label of the image. Inspired by this, we propose to initialize the weight of the student's classifier with the weight of the teacher's classifier and fix it during the distillation process, which forces the student to produce an embedding space as consistent with the teacher's as possible. Unlike previous knowledge distillation works (Hinton et al., 2015; Zagoruyko & Komodakis, 2016; Romero et al., 2014; Park et al., 2019; Peng et al., 2019) and L2KD, the proposed ProxylessKD not only directly optimizes the target task but also minimizes the intra-class distance and maximizes the inter-class distance. Meanwhile, it can benefit from large-margin constraints (e.g., the CosFace loss (Wang et al., 2018a) and the ArcFace loss (Deng et al., 2019a)).
As shown in Figure 1, the intra-class distances under ProxylessKD combined with the ArcFace loss are much smaller than under L2KD, and the inter-class distances are much larger. It can thus be expected that our ProxylessKD improves face recognition performance, which we validate experimentally. The main contributions of this paper are summarized as follows:
• We analyze the shortcomings of existing knowledge distillation methods: they only optimize a proxy task rather than the target task, and they cannot conveniently integrate advanced large-margin constraints to further lift performance.
• We propose a simple yet effective KD method named ProxylessKD, which directly boosts embedding space alignment and can easily be combined with existing loss functions to achieve better performance.
• We conduct extensive experiments on standard face recognition benchmarks, and the results demonstrate the effectiveness of the proposed ProxylessKD.
2 RELATED WORK . Knowledge distillation . Knowledge distillation aims to transfer knowledge from a teacher model to a small model. The pioneering work is Buciluǎ et al. (2006), and Hinton et al. (2015) popularized the idea by defining knowledge distillation (KD) as training the small model (the student) on the soft targets provided by a cumbersome model (the teacher). Unlike a one-hot label, the soft targets from the teacher contain rich information about the relations among classes, which can guide the student to better learn the fine-grained distribution of the data and thus lift performance. Many variants of model distillation have been proposed and widely adopted in fields such as image classification (Chen et al., 2018), object detection (Chen et al., 2017a), and semantic segmentation (Liu et al., 2019a; Park & Heo, 2020). Concretely, Zagoruyko & Komodakis (2016) proposed a response-based KD model, Attention Transfer (AT), which teaches the student to activate the same regions as the teacher model. Relation-based distillation methods have also been developed, which encourage the student to mimic the relations among outputs at different stages (Yim et al., 2017) and among samples in a batch (Park et al., 2019). Most previous works optimize proxy tasks rather than the target task. In this work, we directly optimize face recognition accuracy by inheriting the teacher's classifier as the student's classifier, guiding the student to learn discriminative embeddings in the teacher's embedding space. Deng et al. (2019b) also directly copy and fix the weights of the margin inner-product layer of the teacher model in the student model; their motivation is that the student model can then be trained with better pre-defined inter-class information from the teacher model. Unlike (Deng et al., 2019b), however, we first analyze the shortcomings of existing knowledge distillation methods: they optimize a proxy task rather than the target task, and they cannot conveniently integrate advanced large-margin constraints to further lift performance. These analyses and observations are not found in (Deng et al., 2019b) or other existing works.
Second, our work provides a strong motivation and a physical explanation for the proposed ProxylessKD: Figure 1 and the corresponding analysis explain why ProxylessKD can achieve better performance than existing methods that optimize a proxy task. Such in-depth analysis and explanation are novel and cannot be found in (Deng et al., 2019b) or other existing works. We believe these findings and the proposed solution are valuable to the face recognition community and will inspire researchers in related fields. Finally, we design and conduct solid experiments to justify the importance of directly optimizing the final task rather than a proxy task when doing knowledge distillation, and we carefully examine the properties of ProxylessKD under different margin-based loss functions and hyper-parameters. Such detailed analyses of ProxylessKD cannot be found in (Deng et al., 2019b) or other existing works. We believe the above important differences and novel contributions distinguish our work from (Deng et al., 2019b) and existing works. Loss functions used in face recognition . The softmax loss is defined as the pipeline combination of the last fully connected layer, the softmax function, and the cross-entropy loss. Although it helps the network separate categories in a high-dimensional space, for fine-grained classification problems like face recognition it offers limited accuracy due to the considerable inter-class similarity. Liu et al. (2017) proposed SphereFace to achieve a smaller maximal intra-class distance than the minimal inter-class distance, which directly enhances feature discrimination. Compared with SphereFace, in which the margin $m$ multiplies the angle, Wang et al. (2018a); Whitelam et al. (2017) proposed CosFace, where the margin is subtracted directly from the cosine, achieving better performance than SphereFace and relieving the need for joint supervision from the softmax loss. To further improve feature discrimination, Deng et al. (2018) proposed ArcFace, which utilizes the arc-cosine function to calculate the angle, adds an additive angular margin, and maps back with the cosine function. In this paper, we combine our ProxylessKD with the above loss functions, e.g., the ArcFace loss, to further lift performance.
3 METHODOLOGY . We first revisit popular loss functions in face recognition in Sec. 3.1 and elaborate on our ProxylessKD in Sec. 3.2; we then introduce how to combine our method with existing loss functions in Sec. 3.3. 3.1 REVISITING LOSS FUNCTIONS IN FACE RECOGNITION . The most classical loss function in classification is the softmax loss, represented as $$L_1 = -\frac{1}{N} \sum_{i=1}^{N} \log \frac{e^{s \cdot \cos(\theta_{w_y, x_i})}}{e^{s \cdot \cos(\theta_{w_y, x_i})} + \sum_{k \neq y}^{K} e^{s \cdot \cos(\theta_{w_k, x_i})}}. \quad (1)$$ Here, $w_k$ denotes the weight of the model's classifier, where $k \in \{1, 2, \ldots, K\}$ and $K$ denotes the number of classes; $x_i$ is the embedding of the $i$-th sample, usually normalized, with its magnitude replaced by a scale parameter $s$; $\theta_{w_k, x_i}$ denotes the angle between $w_k$ and $x_i$; $y$ is the ground-truth label for the input embedding $x_i$; and $N$ is the batch size. In recent years, several margin-based softmax loss functions (Liu et al., 2017; Wang et al., 2017; 2018a; Deng et al., 2019a) have been proposed to boost embedding discrimination, represented as $$L_2 = -\frac{1}{N} \sum_{i=1}^{N} \log \frac{e^{s \cdot f(m, \theta_{w_y, x_i})}}{e^{s \cdot f(m, \theta_{w_y, x_i})} + \sum_{k \neq y}^{K} e^{s \cdot \cos(\theta_{w_k, x_i})}}. \quad (2)$$
In the above equation, $f(m, \theta_{w_y, x_i})$ is a margin function. Precisely, $f(m, \theta_{w_y, x_i}) = \cos(m \cdot \theta_{w_y, x_i})$ is the A-Softmax loss proposed in (Liu et al., 2017), where $m$ is an integer greater than zero; $f(m, \theta_{w_y, x_i}) = \cos(\theta_{w_y, x_i}) - m$ is the AM-Softmax loss proposed in Wang et al. (2018a), where the hyper-parameter $m$ is greater than zero; and $f(m, \theta_{w_y, x_i}) = \cos(\theta_{w_y, x_i} + m)$ with $m > 0$ is the Arc-Softmax loss introduced in Deng et al. (2019a), which achieves better performance than the former. Conveniently, the proposed ProxylessKD can be combined with any of the above loss functions. In this paper, we combine our proposed ProxylessKD method with the above loss functions and investigate the resulting performance.
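To illustrate how the inherited classifier combines with a margin-based loss, the following is a hedged PyTorch sketch of a ProxylessKD training loss with the Arc-Softmax margin $f(m, \theta) = \cos(\theta + m)$ of Eq. 2; the function name, the default values of $s$ and $m$, and the numerical clamping are our own choices, not the paper's reference implementation.

```python
import torch
import torch.nn.functional as F

def proxyless_kd_loss(emb, labels, w_teacher, s=64.0, m=0.5):
    """ArcFace-style margin loss (Eq. 2) with the classifier weight inherited
    from the teacher and kept fixed, so the student is trained to produce
    embeddings directly in the teacher's embedding space.
    emb:       (B, d) student embeddings
    labels:    (B,)   ground-truth identities
    w_teacher: (K, d) teacher's classifier weights (frozen)."""
    x = F.normalize(emb, dim=1)
    w = F.normalize(w_teacher.detach(), dim=1)        # inherited, not trained
    cos = x @ w.t()                                   # cos(theta_{w_k, x_i})
    theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
    one_hot = F.one_hot(labels, w.size(0)).bool()
    # Apply the additive angular margin only to the ground-truth class.
    logits = s * torch.where(one_hot, torch.cos(theta + m), cos)
    return F.cross_entropy(logits, labels)
```

Swapping in the AM-Softmax (CosFace) margin amounts to replacing `torch.cos(theta + m)` with `cos - m`, which is why the method combines conveniently with this family of losses.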
This paper proposes the ProxylessKD method from a novel perspective on knowledge distillation. Instead of minimizing the distance between the outputs of the teacher and student models, ProxylessKD adopts a shared classifier for the two models. The shared classifier yields a better-aligned embedding space, so the embeddings from the teacher and student models are comparable. Since the optimization objective for the student model is to learn discriminative embeddings, face recognition performance is improved compared to the vanilla KL counterpart.
SP:cdc407d403e1008ced29c7cda727db0d631cc966
Decomposing Mutual Information for Representation Learning
1 INTRODUCTION . The ability to extract actionable information from data in the absence of explicit supervision seems to be a core prerequisite for building systems that can, for instance, learn from few data points or quickly make analogies and transfer to other tasks. Approaches to this problem include generative models (Hinton, 2012; Kingma & Welling, 2014) and self-supervised representation learning approaches, in which the objective is not to maximize likelihood, but to formulate a series of (label-agnostic) tasks that the model needs to solve through its representations (Noroozi & Favaro, 2016; Devlin et al., 2019; Gidaris et al., 2018; Hjelm et al., 2019). Self-supervised learning includes successful models leveraging contrastive learning, which have recently attained performance comparable to their fully-supervised counterparts (Bachman et al., 2019; Chen et al., 2020a). Many self-supervised learning methods train an encoder such that the representations of a pair of views x and y derived from the same input example are more similar to each other than to representations of views sampled from a contrastive negative sample distribution, which is usually the marginal distribution of the data. For images, different views can be built using random flipping, color jittering, and cropping (Bachman et al., 2019; Chen et al., 2020a). For sequential data such as conversational text, the views can be past and future utterances in a given dialogue. It can be shown that these methods maximize a lower bound on the mutual information (MI) between the views, I(x; y), w.r.t. the encoder, i.e., the InfoNCE bound (Oord et al., 2018). One significant shortcoming of this approach is the large number of contrastive samples required, which directly impacts the total amount of information the bound can measure (McAllester & Stratos, 2018; Poole et al., 2019). In this paper, we consider creating subviews of x by removing information from it in various ways, e.g., by masking some pixels. Then, we use representations from less informed subviews as a source of hard contrastive samples for representations from more informed subviews. For example, in Fig. 1, one can mask a pixel region in x′ to obtain x′′ and ask (the representation of) x′′ to be closer to y than to random images of the corpus, and x′ to be closer to y than to samples from p(y | x′′). This corresponds to decomposing the MI between x and y as I(x; y) ≥ I(x′′; y) + I(x′; y | x′′). The conditional MI measures the information about y that the model has gained by looking at x′ beyond the information already contained in x′′. In Fig. 1 (left), standard contrastive approaches could focus on the overall "shape" of the object and would need many negative samples to capture other discriminative features. In our approach, the model is more directly encouraged to capture these additional features, e.g., the embossed detailing. In the context of predictive coding on sequential data such as dialogue, by setting x′′ to be the most recent utterance (Fig. 1, right), the encoder is directly encouraged to capture long-term dependencies that cannot be explained by x′′. We formally show that, by such decomposition, our representations can potentially capture more of the total information shared between the original views x and y.
Maximizing MI between multiple views can be related to recent efforts in representation learning, amongst them AMDIM (Bachman et al., 2019), CMC (Tian et al., 2019), and SwAV (Caron et al., 2020). However, these models maximize the sum of MIs between views, $I(\{x', x''\}; y) = I(x''; y) + I(x'; y)$. E.g., in Bachman et al. (2019), $x'$ and $x''$ could be global and local representations of an image, and in Caron et al. (2020), $x'$ and $x''$ could be the views resulting from standard cropping and the aggressive multi-crop strategy. This equality is only valid when the views $x'$ and $x''$ are statistically independent, which usually does not hold. Instead, we argue that a better decomposition is $I(\{x', x''\}; y) = I(x''; y) + I(x'; y \mid x'')$, which always holds. Most importantly, the conditional MI term encourages the encoder to capture more non-redundant information across views. To maximize our proposed decomposition, we present a novel lower bound on conditional MI in Section 3. For the conditional MI maximization, we give a computationally tractable approximation that adds minimal overhead. In Section 4, we first show in a synthetic setting that decomposing MI and using the proposed conditional MI bound leads to capturing more of the ground-truth MI. Finally, we present evidence of the effectiveness of the method in vision and in dialogue generation.
2 PROBLEM SETTING . The maximum MI predictive coding framework (McAllester, 2018; Oord et al., 2018; Hjelm et al., 2019) prescribes learning representations of input data such that they maximize MI. Estimating MI is generally a hard problem that has received a lot of attention in the community (Kraskov et al., 2004; Barber & Agakov, 2003). Let $x$ and $y$ be two random variables which can generally describe input data from various domains, e.g., text, images, or sound. We can learn representations of $x$ and $y$ by maximizing the MI of the respective features produced by encoders $f, g : X \to \mathbb{R}^d$, which, by the data processing inequality, is bounded by $I(x; y)$: $$\arg\max_{f, g}\; I(f(x); g(y)) \leq I(x; y). \quad (1)$$ We assume that the encoders can be shared, i.e., $f = g$. The optimization in Eq. 1 is challenging but can be lower-bounded. Our starting point is the recently proposed InfoNCE lower bound on MI (Oord et al., 2018) and its application to self-supervised learning of visual representations (Bachman et al., 2019; Chen et al., 2020a). In this setting, $x$ and $y$ are paired input images, or independently-augmented copies of the same image. These are encoded using a neural network encoder which is trained such that the representations of the two image copies are closer to each other in the embedding space than to other images drawn from the marginal distribution of the corpus. This can be viewed as a contrastive estimation of the MI (Oord et al., 2018). We present the InfoNCE bound next. 2.1 INFONCE BOUND . InfoNCE (Oord et al., 2018) is a lower bound on $I(x; y)$ obtained by comparing pairs sampled from the joint distribution, $x, y_1 \sim p(x, y)$, to a set of negative samples, $y_{2:K} \sim p(y_{2:K}) = \prod_{k=2}^{K} p(y_k)$, also called contrastive samples, drawn independently from the marginal: $$I_{\mathrm{NCE}}(x; y \mid E, K) = \mathbb{E}_{p(x, y_1)\, p(y_{2:K})}\Bigg[\log \frac{e^{E(x, y_1)}}{\frac{1}{K}\sum_{k=1}^{K} e^{E(x, y_k)}}\Bigg] \leq I(x; y), \quad (2)$$ where $E$ is a critic assigning a real-valued score to $x, y$ pairs. We provide an exact derivation of this bound in the Appendix (the derivation in Oord et al. (2018) presented an approximation and therefore was not properly a bound; an alternative, exact derivation can be found in Poole et al. (2019)).
For this bound, the optimal critic is the log-odds between the conditional distribution $p(y \mid x)$ and the marginal distribution of $y$, $E^*(x, y) = \log \frac{p(y \mid x)}{p(y)} + c(x)$ (Oord et al., 2018; Poole et al., 2019). The InfoNCE bound is loose if the true mutual information $I(x; y)$ is larger than $\log K$. To overcome this difficulty, recent methods either train with large batch sizes (Chen et al., 2020a) or exploit an external memory of negative samples in order to reduce memory requirements (Chen et al., 2020b; Tian et al., 2020). These methods rely on uniform sampling from the training set to form the contrastive sets. For further discussion of the limits of variational bounds on MI, see McAllester & Stratos (2018).
3 DECOMPOSING MUTUAL INFORMATION . By the data processing inequality, $I(x; y) \geq I(\{x_1, \ldots, x_N\}; y)$, where $\{x_1, \ldots, x_N\}$ are different subviews of $x$, i.e., views derived from $x$ without adding any exogenous information. For example, $\{x_1, \ldots, x_N\}$ can represent exchanges in a longer dialogue $x$, sentences in a document $x$, or different augmentations of the same image $x$. Equality is obtained when the set of subviews retains all information about $x$, e.g., if $x$ itself is in the set. Without loss of generality, we consider the case $N = 2$, $I(x; y) \geq I(\{x', x''\}; y)$, where $\{x', x''\}$ are two subviews derived from the original $x$. We can apply the chain rule for MI: $$I(x; y) \geq I(\{x', x''\}; y) = I(x''; y) + I(x'; y \mid x''), \quad (3)$$ where equality is obtained if and only if $I(x; y \mid \{x', x''\}) = 0$, i.e., $x$ gives no information about $y$ in excess of $\{x', x''\}$ (for a proof of this fact, it suffices to consider $I(\{x, x', x''\}; y) = I(x; y \mid \{x', x''\}) + I(\{x', x''\}; y)$; given that $I(\{x, x', x''\}; y) = I(x; y)$, equality is obtained iff $I(x; y \mid \{x', x''\}) = 0$). This suggests that we can maximize $I(x; y)$ by maximizing each of the MI terms in the sum. The conditional MI term can be written as $$I(x'; y \mid x'') = \mathbb{E}_{p(x', x'', y)}\Bigg[\log \frac{p(y \mid x', x'')}{p(y \mid x'')}\Bigg]. \quad (4)$$ This conditional MI differs from the unconditional MI, $I(x'; y)$, insofar as it measures the amount of information shared between $x'$ and $y$ that cannot be explained by $x''$. Note that the decomposition holds for arbitrary partitions $x', x''$, e.g., $I(\{x', x''\}; y) = I(x'; y) + I(x''; y \mid x')$. When $X$ is high-dimensional, the amount of mutual information between $x$ and $y$ will potentially be larger than the amount of MI that $I_{\mathrm{NCE}}$ can measure, given the computational constraints associated with large $K$ and the poor logarithmic scaling properties of the bound. The idea that we put forward is to split the total MI into a sum of MI terms of smaller magnitude, for which $I_{\mathrm{NCE}}$ has less bias at any given $K$, and to estimate each of those terms in turn. The resulting decomposed bound can be written as a sum of unconditional and conditional MI terms: $$I_{\mathrm{NCES}}(x; y) = I_{\mathrm{NCE}}(x''; y) + I_{\mathrm{CNCE}}(x'; y \mid x'') \leq I(x; y), \quad (5)$$ where $I_{\mathrm{CNCE}}$ is a lower bound on conditional MI and will be presented in the next section. Both the conditional (Eq. 6) and unconditional (Eq. 14) bounds on the MI can capture at most $\log K$ nats of MI.
Therefore, the bound that arises from the decomposition of the MI in Eq. 5 makes it possible to capture up to $N \log K$ nats of MI in total, where $N$ is the number of subviews used to describe $x$. This shows that measuring mutual information by decomposing it into a sequence of estimation problems can capture more nats of MI than the standard $I_{NCE}$, which is bounded by $\log K$.
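To make the decomposed objective concrete, here is a minimal PyTorch sketch (ours, not the authors' code) that sums one unconditional and one conditional InfoNCE-style term as in Eq. 5. The linear encoders are toy stand-ins for learned critics, the conditional critic simply concatenates $x'$ and $x''$, and both terms use in-batch negatives; a faithful estimate of $I_{CNCE}$ needs negatives drawn from $p(y \mid x'')$, which is exactly the difficulty the next section addresses.

```python
import math
import torch
import torch.nn.functional as F

def info_nce(scores):
    # scores[i, j] = critic(view_i, y_j), positives on the diagonal.
    # The estimate is log K plus the mean diagonal log-softmax, so it
    # saturates at log K nats, as discussed above.
    K = scores.shape[0]
    return math.log(K) + torch.diagonal(F.log_softmax(scores, dim=1)).mean()

K, d = 256, 64
x1, x2, y = torch.randn(K, d), torch.randn(K, d), torch.randn(K, d)

enc_y   = torch.nn.Linear(d, d)      # toy critics; the paper learns these
enc_x2  = torch.nn.Linear(d, d)
enc_x1c = torch.nn.Linear(2 * d, d)  # conditional critic also sees x''

ey = enc_y(y)
s_uncond = enc_x2(x2) @ ey.T                         # for I_NCE(x''; y)
s_cond = enc_x1c(torch.cat([x1, x2], -1)) @ ey.T     # for I_CNCE(x'; y | x'')

bound = info_nce(s_uncond) + info_nce(s_cond)  # Eq. 5: at most 2 log K nats
loss = -bound                                  # maximize the decomposed bound
```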
This paper proposes a contrastive learning approach where one of the views, x, is converted into two subviews, x' and x'', and separate InfoNCE-style bounds are then constructed for each of I(x'';y) and I(x';y|x'') before being combined to form an overall training objective. Critically, the second of these is based on the conditional MI, I(x';y|x''), distinguishing it from previous work using multiple views that just takes the marginal I(x';y). Estimating this conditional MI turns out to be somewhat trickier due to the additional intractability from p(y|x''), and approximations are suggested to get around this. Experiments are performed on both vision and NLP problems.
SP:a15d5230fecc1dad8998905f17c82cf8e05c98d3
A Unified Bayesian Framework for Discriminative and Generative Continual Learning
1 INTRODUCTION. Continual learning (CL) (Ring, 1997; Parisi et al., 2019) is the learning paradigm in which a single model is subjected to a sequence of tasks. At any point in time, the model is expected to (i) make predictions for the tasks it has seen so far, and (ii) if subjected to training data for a new task, adapt to the new task leveraging past knowledge if possible (forward transfer) and benefit the previous tasks if possible (backward transfer). While the desirable aspects of more mainstream transfer learning (sharing of bias between related tasks (Pan & Yang, 2009)) might reasonably be expected here too, the principal challenge is to retain the predictive power for the older tasks even after learning new tasks, thus avoiding so-called catastrophic forgetting. Real-world applications in, for example, robotics or time-series forecasting are rife with this challenging learning scenario, the ability to adapt to dynamically changing environments or evolving data distributions being essential in these domains. Continual learning is also desirable in unsupervised learning problems (Smith et al., 2019; Rao et al., 2019b), where the goal is to learn the underlying structure or latent representation of the data. Also, as a skill innate to humans (Flesch et al., 2018), it is naturally an interesting scientific problem to reproduce the same capability in artificial predictive modelling systems. Existing approaches to continual learning are mainly based on three foundational ideas. The first is to constrain the parameter values so that they do not deviate significantly from their previously learned values, using some form of regularization or trade-off between previously and newly learned weights (Schwarz et al., 2018; Kirkpatrick et al., 2017; Zenke et al., 2017; Lee et al., 2017). A natural way to accomplish this is to train a model using online Bayesian inference, whereby the posterior of the parameters learned from task t serves as the prior for task t+1, as in Nguyen et al. (2018) and Zeno et al. (2018). This new informed prior helps in the forward transfer, and also prevents catastrophic forgetting by penalizing large deviations from itself. In particular, VCL (Nguyen et al., 2018) achieves state-of-the-art results by applying this simple idea to Bayesian neural networks. The second idea is to perform incremental model selection for every new task. For neural networks, this is done by evolving the structure as newer tasks are encountered (Golkar et al., 2019; Li et al., 2019). Structural learning is a very sensible direction in continual learning, as a new task may require a different network structure than old unrelated tasks, and even if the tasks are highly related, their lower-layer representations can be very different. Another advantage of structural learning is that, while retaining a shared set of parameters (which can be used to model task relationships), it also allows task-specific parameters that can increase the performance of the new task while avoiding the catastrophic forgetting caused by forced sharing of parameters. The third idea is to invoke a form of 'replay', whereby selected or generated samples representative of previous tasks are used to retrain the model after new tasks are learned.
In this work, we introduce a novel Bayesian nonparametric approach to continual learning that seeks to incorporate the ability of structure learning into the simple yet effective framework of online Bayes. In particular, our approach models each hidden layer of the neural network using the Indian Buffet Process (Griffiths & Ghahramani, 2011) prior, which enables us to learn the network structure as new tasks arrive continually. We can leverage the fact that any particular task t uses a sparse subset of the connections of a neural network $N_t$, and different related tasks share different (albeit possibly overlapping) subsets. Thus, in the setting of continual learning, it is more effective if the network can accommodate changes in its connections dynamically to adapt to a newly arriving task. Moreover, our model performs automatic model selection, where each task can select the number of nodes in each hidden layer. All this is done under the principled framework of variational Bayes and a nonparametric Bayesian modeling paradigm. Another appealing aspect of our approach is that, in contrast to some of the recent state-of-the-art continual learning models (Yoon et al., 2018; Li et al., 2019) that are specific to supervised learning problems, our approach applies both to deep discriminative networks (supervised learning), where each task can be modeled by a Bayesian neural network (Neal, 2012; Blundell et al., 2015), and to deep generative networks (unsupervised learning), where each task can be modeled by a variational autoencoder (VAE) (Kingma & Welling, 2013). 2 PRELIMINARIES. Bayesian neural networks (Neal, 2012) are discriminative models where the goal is to model the relationship between inputs and outputs via a deep neural network with parameters $w$. The network parameters are assumed to have a prior $p(w)$, and the goal is to infer the posterior given the observed data $\mathcal{D}$. Exact posterior inference is intractable in such models. One approximate inference scheme is Bayes-by-Backprop (Blundell et al., 2015), which uses a mean-field variational posterior $q(w)$ over the weights. Reparameterized samples from this posterior are then used to approximate the lower bound via Monte Carlo sampling. Our goal in the continual learning setting is to learn such Bayesian neural networks for a sequence of tasks by inferring the posterior $q_t(w)$ for each task $t$, without forgetting the information contained in the posteriors of previous tasks. Variational autoencoders (Kingma & Welling, 2013) are generative models where the goal is to model a set of inputs $\{x_n\}_{n=1}^N$ in terms of stochastic latent variables $\{z_n\}_{n=1}^N$. The mapping from each $z_n$ to $x_n$ is defined by a generator/decoder model (a deep neural network with parameters $\theta$) and the reverse mapping is defined by a recognition/encoder model (another deep neural network with parameters $\phi$). Inference in VAEs is done by maximizing the variational lower bound on the marginal likelihood. It is customary to do point estimation for the decoder parameters $\theta$ and posterior inference for the encoder parameters $\phi$. However, in the continual learning setting, it is more desirable to infer the full posterior $q_t(w)$ for each task's encoder and decoder parameters $w = \{\theta, \phi\}$, while not forgetting the information about the previous tasks as more and more tasks are observed. Our proposed continual learning framework addresses this aspect as well.
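To make the preliminaries concrete, below is a minimal sketch of Bayes-by-Backprop-style inference with a mean-field Gaussian posterior, written so that the posterior learned on one task can serve as the prior for the next (the online-Bayes recipe used throughout this paper). The data iterator and log-likelihood function are hypothetical placeholders; this illustrates the general scheme, not the paper's exact training code.

```python
import torch

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    # KL(q || p) between diagonal Gaussians, summed over all weights.
    return 0.5 * torch.sum(
        logvar_p - logvar_q
        + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp() - 1.0)

def fit_task(batches, mu, logvar, prior_mu, prior_logvar, log_lik, lr=1e-3):
    # Maximize the ELBO E_q[log p(D_t | w)] - KL(q || prior) by stochastic
    # gradients; mu and logvar are leaf tensors with requires_grad=True.
    opt = torch.optim.Adam([mu, logvar], lr=lr)
    for x, y in batches:                       # hypothetical task iterator
        w = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        loss = -log_lik(w, x, y) + gaussian_kl(mu, logvar,
                                               prior_mu, prior_logvar)
        opt.zero_grad(); loss.backward(); opt.step()
    # The learned posterior becomes the prior for the next task.
    return mu.detach().clone(), logvar.detach().clone()
```

Chaining fit_task across tasks, with each returned posterior passed in as the next prior, is exactly the online-Bayes pattern that VCL, described next, instantiates.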
Variational Continual Learning (VCL) (Nguyen et al., 2018) is a recently proposed approach to continual learning that combats catastrophic forgetting in neural networks by modeling the network parameters $w$ in a Bayesian fashion and by setting $p_t(w) = q_{t-1}(w)$; that is, a task reuses the previous task's posterior as its prior. VCL solves the following KL divergence minimization problem:

$$q_t(w) = \arg\min_{q \in \mathcal{Q}} \mathrm{KL}\left(q(w) \,\middle\|\, \frac{1}{Z_t}\, q_{t-1}(w)\, p(\mathcal{D}_t \mid w)\right). \quad (1)$$

While offering a principled approach that is applicable to both supervised (discriminative) and unsupervised (generative) learning settings, VCL assumes that the model structure is held fixed throughout, which can be limiting in continual learning, where the number of tasks and their complexity are usually unknown beforehand. This necessitates inferring the model structure adaptively, so that it can potentially change with each incoming task. Another limitation of VCL is that its unsupervised version, based on performing CL on VAEs, only does so for the decoder model's parameters (shared by all tasks). It uses completely task-specific encoders and, consequently, is unable to transfer information across tasks in the encoder model. Our approach addresses both of these limitations in a principled manner. 3 BAYESIAN STRUCTURE ADAPTATION FOR CONTINUAL LEARNING. In this section, we present a Bayesian model for continual learning that can potentially grow and adapt its structure as more and more tasks arrive. Our model extends seamlessly to unsupervised learning as well. For brevity of exposition, in this section we mainly focus on the supervised setting, where a task has labeled data with known task identities $t$ (task-incremental). We then briefly discuss the unsupervised extension (based on VAEs) in Sec. 3.3, where task boundaries may or may not (task-agnostic) be available, and provide further details in the appendix (Sec. I). Our approach uses a basic primitive that models each hidden layer using a nonparametric Bayesian prior (Fig. 1a shows an illustration and Fig. 1b a schematic diagram). We can use these hidden layers to model feedforward connections in Bayesian neural networks or VAE models. For simplicity, we assume a single hidden layer: the first task activates as many hidden nodes as required and learns the posterior over the subset of edge weights incident on each active node. Each subsequent task reuses some of the edges learned by the previous task and uses the posterior over the weights learned by the previous task as the prior. Additionally, it may activate some new nodes and learn the posterior over some of their incident edges. It thus learns the posterior over a subset of weights that may overlap with the weights learned by previous tasks. While making predictions, a task uses only the connections it has learned. More slack for later tasks in terms of model size (allowing them to create new nodes) indirectly lets a task learn better without deviating too much from the prior (in this case, the posterior of the previous tasks) and further reduces the chances of catastrophic forgetting (Kirkpatrick et al., 2017). 3.1 GENERATIVE STORY. Omitting the task id $t$ for brevity, consider modeling the $t$th task using a neural network with $L$ hidden layers. We model the weights in layer $l$ as $W^l = B^l \odot V^l$, a point-wise multiplication of a real-valued matrix $V^l$ (with a Gaussian prior $\mathcal{N}(0, \sigma_0^2)$ on each entry) and a task-specific binary matrix $B^l$.
This ensures sparse connection weights between the layers. Moreover, we model $B^l \sim \mathrm{IBP}(\alpha)$ using the Indian Buffet Process (IBP) prior (Griffiths & Ghahramani, 2011), where the hyperparameter $\alpha$ controls the number of nonzero columns in $B^l$ and its sparsity. The IBP prior thus enables learning the size of $B^l$ (and consequently of $V^l$) from data. As a result, the number of nodes in the hidden layer is learned adaptively from the data. The output layer weights are denoted $W_{out}$, with each weight having a Gaussian prior $\mathcal{N}(0, \sigma_0^2)$. The outputs are

$$y_n \sim \mathrm{Lik}(W_{out}\, \phi_{NN}(x_n)), \quad n = 1, \ldots, N \quad (2)$$

Here $\phi_{NN}$ is the function computed (using parameter samples) up to the last hidden layer of the network thus formed, and $\mathrm{Lik}$ denotes the likelihood model for the outputs. Similar priors on the network weights have been used in other recent works to learn sparse deep neural networks (Panousis et al., 2019; Xu et al., 2019). However, these works assume a single task to be learned. In contrast, our focus here is to leverage such priors in the continual learning setting, where we need to learn a sequence of tasks while avoiding the problem of catastrophic forgetting. Henceforth, we further suppress the superscript denoting the layer number for simplicity; the discussion holds identically for all hidden layers. When adapting to a new task, the posterior of $V$ learned from previous tasks is used as the prior. A new $B$ is learned afresh, to ensure that a task only learns the subset of weights relevant to it. Stick-Breaking Construction. As described before, to adaptively infer the number of nodes in each hidden layer, we use the IBP prior (Griffiths & Ghahramani, 2011), whose truncated stick-breaking process construction (Doshi et al., 2009) for each entry of $B$ is as follows:

$$\nu_k \sim \mathrm{Beta}(\alpha, 1), \quad \pi_k = \prod_{i=1}^{k} \nu_i, \quad B_{d,k} \sim \mathrm{Bernoulli}(\pi_k) \quad (3)$$

for $d \in \{1, \ldots, D\}$, where $D$ denotes the number of input nodes for this hidden layer, and $k \in \{1, 2, \ldots, K\}$, where $K$ is the truncation level and $\alpha$ controls the effective value of $K$, i.e., the number of active hidden nodes. Note that the prior probability $\pi_k$ of the weights incident on hidden node $k$ being nonzero decreases monotonically with $k$, until, say, $K$ nodes, after which no further nodes have any incoming edges with nonzero weights from the previous layer, which amounts to them being turned off in the structure. Moreover, due to the cumulative-product-based construction of the $\pi_k$'s, an implicit ordering is imposed on the nodes being used. This ordering is preserved across tasks, and the allocation of nodes to a task follows it, facilitating the reuse of weights. The truncated stick-breaking approximation is a practically plausible and intuitive solution for continual learning, since a fundamental tenet of continual learning is that the model complexity should not increase in an unbounded manner as more tasks are encountered. Suppose we fix a budget on the maximum allowed size of the network (the number of hidden nodes in a layer) after it has seen, say, $T$ tasks; this budget exactly corresponds to the truncation level for each layer. Then, for each task, nodes are allocated conservatively from this total budget, in a fixed order, conveniently controlled by the $\alpha$ hyperparameter. In the appendix (Sec. D), we also discuss a dynamic expansion scheme that avoids specifying a truncation level (and provide experimental results).
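For intuition, the truncated stick-breaking prior of Eq. 3 is straightforward to simulate; the sketch below draws a mask $B$ and forms a sparse weight matrix $W = B \odot V$. This is prior sampling only: in the actual model, the Beta and Bernoulli variables are inferred variationally rather than sampled.

```python
import torch

def sample_ibp_mask(D, K, alpha):
    # Truncated stick-breaking of Eq. 3: pi_k is a cumulative product of
    # Beta(alpha, 1) sticks, so columns (hidden nodes) with larger k are
    # monotonically less likely to receive nonzero incoming weights.
    nu = torch.distributions.Beta(alpha, 1.0).sample((K,))
    pi = torch.cumprod(nu, dim=0)              # pi_k = prod_{i<=k} nu_i
    return torch.bernoulli(pi.expand(D, K))    # B[d, k] ~ Bernoulli(pi_k)

D, K = 784, 200                       # truncation level K caps layer width
B = sample_ibp_mask(D, K, alpha=20.0)
V = 0.1 * torch.randn(D, K)           # Gaussian prior N(0, sigma_0^2) on V
W = B * V                             # W = B ⊙ V, a sparse layer weight
print("active hidden nodes:", int((B.sum(0) > 0).sum()))
```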
The paper proposes a continual learning framework based on a Bayesian nonparametric approach. Each hidden layer is modeled using an Indian Buffet Process prior. The inference uses a structured mean-field approximation with a Gaussian family for the weights and a Beta-Bernoulli family for the task masks. Variational inference is done with Bayes-by-backprop on a common ELBO setup. The experiments show less accuracy degradation as the number of tasks increases on five datasets for the discriminative problem, and for generation the method learns one digit or character at a time on the MNIST and notMNIST datasets.
SP:70bed0f6f729c03edcb03678fca53e1d82fc06ab
Into the Wild with AudioScope: Unsupervised Audio-Visual Separation of On-Screen Sounds
1 INTRODUCTION. Audio-visual machine perception has been undergoing a renaissance in recent years, driven by advances in large-scale deep learning. A motivating observation is the interplay in human perception between auditory and visual perception. We understand the world by parsing it into the objects that are the sources of the audio and visual signals we can perceive. However, the sounds and sights produced by these sources have rather different and complementary properties. Objects may make sounds intermittently, whereas their visual appearance is typically persistent. The visual percepts of different objects tend to be spatially distinct, whereas sounds from different sources can blend together and overlap in a single signal, making it difficult to separately perceive the individual sources. This suggests that there is something to be gained by aligning our audio and visual percepts: if we can identify which audio signals correspond to which visual objects, we can selectively attend to an object's audio signal by visually selecting the object. This intuition motivates using vision as an interface for audio processing, where a primary problem is to selectively preserve desired sounds while removing unwanted sounds. In some tasks, such as speech enhancement, the desired sounds can be selected by their class: speech versus non-speech in this case. In an open-domain setting, the selection of desired sounds is at the user's discretion. This presents a user-interface problem: it is challenging to select sources in an efficient way using audio. This problem can be greatly simplified in the audio-visual case if we use video selection as a proxy for audio selection, for example, by selecting sounds from on-screen objects and removing off-screen sounds. Recent work has used video for selection and separation of speech (Ephrat et al., 2018; Afouras et al., 2020) or music (Zhao et al., 2018; Gao & Grauman, 2019; Gan et al., 2020). However, systems that address this for arbitrary sounds (Gao et al., 2018; Rouditchenko et al., 2019; Owens & Efros, 2018) may be useful in more general cases, such as video recording, where the sounds of interest cannot be defined in advance. The problem of associating arbitrary sounds with their visual objects is challenging in an open domain. Several complications arise that have not been fully addressed by previous work. First, a large amount of training data is needed in order to cover the space of possible sounds. Supervised methods require labeled examples where isolated on-screen sounds are known. The resulting data collection and labeling burden limits the amount and quality of available data. To overcome this, we propose an unsupervised approach using mixture invariant training (MixIT) (Wisdom et al., 2020) that can learn to separate individual sources from in-the-wild videos, where the on-screen and off-screen sounds are unknown. Another problem is that different audio sources may correspond to a dynamic set of on-screen objects in arbitrary spatial locations. We accommodate this by using attention mechanisms that align each hypothesized audio source with the different spatial and temporal positions of the corresponding objects in the video. Finally, we need to determine which audio sources appear on screen, in the absence of strong labels.
This is handled using a weakly trained classifier for sources based on audio and video embeddings produced by the attention mechanism . 2 RELATION TO PREVIOUS WORK . Separation of arbitrary sounds from a mixture , known as “ universal sound separation , ” was recently shown to be possible with a fixed number of sounds ( Kavalerov et al. , 2019 ) . Conditional information about which sound classes are present can improve separation performance ( Tzinis et al. , 2020 ) . The FUSS dataset ( Wisdom et al. , 2021 ) expanded the scope to separate a variable number of sounds , in order to handle more realistic data . A framework has also been proposed where specific sound classes can be extracted from input sound mixtures ( Ochiai et al. , 2020 ) . These approaches require curated data containing isolated sounds for training , which prevents their application to truly open-domain data and introduces difficulties such as annotation cost , accurate simulation of realistic acoustic mixtures , and biased datasets . To avoid these issues , a number of recent works have proposed replacing the strong supervision of reference source signals with weak supervision labels from related modalities such as sound class ( Pishdadian et al. , 2020 ; Kong et al. , 2020 ) , visual input ( Gao & Grauman , 2019 ) , or spatial location from multi-microphone recordings ( Tzinis et al. , 2019 ; Seetharaman et al. , 2019 ; Drude et al. , 2019 ) . Most recently , Wisdom et al . ( 2020 ) proposed mixture invariant training ( MixIT ) , which provides a purely unsupervised source separation framework for a variable number of latent sources . A variety of research has laid the groundwork towards solving audio-visual on-screen source separation ( Michelsanti et al. , 2020 ) . Generally , the two main approaches are to use audio-visual localization ( Hershey & Movellan , 2000 ; Senocak et al. , 2018 ; Wu et al. , 2019 ; Afouras et al. , 2020 ) , or object detection networks , either supervised ( Ephrat et al. , 2018 ; Gao & Grauman , 2019 ; Gan et al. , 2020 ) or unsupervised ( Zhao et al. , 2018 ) , to predict visual conditioning information . However , these works only consider restricted domains such as speech ( Hershey & Casey , 2002 ; Ephrat et al. , 2018 ; Afouras et al. , 2020 ) or music ( Zhao et al. , 2018 ; Gao & Grauman , 2019 ; Gan et al. , 2020 ) . Gao et al . ( 2018 ) reported results with videos from a wide domain , but relied on supervised visual object detectors , which precludes learning about the appearance of sound sources outside of a closed set of classes defined by the detectors . Rouditchenko et al . ( 2019 ) proposed a system for a wide domain of sounds , but required sound class labels as well as isolated sounds from these classes . Our approach avoids the supervision of class labels and isolated sources in order to handle unknown visual and sound classes occurring in multi-source data . Towards learning directly from a less restrictive open domain of in-the-wild video data , Tian et al . ( 2018 ) learned to localize audio-visual events in unconstrained videos and presented an ad hoc dataset . Korbar et al . ( 2018 ) pretrained models to discern temporal synchronization of audio-video pairs , and demonstrated promising results on action recognition and audio classification . Arandjelovic & Zisserman ( 2017 ) took a similar approach by classifying audio-visual correspondences of pairs of one video frame and one second of audio . Hu et al . 
(2020) proposed a curriculum learning approach where the model gradually learns to separate harder examples. Closest to our work is the approach of Owens & Efros (2018), a self-supervised audio-visual on-screen speech separation system based on temporal audio-visual alignment. However, Owens & Efros (2018) assume training videos containing only on-screen sources, and it is unclear how to adapt their method to the case where training videos include off-screen sources. Our approach differs significantly from these prior works in that we do not restrict our domain to musical instruments or human speakers, and we train and test with real in-the-wild videos containing an arbitrary number of objects with no object-class restrictions. Our proposed framework can deal with noisy labels (e.g., videos with no on-screen sounds), operate on a completely open domain of in-the-wild videos, and effectively isolate sounds coming from on-screen objects. We address the following task, which extends the formulation of the on-screen speech separation problem (Owens & Efros, 2018). Given an input video, the goal is to separate all sources that constitute the input mixture, and then estimate an audio-visual correspondence score for each separated source. These probability scores should be high for separated sources which are apparent on-screen, and low otherwise. The separated audio sources, weighted by their estimated on-screen probabilities, can be summed together to reconstruct the on-screen mixture. We emphasize that our approach is more generally applicable than previous proposals, because real-world videos may contain an unknown number of both on-screen and off-screen sources belonging to an undefined ontology of classes. We make the following contributions in this paper: 1. We provide the first solution for training an unsupervised, open-domain, audio-visual on-screen separation system from scratch on real in-the-wild video data, with no requirement on modules such as object detectors that require supervised data. 2. We develop a new dataset for the on-screen audio-visual separation task, drawn from 2,500 hours of unlabeled videos from YFCC100m, and 55 hours of videos that are human-labeled for the presence of on-screen and off-screen sounds. 3 MODEL ARCHITECTURE. The overall architecture of AudioScope is built from the following blocks: an image embedding network, an audio separation network, an audio embedding network, an audio-visual attention mechanism, and an on-screen classifier (see Figure 2). The separation and embedding networks are based on prior work and are described in the following subsections. However, the main focus of this work is the overall architecture, as well as the training framework and loss functions. The video is analyzed with the image embedding network, which generates local embeddings for each of 64 locations within each frame, as well as an embedding of the whole frame. These embeddings are used both as a conditioning input to the audio separation network and as an input for the classification of on-screen sounds. The audio separation network takes the mixed input waveform as input and generates a fixed number of output waveforms, a variable number of which are non-zero depending on the estimated number of sources in the mixture. Conditioning on the video enables the separation to take advantage of cues about the sources present when performing separation.
The audio embedding network is applied to each estimated source to obtain one embedding per frame for each source. These audio embeddings are then pooled over time and used in the audio-visual spatio-temporal attention network to retrieve, for each source, a representation of the visual activity that best matches the audio, similar to the associative maps extracted from internal network representations by Harwath et al. (2018). The architecture is designed to address the problem of unsupervised learning on in-the-wild, open-domain data. First, because the target training videos can contain both on-screen and off-screen sounds, training a system to directly produce the audio of the target video would encourage the inclusion of off-screen sounds as well as on-screen ones.¹ Our proposed multi-source separation network instead produces latent source estimates using an unsupervised MixIT objective, which has been shown to perform well at general sound separation (Wisdom et al., 2020). By decoupling separation from on-screen classification, our architecture facilitates the use of robust objectives that allow some of the sources to be considered off-screen, even if they appear in the soundtrack of the target videos. The audio-visual attention architecture is motivated by the alignment problem between audio and video: sound source objects in video may be localized, may move over time, and may be present before and after the corresponding audio activity. Because of the open domain, we cannot rely on a pre-defined set of object detectors to anchor the video representations of on-screen sources, as is done in some prior works (Ephrat et al., 2018; Gao & Grauman, 2019; Gan et al., 2020). Instead, we propose attention to find the video representations that correspond to a source in a more flexible way. The proposed strategy of temporally pooling the audio embeddings before using them in the spatio-temporal attention allows the network to derive embeddings that represent the active segments of the source audio and ignore the ambiguous silent regions. In the present model, video is analyzed at a low frame rate, so the audio-visual correspondence is likely based on relatively static properties of the objects rather than the synchrony of their motion with the audio. In this case, a single time-invariant representation of the audio may be sufficient as a proof of concept. However, in future work, with higher video frame rates, it may be worthwhile to consider using attention to align sequences of audio and video embeddings in order to detect synchrony in their activity patterns. The on-screen classifier operates on an audio embedding for one estimated source, as well as the video embedding retrieved by the spatio-temporal attention mechanism, using a dense network. This presumably allows detection of the congruence between the embeddings. To provide additional context for this decision, a global video embedding, produced by temporal pooling, is provided as an additional input. Many alternative choices are possible for this classifier design, which we leave for future work, such as using a more complex classification architecture or providing additional audio embeddings as input. ¹We train such a system in Appendix A.3.5, and find that it is not an effective approach.
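The MixIT objective that trains the separation module can be sketched compactly: the separator is applied to a sum of two reference mixtures, its M output sources are assigned to the two references by exhaustive search over binary partitions, and the best remix loss is kept. The plain negative-SNR loss below is a simplified stand-in for the thresholded SNR loss used in practice, and the random tensors stand in for real audio and separator outputs.

```python
import itertools
import torch

def neg_snr(ref, est, eps=1e-8):
    # Negative signal-to-noise ratio between a reference mixture and a remix.
    err = ref - est
    return -10.0 * torch.log10((ref ** 2).sum() / ((err ** 2).sum() + eps))

def mixit_loss(x1, x2, sources):
    # sources: (M, T) outputs of the separator applied to x1 + x2.
    # Exhaustive search over 2^M assignments of sources to the two
    # references; fine for the small M used in practice.
    M = sources.shape[0]
    best = None
    for bits in itertools.product([0.0, 1.0], repeat=M):
        a = torch.tensor(bits).unsqueeze(1)          # (M, 1) assignment
        loss = neg_snr(x1, ((1 - a) * sources).sum(0)) \
             + neg_snr(x2, (a * sources).sum(0))
        if best is None or loss < best:
            best = loss
    return best  # gradients flow through the best assignment only

x1, x2 = torch.randn(16000), torch.randn(16000)          # two 1 s mixtures
est = torch.randn(4, 16000, requires_grad=True)          # stand-in separator output
mixit_loss(x1, x2, est).backward()
```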
This paper describes a system for separating "on-screen" sounds from "off-screen" sounds in an audio-visual task, meaning sounds that are associated with objects that are visible in a video versus not. It is trained to do this using mixture invariant training to separate synthetic mixtures of mixtures. It is evaluated on a subset of YFCC100m that is annotated by human raters as to whether the clips contain on-screen, off-screen, or both types of sounds, with the predictions of a previously described model (Jansen et al., 2020) helping to reduce the number of clips with only off-screen sounds. The predictions are evaluated in terms of how well they can estimate the true on-screen sound (in terms of SI-SNR) and how well they can reject off-screen sound (in terms of a metric called the off-screen suppression ratio, OSR). The results show that the system can successfully distinguish between on- and off-screen sound, but that different training regimens lead to different tradeoffs between these two metrics. The system with the best SI-SNR (8.0 dB) is trained using just data from the previous model along with the mixture invariant training criterion.
SP:d27e98774183ece8d82b87f1e7067bf2a28a4fca
A Simple Sparse Denoising Layer for Robust Deep Learning
1 INTRODUCTION. Deep neural networks have achieved great success in many applications, including computer vision, reinforcement learning (RL), and natural language processing. However, vanilla deep models are not robust to noise perturbations of the input: even a small perturbation of the input data can dramatically harm prediction performance (Goodfellow et al., 2015). To address this issue, there are three main strategies: data augmentation-based learning methods (Zheng et al., 2016; Ratner et al., 2017; Madry et al., 2018; Cubuk et al., 2020), loss functions/regularization techniques (Elsayed et al., 2018; Zhang et al., 2019), and the design of network architectures robust against noisy input perturbations. Su et al. (2018) empirically investigated 18 deep classification models and found that model architecture is a more critical factor for robustness than model size. Most recently, Guo et al. (2020) employed a neural architecture search (NAS) method to investigate robust architectures. However, NAS-based methods are still very computationally expensive. Furthermore, their resultant models cannot easily be adopted as a plug-in for other vanilla deep models. A handy robust plug-in for backbone models thus remains highly desirable. In this work, we take an initial step toward designing a simple robust layer as a lightweight plug-in for vanilla deep models. To achieve this goal, we first propose a novel fast sparse coding and dictionary learning algorithm. Our algorithm has a closed-form approximation for the sparse coding phase, which is cheap to compute compared with the iterative methods in the literature. The closed-form update is handy for situations that need fast computation, especially in deep learning. Based on this, we design a very simple sparse denoising layer (SDL) for deep models. Our SDL is very flexible and enables end-to-end training. It can be used as a lightweight plug-in for many modern deep architectures (e.g., ResNet and DenseNet for classification, and deep PPO models for RL). Our contributions are summarized as follows: • We propose simple sparse coding and dictionary learning algorithms for both the k-sparse constrained sparse coding problem and the l0-norm regularized problem. Our algorithms have a simple approximate form for the sparse coding phase. • We introduce a simple sparse denoising layer (SDL) based on our handy update. Our SDL involves only simple operations, making it a fast plug-in layer for end-to-end training. • Extensive experiments on both classification tasks and reinforcement learning tasks show the effectiveness of our SDL. 2 RELATED WORKS. Sparse Coding and Dictionary Learning: Sparse coding and dictionary learning are widely studied in computer vision and image processing. One related popular method is K-SVD (Elad & Aharon, 2006; Rubinstein et al., 2008), which jointly learns an over-complete dictionary and the sparse representations by minimizing an l0-norm regularized reconstruction problem. Specifically, K-SVD alternates between a sparse coding phase and a dictionary updating phase, both based on heuristic greedy methods. Despite its good performance, K-SVD is very computationally demanding. Moreover, as pointed out by Bao et al.
(2013), both the sparse coding phase and the dictionary updating phase of K-SVD use greedy approaches that lack rigorous theoretical guarantees on optimality and convergence. Bao et al. (2013) proposed to learn an orthogonal dictionary instead of an over-complete one. The idea is to concatenate the free parameters with predefined filters to form an orthogonal dictionary. This trick reduces the time complexity compared with K-SVD. However, their algorithm relies on the predefined filters. Furthermore, the alternating descent method relies heavily on SVD, which is not easy to extend to deep models. In contrast, our method learns a structured over-complete dictionary, which has a simple form as a layer for deep learning. Recently, some works (Venkatakrishnan et al., 2013) employed deep neural networks to approximate the alternating direction method of multipliers (ADMM) or other proximal algorithms for image denoising tasks. In (Wei et al., 2020), reinforcement learning is used to learn the hyperparameters of these deep iterative models. However, this kind of method requires training a complex deep model itself. Thus, these methods are computationally expensive and too heavy or inflexible to serve as a plug-in layer for backbone models in tasks other than image denoising, e.g., reinforcement learning and multi-class classification. A comparison of the number of parameters of SDL, DnCNN (Zhang et al., 2017) and PnP (Wei et al., 2020) is shown in Table 1. SDL has far fewer parameters and a simpler structure than DnCNN and PnP, and it can serve as a lightweight plug-in for other tasks, e.g., RL. Robust Deep Learning: In the literature on robust deep learning, several robust losses have been studied. To achieve better generalization ability, Elsayed et al. (2018) proposed a loss function that imposes a large margin on any chosen layers of a deep network. Barron (2019) proposed a general loss with a shape parameter that covers several robust losses as special cases. For problems with noisy input perturbations, several data augmentation-based algorithms and regularization techniques have been proposed (Zheng et al., 2016; Ratner et al., 2017; Cubuk et al., 2020; Elsayed et al., 2018; Zhang et al., 2019). However, the network architecture itself remains less explored as a means of addressing robustness to input perturbations. Guo et al. (2020) employed NAS methods to search for robust architectures. However, the search-based method is very computationally expensive, and the resultant architectures cannot easily be used as a plug-in for other popular networks. In contrast, our SDL is based on a closed-form sparse coding update, which can be used as a handy plug-in for many backbone models. 3 FAST SPARSE CODING AND DICTIONARY LEARNING. In this section, we present our fast sparse coding and dictionary learning algorithms for the k-sparse problem and the l0-norm regularized problem in Section 3.1 and Section 3.2, respectively. Both algorithms belong to the alternating descent optimization framework. 3.1 K-SPARSE CODING. We first introduce the optimization problem for sparse coding with a k-sparse constraint. Mathematically, we aim to optimize the following objective:

$$\min_{Y, D} \; \|X - DY\|_F^2 \quad \text{subject to} \quad \|y_i\|_0 \le k, \;\; \forall i \in \{1, \ldots, N\}, \quad \mu(D) \le \lambda, \quad \|d_j\|_2 = 1, \;\; \forall j \in \{1, \ldots, M\}, \quad (1)$$

where $D \in \mathbb{R}^{d \times M}$ is the dictionary and $d_j$ denotes the $j$th column of the matrix $D$.
$y_i$ denotes the $i$th column of the matrix $Y \in \mathbb{R}^{M \times N}$, and $\mu(\cdot)$ denotes the mutual coherence, defined as

$$\mu(D) = \max_{i \ne j} \frac{|d_i^\top d_j|}{\|d_i\|_2 \|d_j\|_2}. \quad (2)$$

The optimization problem (1) is discrete and non-convex, which makes it very difficult to optimize. To alleviate this problem, we employ a structured dictionary

$$D = R^\top B. \quad (3)$$

We require that $R^\top R = R R^\top = I_d$ and $B B^\top = I_d$, and that each column vector of the matrix $B$ has a constant $l_2$-norm, i.e., $\|b_i\|_2 = c$. The benefit of the structured dictionary is that it enables a fast update algorithm with a closed-form approximation for the sparse coding phase. 3.1.1 CONSTRUCTION OF THE STRUCTURED MATRIX B. We now show how to design a structured matrix $B$ that satisfies these requirements. We construct $B$ by concatenating the real and imaginary parts of rows of a discrete Fourier matrix. Proofs of the following results on the properties of $B$ can be found in the Appendix. Without loss of generality, we assume that $d = 2m$ and $M = 2n$. Let $F \in \mathbb{C}^{n \times n}$ be an $n \times n$ discrete Fourier matrix, whose $(k, j)$th entry is $F_{k,j} = e^{\frac{2\pi i k j}{n}}$ with $i = \sqrt{-1}$. Let $\Lambda = \{k_1, k_2, \ldots, k_m\} \subset \{1, \ldots, n-1\}$ be a subset of indexes. The structured matrix $B$ is constructed as in Eq. (4):

$$B = \frac{1}{\sqrt{n}} \begin{bmatrix} \mathrm{Re}\, F_\Lambda & -\mathrm{Im}\, F_\Lambda \\ \mathrm{Im}\, F_\Lambda & \mathrm{Re}\, F_\Lambda \end{bmatrix} \in \mathbb{R}^{d \times M}, \quad (4)$$

where $\mathrm{Re}$ and $\mathrm{Im}$ denote the real and imaginary parts of a complex number, and $F_\Lambda$ in Eq. (5) is the matrix formed by the $m$ rows of $F$ indexed by $\Lambda$:

$$F_\Lambda = \begin{bmatrix} e^{\frac{2\pi i k_1 \cdot 1}{n}} & \cdots & e^{\frac{2\pi i k_1 \cdot n}{n}} \\ \vdots & \ddots & \vdots \\ e^{\frac{2\pi i k_m \cdot 1}{n}} & \cdots & e^{\frac{2\pi i k_m \cdot n}{n}} \end{bmatrix} \in \mathbb{C}^{m \times n}. \quad (5)$$

Proposition 1. Suppose $d = 2m$ and $M = 2n$. Construct the matrix $B$ as in Eq. (4). Then $BB^\top = I_d$ and $\|b_j\|_2 = \sqrt{\frac{m}{n}}$, $\forall j \in \{1, \ldots, M\}$.

Proposition 1 shows that the structured construction of $B$ satisfies the orthogonality constraint and the constant-norm constraint. One thing remaining is how to construct $B$ to achieve a small mutual coherence. To achieve this goal, we can leverage the coordinate descent method in (Lyu, 2017) to construct the index set $\Lambda$. For a prime number $n$ such that $m$ divides $n-1$, i.e., $m \mid (n-1)$, we can employ a closed-form construction. Let $g$ denote a primitive root modulo $n$. We construct the index set $\Lambda = \{k_1, k_2, \ldots, k_m\}$ as

$$\Lambda = \left\{ g^0, g^{\frac{n-1}{m}}, g^{\frac{2(n-1)}{m}}, \cdots, g^{\frac{(m-1)(n-1)}{m}} \right\} \bmod n. \quad (6)$$

The resulting structured matrix $B$ has a bounded mutual coherence, as shown in Theorem 1.

Theorem 1. Suppose $d = 2m$, $M = 2n$, and $n$ is a prime such that $m \mid (n-1)$. Construct the matrix $B$ as in Eq. (4) with the index set $\Lambda$ as in Eq. (6). Let the mutual coherence be $\mu(B) := \max_{i \ne j} \frac{|b_i^\top b_j|}{\|b_i\|_2 \|b_j\|_2}$. Then $\mu(B) \le \frac{\sqrt{n}}{m}$.

Remark: The bound on the mutual coherence in Theorem 1 is non-trivial when $n < m^2$. For the case $n \ge m^2$, we can use the coordinate descent method in (Lyu, 2017) to minimize the mutual coherence. We now show that the structured dictionary $D = R^\top B$ satisfies the constant-norm constraint and has a bounded mutual coherence. The results are summarized in Corollary 1.

Corollary 1. Let $D = R^\top B$ with $R^\top R = R R^\top = I_d$. Construct the matrix $B$ as in Eq. (4) with the index set $\Lambda$ as in Eq. (6). Then $\mu(D) = \mu(B) \le \frac{\sqrt{n}}{m}$ and $\|d_j\|_2 = \|b_j\|_2 = \sqrt{\frac{m}{n}}$, $\forall j \in \{1, \ldots, M\}$.

Corollary 1 shows that, for any orthogonal matrix $R$, each column vector of the structured dictionary $D$ has a constant $l_2$-norm. Moreover, the dictionary retains a constant mutual coherence $\mu(D) = \mu(B)$. Thus, given a fixed matrix $B$, we only need to learn the matrix $R$ for dictionary learning, without undermining the low mutual coherence property.
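The construction in Eqs. 4-6 is easy to verify numerically. The sketch below instantiates the small example n = 13, m = 4, g = 2 (a primitive root mod 13, with m dividing n - 1): the assertions check Proposition 1, and the printed coherence can be compared against the sqrt(n)/m bound of Theorem 1.

```python
import numpy as np

def structured_B(n=13, m=4, g=2):
    # Eq. 6: index set Lambda from powers of a primitive root g mod n.
    step = (n - 1) // m
    Lam = [pow(g, r * step, n) for r in range(m)]
    # Eq. 5: m rows of the n x n DFT matrix, then Eq. 4: real/imag blocks.
    j = np.arange(1, n + 1)
    F_Lam = np.exp(2j * np.pi * np.outer(Lam, j) / n)      # m x n
    A, C = F_Lam.real, F_Lam.imag
    return np.block([[A, -C], [C, A]]) / np.sqrt(n)        # (2m) x (2n)

B = structured_B()
d = B.shape[0]
assert np.allclose(B @ B.T, np.eye(d))                     # Proposition 1
assert np.allclose(np.linalg.norm(B, axis=0), np.sqrt(4 / 13))
G = np.abs(B.T @ B) / (4 / 13)                             # normalized Gram
mu = np.max(G - np.eye(B.shape[1]))                        # mutual coherence
print(mu, "<=", np.sqrt(13) / 4)                           # Theorem 1 bound
```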
The paper is generally well presented. However, a main issue is that the optimization algorithms for the l0-norm regularized problems (Section 3.1.2 and Section 3.2) are not correctly presented. Specifically, in the algorithm development to solve the "Fix $\boldsymbol{R}$, optimize $\boldsymbol{Y}$" subproblem, it overlooks the coupling/interaction between the variables $y_1, y_2, \dots,y_M$ and mistakenly obtains a closed-form solution. See Comment 1 for details.
SP:958f2aacb0790ffe7399fd918c023c7e4e4c314c
Additive Poisson Process: Learning Intensity of Higher-Order Interaction in Stochastic Processes
We present the Additive Poisson Process (APP), a novel framework that can model the higher-order interaction effects of the intensity functions in point processes using lower-dimensional projections. Our model combines techniques from information geometry to model higher-order interactions on a statistical manifold and from generalized additive models to use lower-dimensional projections to overcome the effects of the curse of dimensionality. Our approach solves a convex optimization problem by minimizing the KL divergence from a sample distribution in lower-dimensional projections to the distribution modeled by an intensity function in the point process. Our empirical results show that our model is able to use samples observed in the lower-dimensional space to estimate the higher-order intensity function with extremely sparse observations. 1 INTRODUCTION. Consider two point processes whose event arrival times are correlated. For a given time interval, what is the probability of observing an event from both processes? Can we learn the joint intensity function by using just the observations from each individual process? Our proposed model, the Additive Poisson Process (APP), provides a novel solution to this problem. The Poisson process is a counting process used in a wide range of disciplines, modeling time-space sequence data in areas such as transportation (Zhou et al., 2018), finance (Ilalan, 2016), ecology (Thompson, 1955), and violent crime (Taddy, 2010), by learning an intensity function for the arrival times of a single system. For a given time interval, the intensity function represents the probability of a point being excited at that time. Despite recent advances in the modeling of Poisson processes and their wide applicability, the majority of point process models do not consider the correlation between two or more point processes. Our proposed approach learns the joint intensity function of the point process, where a joint event is defined as the simultaneous occurrence of two events. For example, in a spatio-temporal problem, we want to learn the intensity function for a taxi picking up customers at a given time and location. For this problem, each point is multi-dimensional, that is, $(x_i, y_i, t_i)_{i=1}^N$, where the pair $x$ and $y$ represents the two spatial dimensions and $t$ represents the time dimension. For any given location or time, we can expect only very few pick-up events to occur, making it difficult for any model to learn the low-valued intensity function. Previous approaches such as kernel density estimation (KDE) (Rosenblatt, 1956) are able to learn the joint intensity function. However, KDE suffers from the curse of dimensionality, which means that KDE requires a large sample size or a high-valued intensity function to build an accurate model. In addition, the complexity of the model expands exponentially with the number of dimensions, which makes it infeasible to compute. Bayesian approaches, such as using a mixture of beta distributions with a Dirichlet prior (Kottas, 2006) or a Reproducing Kernel Hilbert Space (RKHS) formulation (Flaxman et al., 2017), have been proposed to quantify the uncertainty via a prior on the intensity function. However, these approaches are often non-convex, making it difficult to obtain the globally optimal solution. In addition, if observations are sparse, it is hard for these approaches to learn a reasonable intensity function.
All of these previous models are unable to efficiently and accurately learn the intensity of the interaction between point processes. This is because the intensity of the joint process is often low, leading to sparse samples or, in the extreme case, no direct observations of the simultaneous event at all, making it difficult to learn the intensity function from the joint samples. In this paper, we propose a novel framework to learn the higher-order interaction effects of intensity functions in point processes. Our model combines the techniques introduced by Luo & Sugiyama (2019) to model higher-order interactions between point processes and by Friedman & Stuetzle (1981) in generalized additive models to learn the intensity function using samples in a lower-dimensional space. Our proposed approach decomposes a multi-dimensional point process into lower-dimensional representations. For example, in the x-dimension we have points $(x_i)_{i=1}^N$, in the y-dimension we have points $(y_i)_{i=1}^N$, and in the time dimension we have $(t_i)_{i=1}^N$. The data in these lower-dimensional spaces can be used to improve the estimate of the joint intensity function. This is different from the traditional approach, where only the simultaneous events are used to learn the joint intensity function. We first show the connection between generalized additive models and Poisson processes. We then provide the connection between generalized additive models and the log-linear model (Agresti, 2012), which has a well-established theoretical background in information geometry (Amari, 2016). We draw parallels between the formulation of generalized additive models and the binary log-linear model on a partially ordered set (poset) (Sugiyama et al., 2017). The learning process in our model is formulated as a convex optimization problem that arrives at a unique optimal solution using natural gradient, which minimizes the Kullback-Leibler (KL) divergence from the sample distribution in a lower-dimensional space to the distribution modeled by the learned intensity function. This connection provides remarkable properties to our model: the ability to learn higher-order intensity functions using lower-dimensional projections, thanks to the Kolmogorov-Arnold representation theorem. This property makes our proposed approach advantageous in cases with no observations, missing samples, or low event rates. Our model is flexible because it can capture interactions between processes as a partial order structure in the log-linear model, and the parameters of the model are fully customizable to meet the requirements of the application. Our empirical results show that our model effectively uses samples projected onto a lower-dimensional space to estimate the higher-order intensity function. Our model is also robust to various sample sizes. 2 FORMULATION. In this section, we first introduce the technical background on the Poisson process and its extension to a multi-dimensional Poisson process. We then introduce the Generalized Additive Model (GAM) and its connection to the Poisson process. This is followed by our novel framework, called the Additive Poisson Process (APP), which is our main technical contribution and has a tight link to Poisson processes modeled by GAMs. We show that learning of the APP can be achieved via convex optimization using natural gradient.
The Poisson process is characterized by an intensity function $\lambda : \mathbb{R}^D \to \mathbb{R}$, where we assume $D$ processes. An inhomogeneous Poisson process is a general type of process in which the arrival intensity changes with time. The process with time-changing intensity $\lambda(t)$ is defined as a counting process $N(t)$ with the independent increment property. For all times $t \ge 0$ and changes in time $\delta \ge 0$, the probabilities of the observations are given as $p(N(t+\delta) - N(t) = 0) = 1 - \delta\lambda(t) + o(\delta)$, $p(N(t+\delta) - N(t) = 1) = \delta\lambda(t) + o(\delta)$, and $p(N(t+\delta) - N(t) \ge 2) = o(\delta)$, where $o(\cdot)$ denotes little-o notation (Daley & Vere-Jones, 2007). Consider a realization of timestamps $t_1, t_2, \ldots, t_N$ with $t_i \in [0, T]^D$ from an inhomogeneous (multi-dimensional) Poisson process with intensity $\lambda$, where each $t_i$ is the time of occurrence of the $i$-th event across the $D$ processes and $T$ is the observation duration. The likelihood for the Poisson process (Daley & Vere-Jones, 2007) is given by

$$p(\{t_i\}_{i=1}^N \mid \lambda(t)) = \exp\left(-\int \lambda(t)\, dt\right) \prod_{i=1}^{N} \lambda(t_i), \quad (1)$$

where $t = [t^{(1)}, \ldots, t^{(D)}] \in \mathbb{R}^D$. We define a functional prior on $\lambda(t)$ as

$$\lambda(t) := g(f(t)) = \exp(f(t)). \quad (2)$$

The function $g(\cdot)$ is a positive function that guarantees the non-negativity of the intensity, which we choose to be the exponential function; our objective is to learn the function $f(\cdot)$. The log-likelihood of the multi-dimensional Poisson process with the functional prior is

$$\log p(\{t_i\}_{i=1}^N \mid \lambda(t)) = \sum_{i=1}^{N} f(t_i) - \int \exp(f(t))\, dt. \quad (3)$$

In the following sections, we introduce generalized additive models and propose to model them by the log-linear model to learn $f(t)$ and the normalizing term. 2.1 GENERALIZED ADDITIVE MODEL. In this section, we present the connection between Poisson processes and the Generalized Additive Model (GAM) proposed by Friedman & Stuetzle (1981). GAM projects higher-dimensional features into a lower-dimensional space and applies smoothing functions to build a restricted class of non-parametric regression models. GAM is less affected by the curse of dimensionality compared with smoothing directly in a higher-dimensional space. For a given set of processes $J \subseteq [D] = \{1, \ldots, D\}$, the traditional GAM using one-dimensional projections is defined as $\log \lambda_J(t) = \sum_{j \in J} f_j(t^{(j)}) - \beta_J$ with smoothing functions $f_j$. In this paper, we extend it to include higher-order interactions between features. The $k$-th order GAM is defined as

$$\log \lambda_J(t) = \sum_{j \in J} f_{\{j\}}(t^{(j)}) + \sum_{j_1, j_2 \in J} f_{\{j_1, j_2\}}(t^{(j_1)}, t^{(j_2)}) + \cdots + \sum_{j_1, \ldots, j_k \in J} f_{\{j_1, \ldots, j_k\}}(t^{(j_1)}, \ldots, t^{(j_k)}) - \beta_J = \sum_{I \subseteq J, |I| \le k} f_I(t^{(I)}) - \beta_J, \quad (4)$$

where $t^{(I)} \in \mathbb{R}^{|I|}$ denotes the subvector $(t^{(j)})_{j \in I}$ of $t$ with respect to $I \subseteq [D]$. The function $f_I : \mathbb{R}^{|I|} \to \mathbb{R}$ is a smoothing function fit to the data, and the normalization constant $\beta_J$ for the intensity function is obtained via $\exp(\beta_J) = \int \exp\big(\sum_{I \subseteq J, |I| \le k} f_I(t^{(I)})\big)\, dt$. The definition of the additive model has the same form as Equation (3): in particular, comparing Equations (3) and (4), we can see that the smoothing function $f$ in (3) corresponds to the right-hand side of (4).
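As a quick sanity check on Eqs. 1-3 above, the sketch below evaluates the log-likelihood of a one-dimensional inhomogeneous Poisson process for an arbitrary f, approximating the integral with a grid average; with the constant f = log 3 on [0, 1], it reduces to the familiar homogeneous value N log 3 - 3.

```python
import numpy as np

def pp_log_likelihood(event_times, f, T=1.0, grid=10_000):
    # Eq. 3: sum of f at the events minus the integral of exp(f(t)),
    # here approximated by a Riemann sum on a regular grid over [0, T].
    t = np.linspace(0.0, T, grid)
    integral = np.exp(f(t)).mean() * T
    return np.sum(f(np.asarray(event_times))) - integral

events = [0.10, 0.35, 0.40, 0.82]
f_const = lambda t: np.log(3.0) * np.ones_like(t)
print(pp_log_likelihood(events, f_const))        # ~ 4 log 3 - 3
print(pp_log_likelihood(events, lambda t: np.sin(2 * np.pi * t)))
```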
Learning a continuous function using lower-dimensional projections is possible thanks to the Kolmogorov-Arnold representation theorem, which states the following. Theorem 1 (Kolmogorov-Arnold Representation Theorem (Braun & Griebel, 2009; Kolmogorov, 1957)). Any multivariate continuous function can be represented as a superposition of one-dimensional functions, i.e.,

$$f(t_1, \ldots, t_n) = \sum_{q=1}^{2n+1} f_q\left(\sum_{p=1}^{n} g_{q,p}(t_p)\right).$$

Braun (2009) showed that the GAM is an approximation to the general form in the Kolmogorov-Arnold representation theorem, obtained by replacing the range $q \in \{1, \ldots, 2n+1\}$ with $I \subseteq J$ and the inner function $g_{q,p}$ by the identity if $q = p$ and zero otherwise, yielding $f(t) = \sum_{I \subseteq J} f_I(t^{(I)})$. Interestingly, the canonical form of the additive model in Equation (4) can be rearranged into the same form as the Kolmogorov-Arnold representation theorem. By letting $f(t) = \sum_{I \subseteq J} f_I(t^{(I)}) = g^{-1}(\lambda(t))$ with $g(\cdot) = \exp(\cdot)$, we have

$$\lambda_J(t) = \frac{1}{\exp(\beta_J)} \exp\left(\sum_{I \subseteq J} f_I(t^{(I)})\right) \propto \exp\left(\sum_{I \subseteq J} f_I(t^{(I)})\right), \quad (5)$$

where we assume $f_I(t^{(I)}) = 0$ if $|I| > k$ for the $k$-th order model, and $1/\exp(\beta_J)$ is the normalization term for the intensity function. Based on the Kolmogorov-Arnold representation theorem, generalized additive models are able to learn the intensity of higher-order interactions between point processes by using projections into a lower-dimensional space. The log-likelihood function for a $k$-th order model is obtained by substituting Equation (4) into Equation (1):

$$\log p(\{t_i\}_{i=1}^N \mid \lambda(t)) = \sum_{i=1}^{N} \sum_{I \subseteq J, |I| \le k} f_I(t_i^{(I)}) - \beta',$$

where $\beta'$ is a constant given by $\beta' = \int \lambda(t)\, dt + \sum_{I \subseteq J} \beta_J$. In the following section, we detail a log-linear formulation that efficiently maximizes this log-likelihood.
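The k-th order additive form of Eqs. 4-5 can be evaluated by summing smoothing functions over index subsets, as in the sketch below; the smoothers are made-up examples, and the normalizer exp(beta_J) is omitted since it only rescales the intensity.

```python
import itertools
import numpy as np

def gam_log_intensity(t, smoothers, k=2):
    # Eq. 4 (up to beta_J): sum f_I(t^(I)) over subsets I with |I| <= k.
    # `smoothers` maps a tuple of dimension indices I to a function f_I;
    # subsets without a smoother contribute zero.
    total = 0.0
    for size in range(1, k + 1):
        for I in itertools.combinations(range(len(t)), size):
            if I in smoothers:
                total += smoothers[I](np.asarray([t[d] for d in I]))
    return total

# A second-order model over (x, y, time) with hypothetical smoothers.
smoothers = {
    (0,):   lambda u: -u[0] ** 2,
    (2,):   lambda u: np.sin(2 * np.pi * u[0]),
    (0, 2): lambda u: 0.5 * u[0] * u[1],
}
log_lam = gam_log_intensity([0.3, 0.7, 0.5], smoothers, k=2)
print(np.exp(log_lam))   # unnormalized intensity, Eq. 5 up to 1/exp(beta_J)
```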
The paper under review proposes a new model for multi-dimensional temporal point processes, allowing efficient estimation of high-order interactions. This new model, called the additive Poisson process, relies on a log-linear structure of the intensity function that is motivated by the Kolmogorov-Arnold theorem. This structure is then linked to generalized additive models, a result that is used to devise an efficient estimation procedure with formal guarantees of convergence.
SP:33673a515722e1d8288fd3014e7db507b7250b20
SyncTwin: Transparent Treatment Effect Estimation under Temporal Confounding
1 INTRODUCTION. Estimating the causal individual treatment effect (ITE) on patient outcomes using observational data (observational studies) has become a promising alternative to clinical trials as large-scale electronic health records become increasingly available (Booth & Tannock, 2014). Figure 1 illustrates a common setting in medicine (DiPietro, 2010), which will be the focus of this work: an individual may start the treatment at some observed time (black dashed line) and we want to estimate the ITE on the outcomes over time after the treatment starts (shaded area). The key limitation of observational studies is that treatment allocation is not randomized but typically influenced by prior measurable static covariates (e.g., gender, ethnicity) and temporal covariates (e.g., all historical medical diagnoses and conditions; squares in Figure 1). When the covariates also modulate the patient outcomes, they lead to confounding bias in the direct estimation of the ITE (Psaty et al., 1999). Although a plethora of methods overcome confounding bias by adjusting for static covariates (Yoon et al., 2018; Yao et al., 2018; Louizos et al., 2017; Shalit et al., 2017; Li & Fu, 2017; Alaa & van der Schaar, 2017; Johansson et al., 2016), few existing works take advantage of temporal covariates that are measured irregularly over time (Figure 1) (Bica et al., 2020; Lim et al., 2018; Schulam & Saria, 2017; Roy et al., 2017). Overcoming the confounding bias due to temporal covariates is especially important for medical research, as clinical treatment decisions are often based on the temporal progression of a disease. Transparency is highly desirable in such a challenging problem. Although transparency is a general concept, we focus on two specific aspects (Arrieta et al., 2020). (1) Explainability: the method should estimate the ITE of any given individual (the target individual) based on a small subset of other individuals (contributors) whose amount of contribution can be quantified (e.g., using a weight between 0 and 1). Although the estimates for different target individuals may depend on different contributors, the method can always shortlist the few contributors, allowing the expert to understand the rationale behind each estimate. (2) Trustworthiness: the method should identify the target individuals whose ITE cannot be reliably estimated due to violation of assumptions, lack of data, or other failure modes. Being transparent about what the method cannot do improves its overall trustworthiness, because it guides the experts to use the method only when it is deemed reliable. Inspired by the well-established Synthetic Control method in statistics and econometrics (Abadie et al., 2010; Abadie, 2019), we propose SyncTwin, a transparent ITE estimation method that deals with temporal confounding. Figure 2 A illustrates the schematics of SyncTwin. SyncTwin starts by encoding the irregularly-measured temporal covariates as representation vectors. For each treated target individual, SyncTwin selects and weights a few contributors from the control group based on their representation vectors and a sparsity constraint. SyncTwin proceeds to construct a synthetic twin whose representation vector and outcomes are the weighted averages of the contributors'. Finally, the ITE is estimated as the difference between the target individual's and the synthetic twin's outcomes after treatment.
The difference in their outcomes before treatment indicates the quality of the synthetic twin and whether the model assumptions hold. If the target individual and synthetic twin do not match in pre-treatment outcomes, the estimated ITE should not be considered trustworthy. Transparency of SyncTwin. SyncTwin achieves explainability by selecting only a few contributors for each target individual. It achieves trustworthiness by quantifying the confidence one should place in the estimated ITE via the difference between the target's and the synthetic twin's pre-treatment outcomes.

2 PROBLEM SETTING. We consider a clinical observational study with $N$ individuals indexed by $i \in [N] = \{1, \ldots, N\}$. Let $a_i \in \{0,1\}$ be the treatment indicator, with $a_i = 1$ if $i$ started to receive the treatment at some time and $a_i = 0$ if $i$ never initiated the treatment. We realign the time steps such that all treatments were initiated at time $t = 0$. Let $\mathcal{I}_1 = \{i \in [N] \mid a_i = 1\}$ and $\mathcal{I}_0 = \{i \in [N] \mid a_i = 0\}$ be the sets of the treated and the controls, respectively, and denote the group sizes by $N_1 = |\mathcal{I}_1|$ and $N_0 = |\mathcal{I}_0|$. The time $t = 0$ is of special significance because it marks the initiation of the treatment (black dashed line in Figure 1). We call the period $t < 0$ the pre-treatment period and the period $t \geq 0$ the treatment period (shaded area in Figure 1). Temporal covariates are observed during the pre-treatment period only and may influence both the treatment decision and the outcome. Let $X_i = [x_{is}]_{s \in [S_i]}$ be the sequence of covariates $x_{is} \in \mathbb{R}^D$, comprising $S_i \in \mathbb{N}$ observations taken at times $t \in \mathcal{T}_i = \{t_{is}\}_{s \in [S_i]}$, where each $t_{is} \in \mathbb{R}$ and $t_{is} < 0$. Note that $x_{is}$ may also include static covariates whose values are constant over time. To allow the covariates to be sampled at different frequencies, let $m_{is} \in \{0,1\}^D$ be the masking vector, with $m_{isd} = 1$ indicating that the $d$-th element of $x_{is}$ is observed. The outcome of interest is observed both before and after the treatment. In many cases, researchers are interested in outcomes measured at regular time intervals (e.g., the monthly average blood pressure). Hence, let $\mathcal{T}^- = \{-M, \ldots, -1\}$ and $\mathcal{T}^+ = \{0, \ldots, H-1\}$ be the observation times before and after treatment initiation. In this work, we focus on real-valued outcomes $y_{it} \in \mathbb{R}$ observed at $t \in \mathcal{T}^- \cup \mathcal{T}^+$. We arrange the outcomes after treatment into an $H$-dimensional vector $y_i = [y_{it}]_{t \in \mathcal{T}^+} \in \mathbb{R}^H$, and similarly define the pre-treatment outcome vector $y_i^- = [y_{it}]_{t \in \mathcal{T}^-} \in \mathbb{R}^M$. Using the potential outcome framework (Rubin, 2005), let $y_{it}(a_i) \in \mathbb{R}$ denote the potential outcome at time $t$ in a world where $i$ received the treatment as indicated by $a_i$. Let $y_i(1) = [y_{it}(1)]_{t \in \mathcal{T}^+} \in \mathbb{R}^H$ and $y_i^-(1) = [y_{it}(1)]_{t \in \mathcal{T}^-} \in \mathbb{R}^M$, and similarly for $y_i(0)$ and $y_i^-(0)$. The individual treatment effect (ITE) is defined as $\tau_i = y_i(1) - y_i(0) \in \mathbb{R}^H$. Under the consistency assumption (discussed in detail later), the factual outcome is observed, $y_i(a_i) = y_i$, which means that for any $i \in [N]$ only the unobserved counterfactual outcome $y_i(1 - a_i)$ needs to be estimated in order to estimate the ITE. To simplify notation, we focus on estimating the ITE for the treated, i.e., $\hat{\tau}_i = y_i(1) - \hat{y}_i(0)$ for $i \in \mathcal{I}_1$, though the same approach applies to the controls $i \in \mathcal{I}_0$ and to new units $i \notin [N]$ without loss of generality (A.5). SyncTwin relies on the following assumptions.
(1) Consistency, also known as the Stable Unit Treatment Value Assumption (Rubin, 1980): $y_{it}(a_i) = y_{it}$ for all $i \in [N]$, $t \in \mathcal{T}^- \cup \mathcal{T}^+$. (2) No anticipation, also known as causal systems (Abbring & Van den Berg, 2003; Dash, 2005): $y_{it} = y_{it}(1) = y_{it}(0)$ for all $t \in \mathcal{T}^-$, $i \in [N]$. (3) Data generating model: the assumed directed acyclic graph is visualized in Figure 2B (Pearl, 2009), where we introduce two variables $c_i \in \mathbb{R}^K$ and $v_i \in \mathbb{R}^U$ in addition to the previously defined ones. The latent variable $c_i$ is the common cause of $y_{it}(0)$ and $x_{is}$, and it indirectly influences $a_i$ through $x_{is}$. As we show later, SyncTwin tries to learn and construct a synthetic twin that has the same $c_i$ as the target. The variable $v_i$ is an unobserved confounder. Although SyncTwin, like all other ITE methods, works better without unobserved confounders (i.e., $v_i = 0$ for all $i \in [N]$), we develop a unique checking procedure in Equation (4) to detect whether some $v_i \neq 0$. We also demonstrate that under certain favourable conditions, SyncTwin can overcome the impact of $v_i$. To establish the theoretical results, we further assume that $y_{it}(0)$ follows a latent factor model with $c_i, v_i$ as the latent "factors" (Bai & Ng, 2008):

$$y_{it}(0) = q_t^\top c_i + u_t^\top v_i + \xi_{it}, \quad \forall t \in \mathcal{T}^- \cup \mathcal{T}^+, \qquad (1)$$

where $q_t \in \mathbb{R}^K$ and $u_t \in \mathbb{R}^U$ are weight vectors and $\xi_{it}$ is white noise. We require the weight vectors to satisfy $\|q_t\| = 1$ for all $t \in \mathcal{T}^- \cup \mathcal{T}^+$ (Xu, 2017), which does not reduce the expressiveness of the model. We further require the dimensionality of the latent factor to be smaller than the number of time steps before or after treatment, i.e., $K < \min(M, H)$. Furthermore, let $Q^- = [q_t]_{t \in \mathcal{T}^-} \in \mathbb{R}^{M \times K}$ and $Q = [q_t]_{t \in \mathcal{T}^+} \in \mathbb{R}^{H \times K}$ denote the matrices that stack the weight vectors $q_t$ before and after treatment as rows, respectively. The latent factor model assumption may seem restrictive, but as we show in Appendix A.4 it is applicable to many scenarios. In the simulation study (5.1) we further show that SyncTwin performs well even when the data are generated not by model (1) but by a set of differential equations. We compare our assumptions with those used in related works in Appendix A.3.

3 RELATED WORK. 3.1 SYNTHETIC CONTROL. Similar to SyncTwin, Synthetic Control (SC) (Abadie, 2019) and its extensions (Athey et al., 2018; Amjad et al., 2018) estimate the ITE based on synthetic control outcomes. However, when applied under temporal confounding, SC flattens the temporal covariates $[x_{is}]_{s \in [S_i]}$ into a fixed-size (high-dimensional) vector $x_i$ and uses it to construct the twin. As a result, SC does not allow the covariates to be variable-length or sampled at different frequencies (otherwise the dimensionality of $x_i$ would vary across individuals). In contrast, SyncTwin gracefully handles these irregularities because it constructs the twin from the representation vectors. Moreover, the covariates $x_i$ may contain observation noise and other sources of randomness unrelated to the outcome or the treatment. Forcing the target and the twin to have similar $x_i$ injects this irrelevant noise into the twin, a situation we call over-match (because it resembles over-fitting). Over-matching undermines ITE estimation, as we show in the simulation study in Section 5.1. Finally, SC assumes $y_{it}(0) = q_t^\top x_i + u_t^\top v_i + \xi_{it}$, i.e.,
that the flattened covariates $x_i$ linearly predict $y_{it}(0)$, which is a special case of our assumption (1) and unlikely to hold in many medical applications.
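To make the contrast with SC concrete, the sketch below shows one way to compute synthetic-twin weights over contributor representation vectors under a simplex constraint (non-negative weights summing to one), via projected gradient descent. The function names, the solver, and the toy data are our illustrative assumptions, not SyncTwin's actual implementation:

```python
import numpy as np

def project_to_simplex(w):
    """Euclidean projection onto {b : b >= 0, sum(b) = 1} (Duchi et al., 2008)."""
    u = np.sort(w)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(w) + 1) > css - 1.0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(w - theta, 0.0)

def synthetic_twin_weights(C_controls, c_target, n_steps=500):
    """Simplex-constrained weights b minimizing ||C_controls.T @ b - c_target||^2.

    C_controls: (N0, K) representation vectors of the control individuals.
    c_target:   (K,) representation vector of the treated target individual.
    """
    n0 = C_controls.shape[0]
    b = np.full(n0, 1.0 / n0)
    step = 1.0 / (2.0 * np.linalg.norm(C_controls, ord=2) ** 2)  # 1 / Lipschitz
    for _ in range(n_steps):
        grad = 2.0 * C_controls @ (C_controls.T @ b - c_target)
        b = project_to_simplex(b - step * grad)
    return b

rng = np.random.default_rng(0)
C = rng.normal(size=(50, 8))   # 50 controls with 8-dim representations
c = C[:3].mean(axis=0)         # target lies in the convex hull of 3 controls
b = synthetic_twin_weights(C, c)
print(np.sort(b)[-5:])         # only a few contributors receive notable weight
```

The simplex constraint is what produces the sparse, interpretable contributor weights; SyncTwin applies this kind of weighting in representation space rather than to the raw flattened covariates.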
This paper proposes an approach for treatment effect estimation when the observational data are longitudinal (with irregular time stamps) and contain temporal confounding variables. The proposed method can be categorized among matching methods, in which, to estimate the counterfactual outcomes, a subset of the subjects in the opposite treatment arm (i.e., the contributors) is selected and weighted. The method is designed to achieve explainability (by identifying a few contributors) and trustworthiness (by checking whether the estimated outcome is reliable).
SP:e6e46c0563e852189839b2f923788165800a0f17
PAC Confidence Predictions for Deep Neural Network Classifiers
1 INTRODUCTION. Due to the recent success of machine learning, there has been increasing interest in using predictive models such as deep neural networks (DNNs) in safety-critical settings, such as robotics (e.g., obstacle detection (Ren et al., 2015) and forecasting (Kitani et al., 2012)) and healthcare (e.g., diagnosis (Gulshan et al., 2016; Esteva et al., 2017) and patient care management (Liao et al., 2020)). One of the key challenges is the need to provide guarantees on the safety or performance of DNNs used in these settings. Some potential for failure is unavoidable, since DNNs will inevitably make mistakes in their predictions. Our goal is therefore to design tools for quantifying the uncertainty of these models; the overall system can then estimate and account for the risk inherent in using their predictions. For instance, a medical decision-making system may want to fall back on a doctor when it is uncertain whether its diagnosis is correct, and a robot may want to stop moving and ask a human for help if it is uncertain whether it can act safely. Uncertainty estimates can also be useful for human decision-makers, e.g., for a doctor deciding whether to trust their own intuition over a predicted diagnosis. While many DNNs provide confidences with their predictions, especially in the classification setting, these are often overconfident. This phenomenon most likely arises because DNNs are designed to overfit the training data (e.g., to avoid local minima (Safran & Shamir, 2018)), which pushes the predicted probabilities on the training data very close to one for the correct prediction. Recent work has demonstrated how to calibrate the confidences to significantly reduce overconfidence (Guo et al., 2017). Intuitively, these techniques rescale the confidences on a held-out calibration set. Because they fit only a small number of parameters, they do not overfit the data the way the original DNN training does. However, these techniques do not provide theoretical guarantees on their correctness, which can be necessary in safety-critical settings. We propose a novel algorithm for calibrated prediction in the classification setting that provides theoretical guarantees on the predicted confidences. We focus on on-distribution guarantees, i.e., where the test distribution is the same as the training distribution. In this setting, we can build on ideas from statistical learning theory to provide probably approximately correct (PAC) guarantees (Valiant, 1984). Our approach is based on a calibrated prediction technique called histogram binning (Zadrozny & Elkan, 2001), which rescales the confidences by binning them and then rescaling each bin independently. We use Clopper-Pearson bounds on the tails of the binomial distribution to obtain PAC upper/lower bounds on the predicted confidences. Next, we study how our algorithm enables theoretical guarantees in two applications. First, we consider the problem of speeding up DNN inference by composing a fast but inaccurate model with a slow but accurate model, i.e., by using the accurate model for inference only when the inaccurate model is insufficiently confident (Teerapittayanon et al., 2016). We use our algorithm to obtain guarantees on the accuracy of the composed model. Second, for safe planning, we consider using a DNN to predict whether or not a given action (e.g.,
move forward) is safe (e.g., does not run into obstacles) given an observation (e.g., a camera image). The robot only continues to act if the predicted confidence is above some threshold. We use our algorithm to ensure safety with high probability. Finally, we evaluate the efficacy of our approach in the context of these applications.

Related work. Calibrated prediction (Murphy, 1972; DeGroot & Fienberg, 1983; Platt, 1999) has recently gained attention as a way to improve DNN confidences (Guo et al., 2017). Histogram binning is a non-parametric approach that sorts the data into finitely many bins and rescales the confidences per bin (Zadrozny & Elkan, 2001; 2002; Naeini et al., 2015). However, traditional approaches do not provide theoretical guarantees on the predicted confidences. There has been work on predicting confidence sets (i.e., predicting a set of labels instead of a single label) with theoretical guarantees (Park et al., 2020a), but this approach does not provide the confidence of the most likely prediction, as is often desired. There has also been work providing guarantees on the overall calibration error (Kumar et al., 2019), but this approach does not provide per-prediction guarantees. There has been work on speeding up DNN inference (Hinton et al., 2015). One approach is to allow intermediate layers to be dynamically skipped (Teerapittayanon et al., 2016; Figurnov et al., 2017; Wang et al., 2018), which can be thought of as composing multiple models that share a backbone. Unlike our approach, these methods do not provide guarantees on the accuracy of the composed model. There has also been work on safe learning-based control (Akametalu et al., 2014; Fisac et al., 2019; Bastani, 2019; Li & Bastani, 2020; Wabersich & Zeilinger, 2018; Alshiekh et al., 2018); however, these approaches are not applicable to perception-based control. The most closely related work is Dean et al. (2019), which handles perception but is restricted to known linear dynamics.

2 PAC CONFIDENCE PREDICTION. In this section, we begin by formalizing the PAC confidence coverage prediction problem; then, we describe our algorithm for solving this problem based on histogram binning. Calibrated prediction. Let $x \in \mathcal{X}$ be an example and $y \in \mathcal{Y}$ a label from a finite label set, and let $D$ be a distribution over $\mathcal{X} \times \mathcal{Y}$. A confidence predictor is a model $\hat{f} : \mathcal{X} \to P_{\mathcal{Y}}$, where $P_{\mathcal{Y}}$ is the space of probability distributions over labels. In particular, $\hat{f}(x)_y$ is the predicted confidence that the true label for $x$ is $y$. We let $\hat{y} : \mathcal{X} \to \mathcal{Y}$ be the corresponding label predictor, i.e., $\hat{y}(x) := \arg\max_{y \in \mathcal{Y}} \hat{f}(x)_y$, and let $\hat{p} : \mathcal{X} \to \mathbb{R}_{\geq 0}$ be the corresponding top-label confidence predictor, i.e., $\hat{p}(x) := \max_{y \in \mathcal{Y}} \hat{f}(x)_y$. While traditional DNN classifiers are confidence predictors, a naively trained DNN is not reliable: the predicted confidence does not match the true confidence. Recent work has studied heuristics for improving reliability (Guo et al., 2017); in contrast, our goal is to construct a confidence predictor that comes with theoretical guarantees. We first introduce the definition of calibration (DeGroot & Fienberg, 1983; Zadrozny & Elkan, 2002; Park et al., 2020b), i.e., what we mean for a predicted confidence to be "correct". In many cases, the main quantity of interest is the confidence of the top prediction.
Thus, we focus on ensuring that the top-label predicted confidence $\hat{p}(x)$ is calibrated (Guo et al., 2017); our approach can easily be extended to providing guarantees on all confidences predicted using $\hat{f}$. We say a confidence predictor $\hat{f}$ is well-calibrated with respect to distribution $D$ if

$$P_{(x,y) \sim D}[\,y = \hat{y}(x) \mid \hat{p}(x) = t\,] = t \quad (\forall t \in [0,1]).$$

That is, among all examples $x$ such that the label prediction $\hat{y}(x)$ has predicted confidence $t = \hat{p}(x)$, $\hat{y}(x)$ is the correct label for exactly a $t$ fraction of these examples. Using a change of variables (Park et al., 2020b), $\hat{f}$ being well-calibrated is equivalent to

$$\hat{p}(x) = c^*_{\hat{f}}(x) := P_{(x',y') \sim D}[\,y' = \hat{y}(x') \mid \hat{p}(x') = \hat{p}(x)\,] \quad (\forall x \in \mathcal{X}). \qquad (1)$$

The goal of well-calibration is then to make $\hat{p}$ equal to $c^*_{\hat{f}}$. Note that $\hat{f}$ appears on both sides of the equation $\hat{p}(x) = c^*_{\hat{f}}(x)$ (implicitly in $\hat{p}$), which is what makes it challenging to satisfy. Indeed, in general, it is unlikely that (1) holds exactly for all $x$. Instead, based on the idea of histogram binning (Zadrozny & Elkan, 2001), we consider a variant where we partition the data into a fixed number of bins and then construct confidence coverages separately for each bin. In particular, consider $K$ bins $B_1, \ldots, B_K \subseteq [0,1]$, where $B_1 = [0, b_1]$ and $B_k = (b_{k-1}, b_k]$ for $k > 1$. Here, $K$ and $0 \leq b_1 \leq \cdots \leq b_{K-1} \leq b_K = 1$ are hyperparameters. Given $\hat{f}$, let $\kappa_{\hat{f}} : \mathcal{X} \to \{1, \ldots, K\}$ denote the index of the bin that contains $\hat{p}(x)$, i.e., $\hat{p}(x) \in B_{\kappa_{\hat{f}}(x)}$.

Definition 1. We say $\hat{f}$ is well-calibrated for a distribution $D$ and bins $B_1, \ldots, B_K$ if

$$\hat{p}(x) = c_{\hat{f}}(x) := P_{(x',y') \sim D}\left[\,y' = \hat{y}(x') \,\middle|\, \hat{p}(x') \in B_{\kappa_{\hat{f}}(x)}\right] \quad (\forall x \in \mathcal{X}), \qquad (2)$$

where we refer to $c_{\hat{f}}(x)$ as the true confidence.

Intuitively, this definition "coarsens" the calibration problem across the bins: rather than sorting the inputs $x$ into a continuum of "bins" $\hat{p}(x) = t$ for each $t \in [0,1]$ as in (1), we sort them into a finite number of bins $\hat{p}(x) \in B_k$; intuitively, $c^*_{\hat{f}} \approx c_{\hat{f}}$ if the bin sizes are close to zero. It may not be obvious what downstream guarantees can be obtained from this definition; we provide examples in Sections 3 & 4.

Problem formulation. We formalize the problem of PAC calibration. We focus on the setting where the training and test distributions are identical; e.g., we cannot handle adversarial examples or changes in the covariate distribution (common, for instance, in reinforcement learning). Importantly, while we assume a pre-trained confidence predictor $\hat{f}$ is given, we make no assumptions about $\hat{f}$: it can be uncalibrated or heuristically calibrated. If $\hat{f}$ performs poorly, the predicted confidences will simply be close to $1/|\mathcal{Y}|$, i.e., express no confidence in the predictions. Thus, it is fine if $\hat{f}$ is poorly calibrated; the important property is that examples assigned to the same bin have similar true confidences. The challenge in formalizing PAC calibration is the quantification over all $x$ in (2). One approach is to provide guarantees in expectation over $x$ (Kumar et al., 2019); however, this does not provide guarantees for individual predictions. Instead, our goal is to find a set of predicted confidences that includes the true confidence with high probability. Of course, we could simply predict the interval $[0,1]$, which always contains the true confidence; thus, we simultaneously want to make the size of the interval small.
To this end, we consider a confidence coverage predictor $\hat{C} : \mathcal{X} \to 2^{\mathbb{R}}$ such that $c_{\hat{f}}(x) \in \hat{C}(x)$ with high probability. In particular, $\hat{C}(x)$ outputs an interval $[\underline{c}, \overline{c}] \subseteq \mathbb{R}$ with $\underline{c} \leq \overline{c}$, rather than an arbitrary set. We only consider a single interval (rather than disjoint intervals) since one suffices to localize the true confidence $c_{\hat{f}}$. We are interested in providing theoretical guarantees for an algorithm that constructs the confidence coverage predictor $\hat{C}$ given a held-out calibration set $Z \subseteq \mathcal{X} \times \mathcal{Y}$. In addition, we assume the algorithm is given a pretrained confidence predictor $\hat{f}$. Thus, we consider $\hat{C}$ as depending on $Z$ and $\hat{f}$, which we denote by $\hat{C}(\cdot\,; \hat{f}, Z)$. Then, we want $\hat{C}$ to satisfy the following guarantee:

Definition 2. Given $\delta \in \mathbb{R}_{>0}$ and $n \in \mathbb{N}$, $\hat{C}$ is probably approximately correct (PAC) if for any $D$,

$$P_{Z \sim D^n}\left[\bigwedge_{x \in \mathcal{X}} c_{\hat{f}}(x) \in \hat{C}(x; \hat{f}, Z)\right] \geq 1 - \delta. \qquad (3)$$

Note that $c_{\hat{f}}$ depends on $D$. Here, "approximately correct" technically refers to the mean of $\hat{C}(x; \hat{f}, Z)$, which is an estimate of $c_{\hat{f}}(x)$; the interval captures the bound on the error of this estimate (see Appendix A for details). Furthermore, the conjunction over all $x \in \mathcal{X}$ may seem strong. We can obtain such a guarantee due to our binning strategy: the property $c_{\hat{f}}(x) \in \hat{C}(x; \hat{f}, Z)$ only depends on the bin $B_{\kappa_{\hat{f}}(x)}$, so the conjunction is really only over the bins $k \in \{1, \ldots, K\}$.

Algorithm. We propose a confidence coverage predictor that satisfies the PAC property. Estimating the confidence interval $\hat{C}(x)$ of the binned true confidence $c_{\hat{f}}(x)$ is closely related to binomial proportion confidence interval estimation. Consider a Bernoulli random variable $b \sim B := \text{Bernoulli}(\theta)$ for some unknown $\theta \in [0,1]$, where $b = 1$ denotes a success and $b = 0$ a failure. Given a sequence of observations $b_{1:n} := (b_1, \ldots, b_n) \sim B^n$, the goal is to construct an interval $\hat{\Theta}(b_{1:n}) \subseteq \mathbb{R}$ that includes $\theta$ with high probability, i.e.,

$$P_{b_{1:n} \sim B^n}[\theta \in \hat{\Theta}(b_{1:n})] \geq 1 - \alpha, \qquad (4)$$

where $\alpha \in \mathbb{R}_{>0}$ is a given confidence level. In particular, the Clopper-Pearson interval

$$\hat{\Theta}_{\text{CP}}(b_{1:n}; \alpha) := \left[\inf_\theta \left\{\theta \,\middle|\, P_\theta[S \geq s] \geq \frac{\alpha}{2}\right\},\; \sup_\theta \left\{\theta \,\middle|\, P_\theta[S \leq s] \geq \frac{\alpha}{2}\right\}\right]$$

guarantees (4) (Clopper & Pearson, 1934; Brown et al., 2001), where $s = \sum_{i=1}^n b_i$ is the number of observed successes, $n$ is the number of observations, and $S \sim \text{Binomial}(n, \theta)$. Intuitively, the interval is constructed such that the number of observed successes falls in the high-probability region for any $\theta$ in the interval. Due to the relationship between the Binomial and Beta distributions (Hartley & Fitch, 1951; Brown et al., 2001), namely $P_\theta[S \geq s] = I_\theta(s, n - s + 1)$ where $I_\theta$ is the CDF of $\text{Beta}(s, n - s + 1)$, the interval can equivalently be written as

$$\hat{\Theta}_{\text{CP}}(b_{1:n}; \alpha) = \left[\tfrac{\alpha}{2} \text{ quantile of } \text{Beta}(s, n - s + 1),\; \left(1 - \tfrac{\alpha}{2}\right) \text{ quantile of } \text{Beta}(s + 1, n - s)\right].$$

Now, for each of the $K$ bins, we apply $\hat{\Theta}_{\text{CP}}$ with confidence level $\alpha = \frac{\delta}{K}$, i.e.,

$$\hat{C}(x; \hat{f}, Z, \delta) := \hat{\Theta}_{\text{CP}}\left(W_{\kappa_{\hat{f}}(x)}; \frac{\delta}{K}\right) \quad \text{where} \quad W_k := \left\{\mathbb{1}(\hat{y}(x) = y) \,\middle|\, (x, y) \in Z \text{ s.t. } \kappa_{\hat{f}}(x) = k\right\}.$$

Here, $W_k$ is the set of success/failure observations for the labeled examples $(x, y) \in Z$ such that $\hat{p}(x)$ falls in bin $B_k$, where a success is a correct prediction $\hat{y}(x) = y$. We note that for efficiency, the confidence interval for each of the $K$ bins can be precomputed.
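As a quick illustration, the Beta-quantile form of the Clopper-Pearson interval can be computed directly with standard statistical libraries. The sketch below is a minimal implementation of $\hat{\Theta}_{\text{CP}}$; the function name and the edge-case handling are our own choices:

```python
from scipy.stats import beta

def clopper_pearson(s, n, alpha):
    """Exact (1 - alpha) Clopper-Pearson interval for a binomial proportion.

    s: number of observed successes, n: number of observations.
    The edge cases s = 0 and s = n are handled explicitly.
    """
    lower = 0.0 if s == 0 else beta.ppf(alpha / 2, s, n - s + 1)
    upper = 1.0 if s == n else beta.ppf(1 - alpha / 2, s + 1, n - s)
    return lower, upper

# e.g., 87 correct predictions out of 100 calibration examples in one bin:
print(clopper_pearson(87, 100, alpha=0.05))  # approximately (0.79, 0.93)
```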
Our construction of $\hat{C}$ satisfies the following guarantee (see Appendix B for a proof):

Theorem 1. Our confidence coverage predictor $\hat{C}$ is PAC for any $\delta \in \mathbb{R}_{>0}$ and $n \in \mathbb{N}$.

Note that Clopper-Pearson intervals are exact, which keeps the size of $\hat{C}$ for each bin small in practice. Finally, an important special case is when there is a single bin $B = [0,1]$, i.e.,

$$\hat{C}_0(x; \hat{f}, Z', \delta) := \hat{\Theta}_{\text{CP}}(W; \delta) \quad \text{where} \quad W := \{\mathbb{1}(\hat{y}(x') = y') \mid (x', y') \in Z'\}.$$

Since $\hat{C}_0$ does not depend on $x$, we drop it and write $\hat{C}_0(\hat{f}, Z', \delta) := \hat{\Theta}_{\text{CP}}(W; \delta)$; that is, $\hat{C}_0$ computes the Clopper-Pearson interval over $Z'$, which is a subset of the original set $Z$.
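Putting the pieces together, the full histogram-binning construction might be sketched as follows. It assumes the `clopper_pearson` helper above; the bin layout, names, and the vacuous $[0, 1]$ interval for empty bins are our illustrative choices rather than the authors' code:

```python
import numpy as np

def fit_pac_intervals(p_hat, correct, bin_edges, delta):
    """Per-bin PAC intervals via Clopper-Pearson at level delta / K.

    p_hat:     (n,) top-label confidences on the calibration set Z.
    correct:   (n,) booleans, True where the label prediction was correct.
    bin_edges: increasing array b_1 < ... < b_K = 1 defining the K bins.
    """
    K = len(bin_edges)
    intervals, lo = [], -np.inf           # B_1 = [0, b_1], B_k = (b_{k-1}, b_k]
    for hi in bin_edges:
        mask = (p_hat > lo) & (p_hat <= hi)
        s, n = int(correct[mask].sum()), int(mask.sum())
        # An empty bin gets the vacuous interval [0, 1].
        intervals.append((0.0, 1.0) if n == 0 else clopper_pearson(s, n, delta / K))
        lo = hi
    return intervals

def predict_interval(p_hat_x, bin_edges, intervals):
    """Return the precomputed PAC interval for the bin containing p_hat(x)."""
    k = int(np.searchsorted(np.asarray(bin_edges), p_hat_x, side="left"))
    return intervals[min(k, len(intervals) - 1)]
```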
This paper proposes a method for obtaining probably approximately correct (PAC) confidence predictions given a pre-trained classifier. The PAC intervals are connected to calibration and take the form of confidence intervals for the bin a prediction falls in. The authors demonstrate and explore two use cases: applying the technique to speed up inference in deep neural networks, and using the PAC predictor for safe planning. Experiments in both cases show improvements in speed-accuracy or safety-accuracy trade-offs compared to baselines.
SP:8997ab419d35acd51ef50ef6265e5c37c468a2ac
Weak NAS Predictor Is All You Need
1 INTRODUCTION. Neural Architecture Search (NAS) has become a central topic in recent years, with great progress (Liu et al., 2018b; Luo et al., 2018; Wu et al., 2019; Howard et al., 2019; Ning et al., 2020; Wei et al., 2020; Wen et al., 2019; Chau et al., 2020; Luo et al., 2020). Methodologically, all existing NAS methods try to find the best network architecture by exploring the architecture-to-performance manifold, whether through reinforcement-learning-based (Zoph & Le, 2016), evolution-based (Real et al., 2019), or gradient-based (Liu et al., 2018b) approaches. In order to cover the whole space, they often train and evaluate a large number of architectures, incurring tremendous computational cost. Recently, predictor-based NAS methods have alleviated this problem with two key steps: a sampling step that collects some architecture-performance pairs, and a performance modeling step that fits the performance distribution by training a proxy accuracy predictor. An in-depth analysis of existing methods (Luo et al., 2018) finds that most of them (Ning et al., 2020; Wei et al., 2020; Luo et al., 2018; Wen et al., 2019; Chau et al., 2020; Luo et al., 2020) attempt to model the performance distribution over the whole architecture space. However, since the architecture space is often exponentially large and highly non-convex, modeling the whole space is very challenging, especially given limited samples. Meanwhile, the different types of predictors in these methods demand hand-crafted architecture representations to improve performance. In this paper, we envision that the ambitious goal of modeling the whole space may not be necessary if the final goal is to find the best architecture. Intuitively, we assume the whole space can be divided into different sub-spaces, some relatively good and some relatively bad. We choose the good ones and neglect the bad ones, ensuring that more samples are used to model the good subspace precisely and thereby find the best architecture. From another perspective, instead of optimizing the predictor by sampling the whole space as existing methods do, we propose to jointly optimize the sampling strategy and the predictor learning, which achieves better sample efficiency and prediction accuracy simultaneously. Based on the above motivation, we present a novel framework that estimates a series of weak predictors progressively. Rather than expecting a strong predictor to model the whole space, we instead seek a progressive evolution of weak predictors that connects a path to the best architecture. This greatly simplifies the learning task of each predictor. To ensure the path leads toward the best architecture, we increase the sampling probability of better architectures, guided by the weak predictor, at each iteration. The next weak predictor is then trained on these better samples in the subsequent iteration. We iterate until we arrive at an embedding subspace where the best architectures reside. The weak predictor obtained at the final iteration becomes a dedicated predictor focused on this fine subspace, from which the best-performing architecture can easily be predicted. Compared to existing predictor-based NAS, our method has several merits. First, since only weak predictors are required to locate the good subspace, it yields better sample efficiency.
On NAS-Bench-101 and NAS-Bench-201, it requires significantly fewer samples to find the top-performing architecture than existing predictor-based NAS methods. Second, it is much less sensitive to the architecture representation (e.g., different architecture embeddings) and to the predictor formulation (e.g., MLP, Gradient Boosting Regression Tree, Random Forest). Experiments show superior robustness across all of their combinations. Third, it generalizes to other search spaces: given a limited sample budget, it achieves state-of-the-art ImageNet performance on the NASNet search space.

2 OUR APPROACH. 2.1 REVISITING PREDICTOR-BASED NEURAL ARCHITECTURE SEARCH. Neural Architecture Search (NAS) finds the best network architecture by exploring the architecture-to-performance manifold. It can be formulated as an optimization problem. Given a search space of network architectures $\mathcal{X}$ and a discrete architecture-to-performance mapping function $f : \mathcal{X} \to \mathcal{P}$ from architecture set $\mathcal{X}$ to performance set $\mathcal{P}$, the objective is to find the best neural architecture $x^*$ with the highest performance $f(x)$ in the search space $\mathcal{X}$:

$$x^* = \arg\max_{x \in \mathcal{X}} f(x) \qquad (1)$$

A naive solution is to estimate the performance mapping $f(x)$ over the full search space; however, this is prohibitively expensive since all architectures would have to be exhaustively trained from scratch. To address this problem, predictor-based NAS learns a proxy predictor $\tilde{f}(x)$ that approximates $f(x)$ using some architecture-performance pairs, which significantly reduces the training cost. In general, predictor-based NAS can be formulated as

$$x^* = \arg\max_{x \in \mathcal{X}} \tilde{f}(x \mid S) \quad \text{s.t.} \quad \tilde{f} = \arg\min_{S, \tilde{f} \in \tilde{\mathcal{F}}} \sum_{s \in S} L(\tilde{f}(s), f(s)), \qquad (2)$$

where $L$ is the loss function for the predictor $\tilde{f}$, $\tilde{\mathcal{F}}$ is the set of all possible approximations to $f$, and $S := \{S \subseteq \mathcal{X} \mid |S| \leq C\}$ is the set of training pairs for the predictor $\tilde{f}$ given sample budget $C$. Here, $C$ is directly correlated with the total training cost. Our objective is to minimize the loss $L$ of the predictor $\tilde{f}$ based on some sampled architectures $S$. Previous predictor-based NAS methods attempt to solve Equation 2 with two key steps: (1) sampling some architecture-performance pairs and (2) learning a proxy accuracy predictor. First, a common practice in previous work is to sample training pairs $S$ uniformly from the search space $\mathcal{X}$ to learn the predictor. Such sampling is inefficient, considering that the goal of NAS is to find a subspace of well-performing architectures within the search space; a sampling strategy biased towards the well-performing architectures would be more desirable. Second, given such pairs $S$, previous predictor-based NAS uses a predictor $\tilde{f}$ to model the performance distribution over the whole architecture space. Since the architecture space is often enormously large and highly non-convex, it is too challenging to model the whole space given the limited samples.

2.2 PROGRESSIVE WEAK PREDICTORS APPROXIMATION. We envision that the above ambitious goal may not be necessary if the final aim of NAS is to find the best architecture. We argue that sampling $S$ and learning $\tilde{f}$ should be co-evolving rather than a one-time deal, as done in existing predictor-based NAS. As demonstrated in Figure 2, rather than expecting a single strong predictor to model the whole space at once, we progressively evolve weak predictors to sample towards the subspace of the best architectures, thus greatly simplifying the learning task of each predictor.
With these coarse-to-fine iterations, the ranking of the sampling space is gradually refined, which eventually helps find the optimal architectures. We thus propose a novel coordinate-descent approach to jointly and progressively optimize the sampling and learning stages in predictor-based NAS, which can be formulated as follows:

Sampling Stage:
$$\tilde{P}^k = \{\tilde{f}^k(s) \mid s \in \mathcal{X} \setminus S^k\} \qquad (3)$$
$$S^{k+1} = \operatorname*{argmax}_{T^k}(\tilde{P}^k) \cup S^k \qquad (4)$$

Learning Stage:
$$x^* = \arg\max_{x \in \mathcal{X}} \tilde{f}(x \mid S^{k+1}) \quad \text{s.t.} \quad \tilde{f}^{k+1} = \arg\min_{\tilde{f}^k \in \tilde{\mathcal{F}}} \sum_{s \in S^{k+1}} L(\tilde{f}(s), f(s)) \qquad (5)$$

Suppose our iterative method runs for $K$ iterations, indexed by $k = 1, 2, \ldots, K$. We initialize our training set $S^1$ by randomly sampling a few architectures from $\mathcal{X}$ to train an initial predictor $\tilde{f}^1$. We then jointly optimize the sampling set $S^k$ and predictor $\tilde{f}^k$ in a progressive manner over the $K$ iterations.

Sampling Stage. We first rank all architectures in the search space $\mathcal{X}$ according to their predicted performance $\tilde{P}^k$ at every iteration $k$. Given the sample budget, we then sample new architectures $S^{k+1}$ from among the top $T^k$ ranked architectures.

Learning Stage. We learn a predictor $\tilde{f}^k$ by minimizing the loss $L$ of the predictor on the sampled architectures $S^k$. We then evaluate all architectures in $\mathcal{X}$ using the learned predictor $\tilde{f}^k$ to obtain the predicted performance $\tilde{P}^k$.

Progressive Approximation. Through these alternating iterations, the predictor $\tilde{f}^k$ guides the sampling process to gradually zoom into the promising architecture samples. In turn, the well-performing samples $S^{k+1}$ drawn from these promising regions improve the performance of the predictor $\tilde{f}^{k+1}$ on the well-performing architectures. To demonstrate the effectiveness of our iterative scheme, Figure 3(a) shows the progressive procedure of finding the optimal architecture $x^*$ and the predicted best architecture $\tilde{x}^*_k$ over 5 iterations. As we can see, the optimal architecture and the predicted best one move closer and closer to each other, indicating that the predictor's performance on the optimal architecture(s) keeps improving. In Figure 3(b), we use the error empirical distribution function (EDF) proposed in (Radosavovic et al., 2020) to visualize the performance distribution of architectures in the subspace. We plot the EDF of the top-200 models based on predicted performance over 5 iterations. As shown in Figure 3(b), the subspace of top-performing architectures consistently evolves towards more promising architecture samples over the 5 iterations. In conclusion, the probability of sampling better architectures through these progressively improved weak predictors indeed keeps increasing, as desired.
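To make the alternating scheme concrete, here is a minimal, self-contained sketch of the loop in Equations (3)-(5). The toy encodings, the stand-in for the true performance $f(x)$ (which in reality requires training each sampled architecture), and the random-forest predictor are illustrative assumptions rather than the authors' implementation:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
space = rng.integers(0, 2, size=(10_000, 30)).astype(float)  # encoded architectures X
true_perf = space @ rng.normal(size=30)  # stand-in for f(x); really requires training

budget, top_T, K = 50, 1000, 5
sampled = set(rng.choice(len(space), budget, replace=False).tolist())  # S^1
for k in range(K):
    idx = np.array(sorted(sampled))
    predictor = RandomForestRegressor().fit(space[idx], true_perf[idx])  # learning stage
    scores = predictor.predict(space)                    # predicted performance P~^k
    ranked = np.argsort(-scores)                         # sampling stage: rank all of X
    new = [int(i) for i in ranked[:top_T] if int(i) not in sampled][:budget]
    sampled.update(new)                                  # S^{k+1}
best = max(sampled, key=lambda i: true_perf[i])          # best architecture found
```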
2.3 GENERALIZABILITY ON PREDICTORS AND FEATURES. Here we analyze the generalizability of our method and demonstrate its robustness to different predictors and features. In predictor-based NAS, the objective of learning the predictor $\tilde{f}$ can be formulated as a regression problem (Wen et al., 2019) or a ranking problem (Ning et al., 2020). The choice of predictor is diverse and usually critical to final performance (e.g., MLP (Ning et al., 2020; Wei et al., 2020), LSTM (Luo et al., 2018), GCN (Wen et al., 2019; Chau et al., 2020), Gradient Boosting Tree (Luo et al., 2020)). To illustrate that our framework is generalizable and robust to the specific choice of predictor, we compare the following predictor variants.

• Multilayer perceptron (MLP): MLP is the baseline commonly used in predictor-based NAS (Ning et al., 2020) due to its simplicity. Here we use a 4-layer MLP with hidden layer dimensions (1000, 1000, 1000, 1000), which is sufficient to model the architecture encoding.
• Gradient Boosting Regression Tree (GBRT): Tree-based methods have recently been preferred in predictor-based NAS (Luo et al., 2020; Siems et al., 2020) since they are more suitable for modeling discrete representations of architectures. Here, we use a Gradient Boosting Regression Tree based on the XGBoost (Chen & Guestrin, 2016) implementation.
• Random Forest: Random Forest is another tree-based predictor. It differs from Gradient Boosting Trees in that it combines decisions at the end instead of along each hierarchy, and is thus more robust to noise.

Performance is also sensitive to the features selected to represent the architecture search space and learn the predictor. Previous methods tended to hand-craft features for best performance (e.g., raw architecture encodings (Wei et al., 2020), supernet statistics (Hu et al., 2020)). To demonstrate that our framework is robust across different features, we compare the following.

• One-hot vector: In NAS-Bench-201 (Dong & Yang, 2020), the DARTS-style search space fixes the graph connectivity, so a one-hot vector is used to encode the choice of operator.
• Adjacency matrix: In NAS-Bench-101, we use the same encoding scheme as (Ying et al., 2019; Wei et al., 2020), where a 7×7 adjacency matrix represents the graph connectivity and a 7-dimensional vector represents the choice of operator on every node.

Figure 4 compares robustness across different predictors under our framework; all predictors perform similarly across the target datasets. As shown in Figures 4 and 5, although different architecture encoding methods are used, our method performs similarly well across the different predictors, demonstrating that it is robust to the choice of predictors and features.
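To illustrate how interchangeable the predictor is, the following sketch (our own illustration, not the released code) fits three predictor variants on identical toy one-hot encodings. Note that we use scikit-learn's GradientBoostingRegressor as a stand-in for the paper's XGBoost-based GBRT and a much smaller MLP than the paper's four 1000-unit layers:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(256, 30)).astype(float)        # one-hot style encodings
y = X @ rng.normal(size=30) + 0.1 * rng.normal(size=256)    # surrogate "accuracy"

predictors = {
    "MLP": MLPRegressor(hidden_layer_sizes=(64, 64, 64, 64), max_iter=1000),
    "GBRT": GradientBoostingRegressor(),       # stand-in for the XGBoost GBRT
    "RandomForest": RandomForestRegressor(),
}
for name, model in predictors.items():
    model.fit(X[:200], y[:200])
    rho = np.corrcoef(model.predict(X[200:]), y[200:])[0, 1]
    print(f"{name}: held-out correlation {rho:.2f}")
```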
Summary of contribution: The authors propose an interesting approach to address the sample-efficiency issue in Neural Architecture Search (NAS). Compared to other existing predictor-based methods, the approach distinguishes itself by progressively shrinking the search space. The paper correctly identifies that sampling is an important aspect of using a predictor-based NAS method.
SP:4c82d9d12ec6a9f171c4281739776da18bcc2906
R-GAP: Recursive Gradient Attack on Privacy
1 INTRODUCTION. Distributed and federated learning have become common strategies for training neural networks without transferring data (Jochems et al., 2016; 2017; Konečný et al., 2016; McMahan et al., 2017). Instead, model updates, often in the form of gradients, are exchanged between participating nodes and then used at each node to update a copy of the model. This has been widely applied for privacy purposes (Rigaki & Garcia, 2020; Cristofaro, 2020), including with medical data (Jochems et al., 2016; 2017). Recently, it has been demonstrated that this family of approaches is susceptible to attacks that can, in some circumstances, recover the training data from the exchanged gradient information, calling into question their suitability for privacy-preserving distributed machine learning (Phong et al., 2018; Wang et al., 2019; Zhu et al., 2019; Zhao et al., 2020; Geiping et al., 2020; Wei et al., 2020). To date, these attack strategies have broadly fallen into two groups: (i) an analytical attack based on the gradients with respect to a bias term (Phong et al., 2018), and (ii) an optimization-based attack (Zhu et al., 2019) that can in some circumstances recover individual training samples in a batch, but that involves a difficult nonconvex optimization that doesn't always converge to a correct solution (Geiping et al., 2020) and provides comparatively little insight into the information being exploited. The development of privacy attacks matters chiefly because attacks inform strategies for protecting against them. Protection is typically achieved by perturbing the transferred gradients, and the form of the attack can give insight into the type of perturbation that effectively protects the data (Fan et al., 2020). As such, the development of novel closed-form attacks is essential to the analysis of privacy in federated learning. More broadly, the existence of model inversion attacks (He et al., 2019; Wang et al., 2019; Yang et al., 2019; Zhang et al., 2020) calls into question whether transferring a fully trained model can be considered privacy-preserving. Since the weights of a model trained by (stochastic) gradient descent are the summation of individual gradients, understanding gradient attacks can assist in the analysis of, and protection against, model inversion attacks both inside and outside a federated learning setting. In this work, we develop a novel third family of attacks, the recursive gradient attack on privacy (R-GAP), based on a recursive, depth-wise algorithm for recovering training data from gradient information. Different from the analytical attack using the bias term, R-GAP utilizes much more information and is the first closed-form algorithm that works on both convolutional networks and fully connected networks, with or without a bias term. Compared to optimization-based attacks, it is not susceptible to local optima and is orders of magnitude faster to run, with a deterministic running time. Furthermore, we show that under certain conditions our recursive attack can fully recover training data in cases where optimization attacks fail. Additionally, the insights gained from the closed form of our recursive attack have led to a refined rank analysis that predicts which network architectures enable full recovery and which lead to provably noisy recovery due to rank-deficiency.
This explains well the performance of both closed-form and optimization-based attacks. We also demonstrate that using rank analysis we are able to make small modifications to network architectures that increase the network's security without sacrificing its accuracy.

1.1 RELATED WORK. Bias attacks: The original discovery of an analytical attack based on gradients with respect to the bias term is due to Phong et al. (2018). Fan et al. (2020) also analyzed the bias attack as a system of linear equations and proposed a method of perturbing the gradients to protect against it. Their work treats convolutional and fully-connected networks as equivalent, but this ignores the aggregation of gradients in convolutional networks. Similar to our work, they also perform a rank analysis, but it considers fewer constraints than our analysis (Section 4). Optimization attacks: The first attack that used an optimization approach to minimize the distance between gradients appears to be due to Wang et al. (2019); in that work, optimization is adopted as a submodule of a GAN-style framework. Subsequently, Zhu et al. (2019) proposed a method called deep leakage from gradients (DLG) which relies entirely on minimizing the difference of gradients (Section 2), using L-BFGS (Liu & Nocedal, 1989) to perform the optimization. Zhao et al. (2020) further analyzed label inference in this setting, proposing an analytic way to reconstruct the one-hot label of multi-class classification for a single input. Wei et al. (2020) show that DLG is sensitive to initialization and propose that an image of the same class is an optimal initialization; they use SSIM as an image similarity metric to guide the DLG optimization. Geiping et al. (2020) point out that, since DLG requires second-order derivatives, L-BFGS actually requires third-order derivatives, which makes optimization challenging for networks with activation functions such as ReLU and LeakyReLU. They therefore propose to replace L-BFGS with Adam (Kingma & Ba, 2015). Similar to Wei et al. (2020), Geiping et al. (2020) propose to incorporate an image prior, in this case total variation, while using PSNR as a quality measure.

2 OPTIMIZATION-BASED GRADIENT ATTACKS ON PRIVACY (O-GAP). Optimization-based gradient attacks on privacy (O-GAP) take the real gradients as ground truth and use optimization to decrease the distance between the real gradients $\nabla W$ and the dummy gradients $\nabla W'$ generated by a pair of randomly initialized dummy data and dummy label. The objective function of O-GAP can generally be expressed as

$$\arg\min_{x', y'} \|\nabla W - \nabla W'\|^2 = \arg\min_{x', y'} \sum_{i=1}^{d} \|\nabla W_i - \nabla W'_i\|^2, \qquad (1)$$

where the summation is taken over the layers of a network of depth $d$, and $(x', y')$ are the dummy training data and label used to generate $\nabla W'$. The idea of O-GAP was proposed by Wang et al. (2019). However, they adopted it only as part of their GAN-style framework and did not realize that O-GAP can perform a more accurate attack by itself. Later, in the work of Zhu et al. (2019), O-GAP was proposed as a standalone approach, and the framework was named Deep Leakage from Gradients (DLG). The approach is intuitively simple and in practice has been shown to give surprisingly good results (Zhu et al., 2019).
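To make the O-GAP/DLG objective in Equation (1) concrete, here is a minimal PyTorch-style sketch. The tiny model, the soft dummy label, and the use of Adam (as suggested by Geiping et al. (2020)) rather than L-BFGS are our illustrative choices; soft-label cross-entropy requires a recent PyTorch:

```python
import torch

torch.manual_seed(0)
model = torch.nn.Sequential(torch.nn.Linear(32, 16), torch.nn.Tanh(),
                            torch.nn.Linear(16, 2))
loss_fn = torch.nn.CrossEntropyLoss()

# "Real" gradients the attacker observes (computed on one private example).
x_real, y_real = torch.randn(1, 32), torch.tensor([1])
grads_real = torch.autograd.grad(loss_fn(model(x_real), y_real),
                                 model.parameters())

# Dummy data and label, optimized so their gradients match the observed ones.
x_dummy = torch.randn(1, 32, requires_grad=True)
y_dummy = torch.randn(1, 2, requires_grad=True)   # logits of a soft dummy label y'
opt = torch.optim.Adam([x_dummy, y_dummy], lr=0.1)
for step in range(300):
    opt.zero_grad()
    dummy_loss = loss_fn(model(x_dummy), y_dummy.softmax(dim=-1))
    grads_dummy = torch.autograd.grad(dummy_loss, model.parameters(),
                                      create_graph=True)
    gap = sum(((gd - gr) ** 2).sum() for gd, gr in zip(grads_dummy, grads_real))
    gap.backward()   # Eq. (1): minimize ||grad W - grad W'||^2 over (x', y')
    opt.step()
```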
However, it is sensitive to initialization and prone to fail (Zhao et al., 2020). The choice of optimizer is therefore important, and convergence can be very slow (Geiping et al., 2020). Perhaps most importantly, Equation 1 gives little insight into what information in the gradients is being exploited to recover the data. The analysis in Zhu et al. (2019) is limited to empirical insights, and fundamental open questions remain: what are sufficient conditions for $\arg\min_{x', y'} \sum_{i=1}^d \|\nabla W_i - \nabla W'_i\|^2$ to have a unique minimizer? We address this question in Section 4 and subsequently validate our findings empirically.

3 CLOSED-FORM GRADIENT ATTACKS ON PRIVACY. The first attempt at a closed-form GAP was made in the privacy-preserving deep learning work of Phong et al. (2018).

Theorem 1 (Phong et al. (2018)). Consider a layer of a fully connected network with a bias term, expressed as

$$Wx + b = z, \qquad (2)$$

where $W, b$ denote the weight matrix and bias vector, and $x, z$ denote the input and output vectors of this layer. Suppose the loss function $\ell$ of the network can be expressed as $\ell = \ell(f(x), y)$, where $f$ is a nested function of $x$ including the activation function and all subsequent layers, and $y$ is the ground-truth label. Then $x$ can be derived from the gradients w.r.t. $W$ and w.r.t. $b$:

$$\frac{\partial \ell}{\partial W} = \frac{\partial \ell}{\partial z} x^\top, \qquad \frac{\partial \ell}{\partial b} = \frac{\partial \ell}{\partial z}, \qquad x^\top = \frac{\partial \ell}{\partial W_j} \bigg/ \frac{\partial \ell}{\partial b_j}, \qquad (3)$$

where $j$ denotes the $j$-th row; note that each row in fact yields a copy of $x^\top$.

When this layer is the first layer of a network, this approach can reconstruct the data $x$. In the case of noisy gradients, we can exploit the redundancy in estimating $x$ by averaging over the noisy estimates, $\hat{x}^\top = \sum_j \frac{\partial \ell}{\partial W_j} / \frac{\partial \ell}{\partial b_j}$. However, simply removing the bias term disables this attack. Moreover, this approach does not work on convolutional neural networks due to a dimension mismatch in Equation 3. Both of these problems are resolved in our approach.

3.1 RECURSIVE GRADIENT ATTACK ON PRIVACY (R-GAP). For simplicity we derive R-GAP for binary classification with a single image as input. In this setting we can describe the network and loss function generally as

$$\mu = y\, w_d\, \underbrace{\sigma_{d-1}\Big(W_{d-1}\, \underbrace{\sigma_{d-2}\big(W_{d-2}\,\phi(x)\big)}_{=:\,f_{d-2}(x)}\Big)}_{=:\,f_{d-1}(x)} \qquad (4)$$

$$\ell = \log(1 + e^{-\mu}) \qquad (5)$$

where $y \in \{-1, 1\}$, $d$ denotes the $d$-th layer, $\phi$ represents all layers before $d-2$, and $\sigma$ denotes the activation function. Note that, although our notation omits the bias term, with an augmented matrix and an augmented vector both the linear map and the translation (e.g., Equation 2) can be represented as a matrix multiplication, as in Equation 4; our formulation therefore also covers the approach proposed by Phong et al. (2018). Moreover, if the $i$-th layer is a convolutional layer, then $W_i$ is an extended circulant matrix representing the convolutional kernel (Golub & Van Loan, 1996), and the data $x$ as well as the input of each layer are represented by flattened vectors in Equation 4.

3.1.1 RECOVERING DATA FROM GRADIENTS. From Equations 4 and 5 we can derive the following gradients:

$$\frac{\partial \ell}{\partial w_d} = y \frac{\partial \ell}{\partial \mu} f_{d-1}^\top \qquad (6)$$

$$\frac{\partial \ell}{\partial W_{d-1}} = \left(\Big(w_d^\top \Big(y \frac{\partial \ell}{\partial \mu}\Big)\Big) \odot \sigma'_{d-1}\right) f_{d-2}^\top \qquad (7)$$

$$\frac{\partial \ell}{\partial W_{d-2}} = \left(\Big(W_{d-1}^\top \Big(\Big(w_d^\top \Big(y \frac{\partial \ell}{\partial \mu}\Big)\Big) \odot \sigma'_{d-1}\Big)\Big) \odot \sigma'_{d-2}\right) \phi^\top \qquad (8)$$

where $\sigma'$ denotes the derivative of $\sigma$ and $\odot$ the elementwise product; for more details of the derivation, refer to Appendix H.
The first observation about these gradients is that

$$\frac{\partial \ell}{\partial w_d} \cdot w_d = \frac{\partial \ell}{\partial \mu} \mu. \qquad (9)$$

Additionally, if $\sigma_1, \ldots, \sigma_{d-1}$ are ReLU or LeakyReLU, the dot product of the gradients and weights is the same at every layer, i.e.,

$$\frac{\partial \ell}{\partial w_d} \cdot w_d = \frac{\partial \ell}{\partial W_{d-1}} \cdot W_{d-1} = \cdots = \frac{\partial \ell}{\partial W_1} \cdot W_1 = \frac{\partial \ell}{\partial \mu} \mu. \qquad (10)$$

Since the gradients and weights of each layer are known, we can obtain $\frac{\partial \ell}{\partial \mu}\mu$. If the loss function $\ell$ is the logistic loss (Equation 5), we obtain

$$\frac{\partial \ell}{\partial \mu} \mu = \frac{-\mu}{1 + e^{\mu}}. \qquad (11)$$

In order to perform R-GAP, we need to derive $\mu$ from $\frac{\partial \ell}{\partial \mu}\mu$. As we can see, $\frac{\partial \ell}{\partial \mu}\mu$ is non-monotonic, which means that knowing $\frac{\partial \ell}{\partial \mu}\mu$ does not always allow us to uniquely recover $\mu$. However, even when we cannot uniquely recover $\mu$, there are only two possible values to consider. Figure 1 illustrates $\frac{\partial \ell}{\partial \mu}\mu$ for the logistic, exponential, and hinge losses, showing when $\mu$ can be uniquely recovered from $\frac{\partial \ell}{\partial \mu}\mu$. The non-uniqueness of $\mu$ inspires us to look for data that trigger exactly the same gradients as the real data, which we name twin data, denoted $\tilde{x}$. The existence of twin data demonstrates that the objective function of DLG can have more than one global minimum, which explains at least in part why DLG is sensitive to initialization; for more information and experiments on twin data, refer to Appendix B.

The second observation about Equations 6-8 is that the gradients of each layer share a repeated format:

$$\frac{\partial \ell}{\partial w_d} = k_d f_{d-1}^\top; \quad k_d := y \frac{\partial \ell}{\partial \mu} \qquad (12)$$

$$\frac{\partial \ell}{\partial W_{d-1}} = k_{d-1} f_{d-2}^\top; \quad k_{d-1} := (w_d^\top k_d) \odot \sigma'_{d-1} \qquad (13)$$

$$\frac{\partial \ell}{\partial W_{d-2}} = k_{d-2} \phi^\top; \quad k_{d-2} := (W_{d-1}^\top k_{d-1}) \odot \sigma'_{d-2} \qquad (14)$$

In Equation 12, the value of $y$ can be derived from the sign of the gradients at this layer if the activation function of the previous layer is ReLU or Sigmoid, since then $f_{d-1} > 0$. For multi-class classification, $y$ can always be derived analytically, as proved by Zhao et al. (2020). From Equations 12-14 we can see that the gradients are in fact linear constraints on the output of the previous layer, which is also the input of the current layer. We name these gradient constraints, which can be described generally as

$$K_i x_i = \text{flatten}\left(\frac{\partial \ell}{\partial W_i}\right), \qquad (15)$$

where $i$ denotes the $i$-th layer, $x_i$ denotes the input, and $K_i$ is a coefficient matrix containing all gradient constraints at the $i$-th layer.
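Identity (10) is easy to check numerically. The small bias-free ReLU network below is our own toy setup; all three printed dot products should agree with each other and with Equation (11):

```python
import torch

torch.manual_seed(0)
dims = [20, 16, 8, 1]
Ws = [torch.randn(dims[i + 1], dims[i], requires_grad=True) for i in range(3)]
x, y = torch.randn(20), torch.tensor(1.0)

h = x
for W in Ws[:-1]:
    h = torch.relu(W @ h)                    # bias-free ReLU layers
mu = (y * (Ws[-1] @ h)).squeeze()            # Eq. (4)
loss = torch.log1p(torch.exp(-mu))           # Eq. (5), logistic loss
grads = torch.autograd.grad(loss, Ws)

# Eq. (10): <dl/dW_i, W_i> is identical across layers for bias-free ReLU nets,
print([float((g * W).sum()) for g, W in zip(grads, Ws)])
# and equals (dl/dmu) * mu = -mu / (1 + e^mu) for the logistic loss, Eq. (11).
print(float(-mu / (1 + torch.exp(mu))))
```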
This paper studies the problem of gradient attacks on deep learning models. In particular, it forms a system of linear equations to recover a training data point when the model gradients computed on that data point are available. The algorithm for recovering the data point is called R-GAP.
SP:720f167592297c58d88272599fb66978f3ae8001
Lipschitz Recurrent Neural Networks
1 INTRODUCTION. Many interesting problems exhibit temporal structures that can be modeled with recurrent neural networks (RNNs), including problems in robotics, system identification, natural language processing, and machine learning control. In contrast to feed-forward neural networks, RNNs consist of one or more recurrent units that are designed to have dynamical (recurrent) properties, thereby enabling them to acquire some form of internal memory. This equips RNNs with the ability to discover and exploit spatiotemporal patterns, such as symmetries and periodic structures (Hinton, 1986). However, RNNs are known to have stability issues and are notoriously difficult to train, most notably due to the vanishing and exploding gradients problem (Bengio et al., 1994; Pascanu et al., 2013). Several recurrent models deal with the vanishing and exploding gradients issue by restricting the hidden-to-hidden weight matrix to be an element of the orthogonal group (Arjovsky et al., 2016; Wisdom et al., 2016; Mhammedi et al., 2017; Vorontsov et al., 2017; Lezcano-Casado & Martinez-Rubio, 2019). While such an approach is advantageous for maintaining long-range memory, it limits the expressivity of the model. To address this issue, recent work suggested constructing hidden-to-hidden weights which have unit-norm eigenvalues and can be nonnormal (Kerg et al., 2019). Another approach for resolving the exploding/vanishing gradients problem has recently been proposed by Kag et al. (2020), who formulate the recurrent units as a differential equation and update the hidden states based on the difference between predicted and previous states. In this work, we address these challenges by viewing RNNs as dynamical systems whose temporal evolution is governed by an abstract system of differential equations with an external input. The data are formulated in continuous time, where the external input is defined by the function $x = x(t) \in \mathbb{R}^p$ and the target signal is defined as $y = y(t) \in \mathbb{R}^d$. Based on insights from dynamical systems theory, we propose a continuous-time Lipschitz recurrent neural network with the functional form

$$\begin{cases} \dot{h} = A_{\beta_A, \gamma_A} h + \tanh(W_{\beta_W, \gamma_W} h + Ux + b), & (1a)\\ y = Dh, & (1b) \end{cases}$$

where the hidden-to-hidden matrices $A_{\beta_A, \gamma_A} \in \mathbb{R}^{N \times N}$ and $W_{\beta_W, \gamma_W} \in \mathbb{R}^{N \times N}$ are of the form

$$\begin{cases} A_{\beta_A, \gamma_A} = (1 - \beta_A)(M_A + M_A^T) + \beta_A (M_A - M_A^T) - \gamma_A I, & (2a)\\ W_{\beta_W, \gamma_W} = (1 - \beta_W)(M_W + M_W^T) + \beta_W (M_W - M_W^T) - \gamma_W I, & (2b) \end{cases}$$

where $\beta_A, \beta_W \in [0, 1]$ and $\gamma_A, \gamma_W > 0$ are tunable parameters and $M_A, M_W \in \mathbb{R}^{N \times N}$ are trainable matrices. Here, $h = h(t) \in \mathbb{R}^N$ is a function of time $t$ that represents an internal (hidden) state, and $\dot{h} = \frac{\partial h(t)}{\partial t}$ is its time derivative. The hidden state represents the memory that the system has of its past. The function in Eq. (1) is parameterized by the hidden-to-hidden weight matrices $A \in \mathbb{R}^{N \times N}$ and $W \in \mathbb{R}^{N \times N}$, the input-to-hidden encoder matrix $U \in \mathbb{R}^{N \times p}$, and an offset $b$. The function in Eq. (1b) is parameterized by the hidden-to-output decoder matrix $D \in \mathbb{R}^{d \times N}$. Nonlinearity is introduced via the 1-Lipschitz tanh activation function.
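The symmetric-skew construction in Eq. (2) and a simple forward-Euler discretization of Eq. (1) can be sketched as follows. This is a minimal illustration with arbitrary parameter values (the experiments reported later also use a higher-order explicit midpoint integrator):

```python
import torch

def hidden_matrix(M, beta, gamma):
    """A_{beta,gamma} = (1 - beta)(M + M^T) + beta(M - M^T) - gamma * I, Eq. (2)."""
    return ((1 - beta) * (M + M.T) + beta * (M - M.T)
            - gamma * torch.eye(M.shape[0]))

N, p, d, dt = 64, 10, 2, 0.1
MA, MW = torch.randn(N, N) / N**0.5, torch.randn(N, N) / N**0.5
U, b, D = torch.randn(N, p) / p**0.5, torch.zeros(N), torch.randn(d, N) / N**0.5
A = hidden_matrix(MA, beta=0.75, gamma=0.7)
W = hidden_matrix(MW, beta=0.75, gamma=0.7)

h = torch.zeros(N)
for x_t in torch.randn(100, p):   # an input sequence x(t) sampled at step dt
    h = h + dt * (A @ h + torch.tanh(W @ h + U @ x_t + b))  # Euler step, Eq. (1a)
y = D @ h                         # readout, Eq. (1b)
```

In a training setting, $M_A$, $M_W$, $U$, $b$, and $D$ would be registered as trainable parameters; the sketch only shows the forward dynamics.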
While RNNs governed by differential equations with an additive structure have been studied before (Zhang et al., 2014), the specific formulation we propose in (1) and our theoretical analysis are distinct. Treating RNNs as dynamical systems enables studying the long-term behavior of the hidden state with tools from stability analysis. From this point of view, an unstable unit presents an exploding gradient problem, while a stable unit has well-behaved gradients over time (Miller & Hardt, 2019). However, a stable recurrent unit can suffer from vanishing gradients, leading to catastrophic forgetting (Hochreiter & Schmidhuber, 1997b). Thus, we opt for a stable model whose dynamics do not decay over time (or decay only slowly). Importantly, stability is also a statement about the robustness of neural units with respect to input perturbations, i.e., stable models are less sensitive to small perturbations than unstable models. Recently, Chang et al. (2019) explored the stability of linearized RNNs and provided a local stability guarantee based on the Jacobian. In contrast, the particular structure of our unit (1) allows us to obtain guarantees of global exponential stability using control-theoretic arguments. In turn, the sufficient conditions for global stability motivate a novel scheme based on the symmetric-skew decomposition for constructing hidden-to-hidden matrices. This scheme alleviates exploding and vanishing gradients while remaining highly expressive. In summary, the main contributions of this work are as follows:

• First, in Section 3, using control-theoretic arguments in a direct Lyapunov approach, we provide sufficient conditions for global exponential stability of the Lipschitz RNN unit (Theorem 1). Global stability is advantageous over local stability results since it guarantees non-exploding gradients regardless of the state. In the special case where $A$ is symmetric, we find that these conditions agree with those in classical theoretical analyses (Lemma 1).
• Next, in Section 4, drawing on our stability analysis, we propose a novel scheme based on the symmetric-skew decomposition for constructing hidden-to-hidden matrices. This scheme mitigates the vanishing and exploding gradients problem while yielding highly expressive hidden-to-hidden matrices.
• In Section 6, we show that our Lipschitz RNN can outperform state-of-the-art recurrent units on computer vision, language modeling, and speech prediction tasks. Further, our results show that the higher-order explicit midpoint time integrator improves predictive accuracy compared to the simpler one-step forward Euler scheme.
• Finally, in Section 7, we study our Lipschitz RNN through the lens of the Hessian and show that it is robust with respect to parameter perturbations; we also show that our model is more robust with respect to input perturbations than other continuous-time RNNs.

2 RELATED WORK. The problem of vanishing and exploding gradients (and stability) has a storied history in the study of RNNs. Below, we summarize two particular approaches to the problem (constructing unitary/orthogonal RNNs and the dynamical systems viewpoint) that have gained significant attention. Unitary and orthogonal RNNs. Unitary recurrent units have received attention recently, largely due to Arjovsky et al. (2016) showing that unitary hidden-to-hidden matrices alleviate the vanishing and exploding gradients problem. Several other unitary and orthogonal models have also been proposed (Wisdom et al., 2016; Mhammedi et al., 2017; Jing et al., 2017; Vorontsov et al., 2017; Jose et al., 2018). While these approaches stabilize the training process of RNNs considerably, they also limit their expressivity and their prediction accuracy.
Further, unitary RNNs are expensive to train, as they typically involve the computation of a matrix inverse at each training step. Recent work by Lezcano-Casado & Martinez-Rubio (2019) overcame some of these limitations: by leveraging concepts from Riemannian geometry and Lie group theory, their recurrent unit exhibits improved expressivity and predictive accuracy on a range of benchmark tasks while also being efficient to train. Another competitive recurrent design was recently proposed by Kerg et al. (2019). Their approach is based on the Schur decomposition and enables the construction of general nonnormal hidden-to-hidden matrices with unit-norm eigenvalues. Dynamical systems inspired RNNs. The continuous-time view of RNNs has a long history in the neurodynamics community, as it provides greater flexibility and increased interpretability (Pineda, 1988; Pearlmutter, 1995; Zhang et al., 2014). In particular, RNNs governed by differential equations with an additive structure have been studied extensively from a theoretical point of view (Funahashi & Nakamura, 1993; Kim et al., 1996; Chow & Li, 2000; Hu & Wang, 2002; Li et al., 2005; Trischler & D'Eleuterio, 2016). See Zhang et al. (2014) for a comprehensive survey of continuous-time RNNs and their stability properties. Recently, several works have adopted the dynamical systems perspective to alleviate the training challenges of RNNs related to the vanishing and exploding gradients problem. For non-sequential data, Ciccone et al. (2018) proposed a negative-definite parameterization for enforcing stability in the RNN during training. Chang et al. (2019) introduced an antisymmetric hidden-to-hidden weight matrix and provided guarantees for local stability. Kag et al. (2020) proposed a differential-equation-based formulation that resolves the exploding/vanishing gradients problem by updating the hidden states based on the difference between predicted and previous states. Niu et al. (2019) employed numerical methods for differential equations to study the stability of RNNs. Another line of recent work has focused on continuous-time models that deal with irregularly sampled time series, missing values, and multidimensional time series. Rubanova et al. (2019) and De Brouwer et al. (2019) formulated novel recurrent models based on the theory of differential equations and their discrete integration. Lechner & Hasani (2020) extended these ordinary differential equation (ODE) based models and addressed the issue of vanishing and exploding gradients by designing an ODE model based on the idea of long short-term memory (LSTM). This ODE-LSTM outperforms the continuous-time LSTM (Mei & Eisner, 2017) as well as the GRU-D model (Che et al., 2018), which is based on a gated recurrent unit (GRU). The link between dynamical systems and models for forecasting sequential data also provides the opportunity to incorporate physical knowledge into the learning process, which improves generalization performance, robustness, and the ability to learn with limited data (Chen et al., 2019).

3 STABILITY ANALYSIS OF LIPSCHITZ RECURRENT UNITS. One of the key contributions of this work is that we prove that model (1) is globally exponentially stable under mild conditions on $A$ and $W$.
Namely , for any initial hidden state we can guarantee that our Lipschitz unit converges to an equilibrium if it exists , and therefore , gradients can never explode . We improve upon recent work on stability in recurrent models , which provide only a local analysis , see e.g. , ( Chang et al. , 2019 ) . In fact , global exponential stability is among the strongest notions of stability in nonlinear systems theory , implying all other forms of Lyapunov stability about the equilibrium $h^*$ ( Khalil , 2002 , Definitions 4.4 and 4.5 ) . Definition 1 . A point $h^*$ is an equilibrium point of $\dot{h} = f(h, t)$ if $f(h^*, t) = 0$ for all $t$ . Such a point is globally exponentially stable if there exists some $C > 0$ and $\lambda > 0$ such that for any choice of initial values $h(0) \in \mathbb{R}^N$ , $\| h(t) - h^* \| \leq C e^{-\lambda t} \| h(0) - h^* \|$ , for any $t \geq 0$ . ( 3 ) The presence of a Lipschitz nonlinearity in ( 1 ) plays an important role in our analysis . While we focus on tanh in our experiments , our proof is more general and is applicable to models whose nonlinearity $\sigma(\cdot)$ is an $M$-Lipschitz function . Specifically , we consider the general model $\dot{h} = A h + \sigma ( W h + U x + b )$ , ( 4 ) for which we have the following stability result . In the following , we let $\sigma_{\min}$ and $\sigma_{\max}$ denote the smallest and largest singular values of the hidden-to-hidden matrices , respectively . Theorem 1 . Let $h^*$ be an equilibrium point of a differential equation of the form ( 4 ) for some $x \in \mathbb{R}^p$ . The point $h^*$ is globally exponentially stable if the eigenvalues of $A_{\mathrm{sym}} := \frac{1}{2} ( A + A^\top )$ are strictly negative , $W$ is non-singular , and either ( a ) $\sigma_{\min} ( A_{\mathrm{sym}} ) > M \sigma_{\max} ( W )$ ; or ( b ) $\sigma$ is monotone non-decreasing , $W + W^\top$ is negative definite , and $A^\top W + W^\top A$ is positive definite . The two cases show that global exponential stability is guaranteed if either ( a ) the matrix $A$ has eigenvalues with real parts sufficiently negative to counteract expanding trajectories in the nonlinearity ; or ( b ) the nonlinearity is monotone , both $A$ and $W$ yield stable linear systems $\dot{u} = A u$ , $\dot{v} = W v$ , and $A$ , $W$ have sufficiently similar eigenvectors . In practice , case ( b ) occasionally holds , but is challenging to ensure without assuming specific structure on $A$ , $W$ . Because such assumptions could limit the expressiveness of the model , the next section will develop a tunable formulation for $A$ and $W$ with the capacity to ensure that case ( a ) holds . In Appendix A.1 , we provide a proof of Theorem 1 using a direct Lyapunov approach . One advantage of this approach is that the driving input $x$ is permitted to evolve in time arbitrarily in the analysis . The proof relies on the classical Kalman-Yakubovich-Popov lemma and circle criterion from control theory — to our knowledge , these tools have not been applied in the modern RNN literature , and we hope our proof can illustrate their value to the community . In the special case where $A$ is symmetric and $x(t)$ is constant , we show that we can also inherit criteria for both local and global stability from a class of well-studied Cohen–Grossberg–Hopfield models . Lemma 1 . Suppose that $A$ is symmetric and $W$ is nonsingular . There exists a diagonal matrix $D \in \mathbb{R}^{N \times N}$ , and nonsingular matrices $L , V \in \mathbb{R}^{N \times N}$ such that an equilibrium of ( 4 ) is ( globally exponentially ) stable if and only if there is a corresponding ( globally exponentially ) stable equilibrium for the system $\dot{z} = D z + L \sigma ( V z + U x + b )$ . ( 5 ) For a thorough review of analyses of ( 5 ) , see ( Zhang et al. , 2014 ) .
In this special case , the criteria in Theorem 1 coincide with those obtained for the corresponding model ( 5 ) . However , in practice , we will not choose A to be symmetric .
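As a concrete illustration, the following NumPy sketch builds hidden-to-hidden matrices from a weighted combination of symmetric and skew-symmetric parts with a diagonal offset (in line with the symmetric-skew scheme summarized above) and numerically checks case (a) of Theorem 1 for tanh, which is 1-Lipschitz. The weighting `beta`, the offset `gamma`, and the scalings below are illustrative assumptions rather than the authors' tuned values.

```python
import numpy as np

def sym_skew(M, beta=0.5, gamma=0.0):
    """Weighted combination of the symmetric and skew-symmetric parts of M,
    minus a diagonal offset; beta and gamma here are illustrative choices."""
    sym = 0.5 * (M + M.T)
    skew = 0.5 * (M - M.T)
    return (1.0 - beta) * sym + beta * skew - gamma * np.eye(M.shape[0])

def satisfies_case_a(A, W, M_lip=1.0):
    """Theorem 1, case (a): eigenvalues of A_sym strictly negative and
    sigma_min(A_sym) > M_lip * sigma_max(W)."""
    A_sym = 0.5 * (A + A.T)
    eig_max = np.linalg.eigvalsh(A_sym).max()
    s_min_A = np.linalg.svd(A_sym, compute_uv=False).min()
    s_max_W = np.linalg.svd(W, compute_uv=False).max()
    return eig_max < 0 and s_min_A > M_lip * s_max_W

def euler_step(h, x, A, W, U, b, dt=0.1):
    """One forward-Euler step of the continuous-time unit (4) with tanh."""
    return h + dt * (A @ h + np.tanh(W @ h + U @ x + b))

rng = np.random.default_rng(0)
n = 64
A = sym_skew(rng.standard_normal((n, n)) / np.sqrt(n), gamma=2.0)
W = 0.2 * sym_skew(rng.standard_normal((n, n)) / np.sqrt(n), beta=0.9)
print(satisfies_case_a(A, W))  # True here: A is strongly shifted, W is small
```

Shrinking the diagonal offset `gamma` or enlarging `W` makes the check fail, which shows how the decomposition gives a direct handle on whether the sufficient condition for global exponential stability holds.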
Considering a continuous time RNN with Lipschitz-continuous nonlinearity, the authors formulate sufficient conditions on the parameter matrices for the network to be globally stable, in the sense of a globally attracting fixed point. They provide a specific parameterization for the hidden-to-hidden weight matrices to control global stability and error gradients, consisting of a weighted combination of a symmetric and a skew-symmetric matrix (and some diagonal offset). The authors discuss numerical integration by forward-Euler and RK2, and thoroughly benchmark their approach against a large set of other state-of-the-art RNNs on various tasks including versions of MNIST and TIMIT. Finally, they highlight improved stability of their RNN against parameter and input perturbations.
SP:6cf84af3e1ae0c84dc251ba41a5acb3dc7f61645
EEC: Learning to Encode and Regenerate Images for Continual Learning
1 INTRODUCTION . Humans continue to learn new concepts over their lifetime without the need to relearn most previous concepts . Modern machine learning systems , however , require the complete training data to be available at one time ( batch learning ) ( Girshick , 2015 ) . In this paper , we consider the problem of continual learning from the class-incremental perspective . Class-incremental systems are required to learn from a stream of data belonging to different classes and are evaluated in a single-headed evaluation ( Chaudhry et al. , 2018 ) . In single-headed evaluation , the model is evaluated on all classes observed so far without any information indicating which class is being observed . Creating highly accurate class-incremental learning systems is a challenging problem . One simple way to create a class-incremental learner is by training the model on the data of the new classes , without revisiting the old classes . However , this causes the model to forget the previously learned classes and the overall classification accuracy decreases , a phenomenon known as catastrophic forgetting ( Kirkpatrick et al. , 2017 ) . Most existing class-incremental learning methods avoid this problem by storing a portion of the training samples from the earlier learned classes and retraining the model ( often a neural network ) on a mixture of the stored data and new data containing new classes ( Rebuffi et al. , 2017 ; Hou et al. , 2019 ) . Storing real samples of the previous classes , however , leads to several issues . First , as pointed out by Wu et al . ( 2018b ) , storing real samples exhausts memory capacity and limits performance for real-world applications . Second , storing real samples introduces privacy and security issues ( Wu et al. , 2018b ) . Third , storing real samples is not biologically inspired , i.e. , humans do not need to relearn previously known classes . This paper explores the “ strict ” class-incremental learning problem in which the model is not allowed to store any real samples of the previously learned classes . The strict class-incremental learning problem is more akin to realistic learning scenarios such as a home service robot that must learn continually with limited on-board memory . This problem has been previously addressed using generative models such as autoencoders ( Kemker & Kanan , 2018 ) or Generative Adversarial Networks ( GANs ) ( Ostapenko et al. , 2019 ) . Most approaches for strict class-incremental learning use GANs to generate samples reflecting old class data , because GANs generate sharp , fine-grained images ( Ostapenko et al. , 2019 ) . [ A preliminary version of this work was presented at the ICML 2020 Workshop on Lifelong Machine Learning ( Ayub & Wagner , 2020c ) . ] The downside of GANs , however , is that they tend to generate images which do not belong to any of the learned classes , hurting classification performance . Autoencoders , on the other hand , always generate images that relate to the learned classes , but tend to produce blurry images that are also not good for classification . To cope with these issues , we propose a novel , cognitively-inspired approach termed Encoding Episodes as Concepts ( EEC ) for continual learning , which utilizes convolutional autoencoders to generate previously learned class data . Inspired by models of the hippocampus ( Renoult et al. , 2015 ) , we use autoencoders to create compressed embeddings ( encoded episodes ) of real images and store them in memory .
To avoid the generation of blurry images , we borrow ideas from the Neural Style Transfer ( NST ) algorithm proposed by Gatys et al . ( 2016 ) to train the autoencoders . For efficient memory management , we use the notion of memory integration , from hippocampal and neocortical concept learning ( Mack et al. , 2018 ) , to combine similar episodes into centroids and covariance matrices , eliminating the need to store real data . This paper contributes : 1 ) an autoencoder based approach to strict class-incremental learning which uses Neural Style Transfer to produce quality samples reflecting old class data ( Sec . 3.1 ) ; 2 ) a cognitively-inspired memory management technique that combines similar samples into a centroid/covariance representation , drastically reducing the memory required ( Sec . 3.2 ) ; 3 ) a data filtering and a loss weighting technique to manage image degradation of old classes during classifier training ( Sec . 3.3 ) . We further show that EEC outperforms state-of-the-art ( SOTA ) approaches on benchmark datasets by significant margins while also using far less memory . 2 RELATED WORK . Most recent approaches to class-incremental learning store a portion of the real images belonging to the old classes to avoid catastrophic forgetting . Rebuffi et al . ( 2017 ) ( iCaRL ) store old class images and utilize knowledge distillation ( Hinton et al. , 2015 ) for representation learning and the nearest class mean ( NCM ) classifier for classification of the old and new classes . Knowledge distillation uses a loss term to force the labels of the images of previous classes to remain the same when learning new classes . Castro et al . ( 2018 ) ( EEIL ) improve iCaRL with an end-to-end learning approach . Wu et al . ( 2019 ) also store real images and use a bias correction layer to avoid any bias toward the new classes . To avoid storing old class images , some approaches store features from the last fully-connected layer of the neural networks ( Xiang et al. , 2019 ; Hayes & Kanan , 2020 ; Ayub & Wagner , 2020b ; d ) . These approaches , however , use a network pretrained on ImageNet to extract features , which gives them an unfair advantage over other approaches . Because of their reliance on a pretrained network , these approaches cannot be applied in situations when new data differs drastically from ImageNet ( Russakovsky et al. , 2015 ) . These difficulties have forced researchers to consider using generative networks . Methods employing generative networks tend to model previous class statistics and regenerate images belonging to the old classes while attempting to learn new classes . Both Shin et al . ( 2017 ) and Wu et al . ( 2018a ) use generative replay where the generator is trained on a mixture of generated old class images and real images from the new classes . This approach , however , causes images belonging to classes learned in earlier increments to start to semantically drift , i.e. , the quality of the images degrades because of the repeated training on synthesized images . Ostapenko et al . ( 2019 ) avoid semantic drift by training the GAN only once on the data of each class . Catastrophic forgetting is avoided by applying elastic weight consolidation ( Kirkpatrick et al. , 2017 ) , in which changes in important weights needed for old classes are avoided when learning new classes . They also grow their network when it runs out of memory while learning new classes , which can be difficult to apply in situations with restricted memory .
One major issue with GAN based approaches is that GANs tend to generate images that do not belong to any of the learned classes , which decreases classification accuracy . For these reasons , most approaches only perform well on simpler datasets such as MNIST ( LeCun , 1998 ) but perform poorly on complex datasets such as ImageNet . Conditional GANs can be used to mitigate the problem of images belonging to none of the classes , as done by Ostapenko et al . ( 2019 ) ; however , the performance is still poor on complex datasets such as ImageNet-50 ( see Table 1 and Table 2 ) . We avoid the problem of generating images that do not belong to any learned class by training autoencoders instead of GANs . Comparatively little work has focused on using autoencoders to generate samples because the images generated by autoencoders are blurry , limiting their usefulness for classifier training . Hattori ( 2014 ) uses autoencoders on binary pixel images and Kemker & Kanan ( 2018 ) ( FearNet ) uses a network pre-trained on ImageNet to extract feature embeddings for images , applying the autoencoder to the feature embeddings . Neither of these approaches is scalable to RGB images . Moreover , the use of a pre-trained network to extract features gives FearNet an unfair advantage over other approaches . 3 ENCODING EPISODES AS CONCEPTS ( EEC ) . Following the notation of Chaudhry et al . ( 2018 ) , we consider $S^t = \{ ( x_i^t , y_i^t ) \}_{i=1}^{n_t}$ to be the set of samples $x_i^t \in X$ and their ground truth labels $y_i^t$ belonging to task $t$ . In a class-incremental setup , $S^t$ can contain one or multiple classes and data for different tasks is available to the model in different increments . In each increment , the model is evaluated on all the classes seen so far . Our formal presentation of continual learning follows Ostapenko et al . ( 2019 ) , where a task solver model ( classifier for class-incremental learning ) $D$ has to update its parameters $\theta_D$ on the data of task $t$ in an increment such that it performs equally well on all the $t-1$ previous tasks seen so far . Data for the $t-1$ tasks is not available when the model is learning task $t$ . The subsections below present our approach . 3.1 AUTOENCODER TRAINING WITH NEURAL STYLE TRANSFER . An autoencoder is a neural network that is trained to compress and then reconstruct the input ( Goodfellow et al. , 2016 ) , formally $f_r : X \to X$ . The network consists of an encoder that compresses the input into a lower dimensional feature space ( termed as the encoded episode in this paper ) , $g_{enc} : X \to F$ , and a decoder that reconstructs the input from the feature embedding , $g_{dec} : F \to X$ . Formally , for a given input $x \in X$ , the reconstruction pipeline $f_r$ is defined as : $f_r ( x ) = ( g_{dec} \circ g_{enc} ) ( x )$ . The parameters $\theta_r$ of the network are usually optimized using an $\ell_2$ loss ( $L_r$ ) between the inputs and the reconstructions : $L_r = \| x - f_r ( x ) \|_2$ ( 1 ) Although autoencoders are suitable for dimensionality reduction for complex , high-dimensional data like RGB images , the reconstructed images lose the high frequency components necessary for correct classification . To tackle this problem , we train autoencoders using some of the ideas that underlie Neural Style Transfer ( NST ) . NST uses a pre-trained CNN to transfer the style of one image to another . The process takes three images , an input image , a content image and a style image , and alters the input image such that it has the content image ’ s content and the artistic style of the style image .
The three images are passed through the pre-trained CNN generating convolutional feature maps ( usually from the last convolutional layer ) , and $\ell_2$ distances between the feature maps of the input image and content image ( content loss ) and style image ( style loss ) are calculated . These losses are then used to update the input image . Intuitively , our intent here is to create reconstructed images that are similar to the real images ( in the pixel and convolutional space ) , thereby improving classification accuracy . Hence , we only utilize the idea of content transfer from the NST algorithm , where the input image is the image reconstructed by the autoencoder and the content image is the real image corresponding to the reconstructed image . The classifier model , $D$ , is used to generate convolutional feature maps for the NST , since it is already trained on real data for the classes in the increment $t$ . In contrast to the traditional NST algorithm , we use the content loss ( $L_{cont}$ ) to train the autoencoder , rather than updating the input image directly . Formally , let $f_c : X \to F_c$ be the classifier pipeline that converts input images into convolutional features . For an input image $x_i^t$ of task $t$ , the content loss is : $L_{cont} = \| f_c ( x_i^t ) - f_c ( f_r ( x_i^t ) ) \|_2$ ( 2 ) The autoencoder parameters are optimized using a combination of reconstruction and content losses : $L = ( 1 - \lambda ) L_r + \lambda L_{cont}$ ( 3 ) where $\lambda$ is a hyperparameter that controls the contribution of each loss term towards the complete loss . During autoencoder training , classifier $D$ acts as a fixed feature extractor and its parameters are not updated . This portion of the complete procedure is depicted in Figure 1 ( a ) . To provide an illustration of our approach , we perform an experiment with the ImageNet-50 dataset . We trained one autoencoder on 10 classes from ImageNet-50 with NST and one without NST . Figure 2 depicts the reconstructed images by the two autoencoders . Note that the images generated by the autoencoder trained without using NST are blurry . In contrast , the autoencoder trained using NST creates images with fine-grained details which improves the classification accuracy .
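To make the training objective concrete, here is a minimal PyTorch sketch of Eqs. (1)-(3). The small convolutional architecture and the choice of feature layer for $f_c$ are illustrative assumptions; the classifier $D$ is assumed to be frozen (acting as a fixed feature extractor), as described above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvAE(nn.Module):
    """Illustrative convolutional autoencoder (stand-in architecture)."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(  # g_enc : X -> F
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(  # g_dec : F -> X
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, x):          # f_r(x) = (g_dec o g_enc)(x)
        return self.dec(self.enc(x))

def eec_loss(x, autoencoder, f_c, lam=0.5):
    """Eq. (3): (1 - lambda) * L_r + lambda * L_cont. `f_c` maps images to
    convolutional features of the frozen classifier D (its parameters are
    assumed to have requires_grad=False so only the autoencoder updates)."""
    x_rec = autoencoder(x)
    l_r = F.mse_loss(x_rec, x)                          # Eq. (1): reconstruction loss
    l_cont = F.mse_loss(f_c(x_rec), f_c(x).detach())    # Eq. (2): content loss
    return (1.0 - lam) * l_r + lam * l_cont
```

In a training loop, `eec_loss(x, ae, f_c).backward()` would be followed by an optimizer step over the autoencoder parameters only, matching the description that $D$ stays fixed while the content loss pulls the reconstructions toward the classifier's feature space.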
In continual learning settings, one of the important techniques for avoiding catastrophic forgetting is to replay data points from the past. For memory efficiency purposes, representative samples can be generated from a generative model, such as GANs, rather than replaying the original samples, which can be large in number. It is argued that GANs generate new samples which may not belong exactly to one of the classes, so a new generative model is proposed. Experimental results are appealing.
SP:2ad12575818f72f453eb0c04c953a48be56e80e3
Using latent space regression to analyze and leverage compositionality in GANs
1 INTRODUCTION . Natural scenes are comprised of disparate parts and objects that humans can easily segment and interchange ( Biederman , 1987 ) . Recently , unconditional generative adversarial networks ( Karras et al. , 2017 ; 2019b ; a ; Radford et al. , 2015 ) have become capable of mimicking the complexity of natural images by learning a mapping from a latent space noise distribution to the image manifold . But how does this seemingly unstructured latent space produce a strikingly realistic and structured scene ? ( Dome image from : https://www.technologyreview.com/2019/10/24/132370/mit-dome/ ) Here , we use a latent regressor to probe the latent space of a pretrained GAN , allowing us to uncover and manipulate the concepts that GANs learn about the world in an unsupervised manner . For example , given a church image , is it possible to swap one foreground tree for another one ? Given only parts of the building , can the missing portion be realistically filled ? To achieve these modifications , the generator must be compositional , i.e. , it must understand discrete and separate representations of objects . We show that the pretrained generator – without any additional interventions – already represents these compositional properties in its latent code . Furthermore , these properties can be manipulated using a regression network that predicts the latent code of a given image . The pixels of this image then provide us with an intuitive interface to control and modify the latent code . Given the modified latent code , the network then applies image priors learned from the dataset , ensuring that the output is always a coherent scene regardless of inconsistencies in the input ( Fig . 1 ) . Our approach is simple – given a fixed pretrained generator , we train a regressor network to predict the latent code from an input image , while adding a masking modification to learn to handle missing pixels . To investigate the GAN ’ s ability to produce a globally coherent version of a scene , we hand the regressor a rough , incoherent template of the scene we desire , and use the two networks to convert it into a realistic image . Even though our regressor is never trained on these unrealistic templates , it projects the given image into a reasonable part of the latent space , which the generator maps onto the image manifold . This approach requires no labels or clustering of attributes ; all we need is a single example of approximately how we want the generated image to look . It only requires a forward pass of the regressor and generator , so there is low latency in obtaining the output image , unlike iterative optimization approaches that can require upwards of a minute to reconstruct an image . We use the regressor to investigate the compositional capabilities of pretrained GANs across different datasets . Using input images composed of different image parts ( “ collages ” ) , we leverage the generator to recombine this unrealistic content into a coherent image . This requires solving three tasks simultaneously – blending , alignment , and inpainting . We then investigate the GAN ’ s ability to independently vary localized portions of a given image . In summary , our contributions are : • We propose a latent regression model that learns to perform image reconstruction even in the case of incomplete images and missing pixels and show that the combination of regressor and generator forms a strong image prior .
• Using the learned regressor , we show that the representation of the generator is already compositional in the latent code , without having to resort to intermediate layer activations . • There is no use of labelled attributes nor test-time optimization , so we can edit images based on a single example of the desired modification and reconstruct in real-time . • We use the regressor to probe what parts of a scene can vary independently , and investigate the difference between image mixing using the encoder and interpolation in latent space . • The same regressor setup can be used for a variety of other image editing applications , such as multimodal editing , scene completion , or dataset rebalancing . 2 RELATED WORK . Image Inversion . Given a target image , the GAN inversion problem aims to recover a latent code which best generates the target . Image inversion comes with a number of challenges , including 1 ) a complex optimization landscape and 2 ) the generator ’ s inability to reconstruct out-of-domain images . To relax the domain limitations of the generator , one possibility is to invert to a more flexible intermediate latent space ( Abdal et al. , 2019 ) , but this may allow the generator to become overly flexible and requires regularizers to ensure that the recovered latent code does not deviate too far from the latent manifold ( Pividori et al. , 2019 ; Zhu et al. , 2020 ; Wulff & Torralba , 2020 ) . An alternative to increasing the flexibility of the generator is to learn an ensemble of latent codes that approximate a target image when combined ( Gu et al. , 2019a ) . Due to challenging optimization , the quality of inversion depends on good initialization . A number of approaches use a hybrid of a latent regression network to provide an initial guess of the latent code with subsequent optimization of the latent code ( Bau et al. , 2019 ; Guan et al. , 2020 ) or the generator weights ( Zhu et al. , 2016 ; Bau et al. , 2020 ; Pan et al. , 2020 ) , while Huh et al . ( 2020 ) investigates gradient-free approaches for optimization . Besides inverting whole images , a different use case of image inversion through a generator is to complete partial scenes . When using optimization , this is achieved by only measuring the reconstruction loss on the known pixels ( Bora et al. , 2017 ; Gu et al. , 2019a ; Abdal et al. , 2020 ) , whereas in feed-forward methods , the missing region must be provided explicitly to the model . Rather than inverting to the latent code of a pretrained generator , one can train the generator and encoder jointly , based on modifications to the Variational Autoencoder ( Kingma & Welling , 2013 ) . Donahue et al . ( 2017 ) ; Donahue & Simonyan ( 2019 ) ; Dumoulin et al . ( 2016 ) use this setup to investigate the properties of latent representations learned during training , while Pidhorskyi et al . ( 2020 ) demonstrate a joint learning method that can achieve comparable image quality to recent GAN models . In our work , we investigate the emergent priors of a pretrained GAN using a masked latent regression network as an approximate image inverter . While such a regressor has lower reconstruction accuracy than optimization-based techniques , its lower latency allows us to investigate the learned priors in a computationally efficient way and makes real-time image editing incorporating such priors possible . Composition in Image Domains . 
To join segments of disparate image sources into one cohesive output , early works use hand-designed features , such as Laplacian pyramids for seamless blending ( Burt & Adelson , 1983 ) . Hays & Efros ( 2007 ) and Isola & Liu ( 2013 ) employ nearest-neighbor approaches for scene composition and completion . More recently , a number of deep network architectures have been developed for compositional tasks . For discriminative tasks , Tabernik et al . ( 2016 ) and Kortylewski et al . ( 2020 ) train CNNs with modified compositional architectures to understand model interpretability and reason about object occlusion in classification . For image synthesis , Mokady et al . ( 2019 ) and Press et al . ( 2020 ) use an autoencoder to encode , disentangle , and swap properties between two sets of images , while Shocher et al . ( 2020 ) mixes images in deep feature space while training the generator . Rather than creating models specifically for image composition or scene completion objectives , we investigate the ability of a pre-trained GAN to mix-and-match parts of its generated images . Related to our work , Besserve et al . ( 2018 ) estimate the modular structure of GANs by learning a causal model of latent representations , whereas we investigate the GAN ’ s compositional properties using image inversion . Due to the imprecise nature of image collages , compositing image parts also involves inpainting misaligned regions . However , in contrast to inpainting , in which regions have to be filled in within otherwise globally consistent images ( Pathak et al. , 2016 ; Iizuka et al. , 2017 ; Yu et al. , 2018 ; Zeng et al. , 2020 ) , the composition problem involves correcting inconsistencies as well as filling in missing pixels . Image Editing . A recent topic of interest is editing images using generative models . A number of works propose linear attribute vector editing directions to perform image manipulation operations ( Goetschalckx et al. , 2019 ; Jahanian et al. , 2020 ; Shen et al. , 2019 ; Kingma & Dhariwal , 2018 ; Karras et al. , 2019a ; Radford et al. , 2015 ) . It is also possible to identify concepts learned in the generator ’ s intermediate layers by clustering intermediate representations , either using segmentation labels ( Bau et al. , 2018 ) or unsupervised clustering ( Collins et al. , 2020 ) , and change these representations to edit the desired concepts in the output image . Suzuki et al . ( 2018 ) use a spatial feature blending approach which mixes properties of target images in the intermediate feature space of a generator . On faces , editing can be achieved using a 3D parametric model to supervise the modification ( Tewari et al. , 2017 ; 2020 ) . In our work , we do not require clusters or concepts in intermediate layers to be defined a priori , nor do we need distinct input and output domains for approximate collages and real images , as in image translation tasks ( Zhu et al. , 2017 ; Almahairi et al. , 2018 ) . Unlike image manipulation using semantic maps ( Park et al. , 2019 ; Gu et al. , 2019b ) , our approach respects the style of the manipulation ( e.g. , the specific color of the sky ) , which is lost in the semantic map representation . Our method shares commonalities with Richardson et al . ( 2020 ) , although we focus on investigating compositional properties rather than image-to-image translation .
In our approach , we only require a single example of the approximate target property we want to modify and use regression into the latent space as a fast image prior to create a coherent output . This allows us to create edits that are not contingent on labelled concepts , and we do not need to modify or train the generator . 3 METHOD . 3.1 LATENT CODE RECOVERY IN GANS . GANs provide a mapping from a predetermined input distribution to a complex output distribution , e.g . from a standard normal $Z$ to the image manifold $X$ , but they are not easily invertible . In other words , given an image sample from the output distribution , it is not trivial to recover the sample from the input distribution that generated it . The image inversion objective aims to find the latent code $z$ of GAN $G$ that best recovers the desired target image $x$ : $z^* = \arg\min_z \, \mathrm{dist} ( G ( z ) , x )$ , ( 1 ) using some metric of image distance dist , such as pixel-wise $L_1$ error or a metric based on deep features . This objective can be solved iteratively , using L-BFGS ( Liu & Nocedal , 1989 ) or other optimizers . However , iterative optimization is slow – it takes a large number of iterations to converge , is prone to local minima , and must be performed for each target image $x$ independently . An alternative way of recovering the latent code $z$ is to train a neural network to directly predict it from a given image $x$ . In this case , the recovered latent code is simply the result of a feed-forward pass through a trained regressor network , $z^* = E ( x )$ , where $E$ can be used for any $x \in X$ . To train the regressor ( or encoder ) network $E$ , we use the latent encoder loss $L = \mathbb{E}_{z \sim \mathcal{N}(0,1) , \, x = G(z)} \left[ \| x - G ( E ( x ) ) \|_2^2 + L_p ( x , G ( E ( x ) ) ) + L_z ( z , E ( x ) ) \right]$ . ( 2 ) We sample $z$ randomly from the latent distribution , and pass it through a pretrained generator $G$ to obtain the target image $x = G ( z )$ . Between the target image $x$ and the recovered image $G ( E ( x ) )$ , we use a mean square error loss to guide reconstruction and a perceptual loss $L_p$ ( Zhang et al. , 2018 ) to recover details . Between the original latent code $z$ and the recovered latent code $E ( x )$ , we use a latent recovery loss $L_z$ . We use mean square error or a variant of cosine similarity for latent recovery , depending on the GAN ’ s input normalization . Additional details can be found in Supp . Sec . A.1.1 . Throughout this paper the generators are frozen , and we only optimize the weights of the encoder $E$ . When using ProGAN ( Karras et al. , 2017 ) , we train the encoder network to directly invert to the latent code $z$ . For StyleGAN ( Karras et al. , 2019b ) , we encode to an expanded $W+$ latent space ( Abdal et al. , 2019 ) . Once trained , the output of the latent regressor yields a latent code such that the reconstructed image looks perceptually similar to the target image .
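For concreteness, a minimal PyTorch sketch of one evaluation of the encoder loss in Eq. (2) might look as follows. The `perceptual` callable stands in for an LPIPS-style distance (Zhang et al., 2018), and plain mean square error is used for $L_z$, one of the two latent-recovery variants mentioned above; loss weights and the masking modification are omitted for brevity, so this is a sketch rather than the exact training code.

```python
import torch
import torch.nn.functional as F

def latent_encoder_loss(E, G, perceptual, batch_size, z_dim):
    """One Monte-Carlo evaluation of Eq. (2). G is assumed frozen
    (requires_grad=False on its parameters); only E is trained, but
    gradients still flow through G(E(x)) back into E."""
    z = torch.randn(batch_size, z_dim)       # z ~ N(0, 1)
    with torch.no_grad():
        x = G(z)                             # target image x = G(z)
    z_hat = E(x)                             # recovered latent code E(x)
    x_hat = G(z_hat)                         # reconstruction G(E(x))
    rec = F.mse_loss(x_hat, x)               # ||x - G(E(x))||_2^2 term
    perc = perceptual(x_hat, x).mean()       # perceptual loss L_p
    lat = F.mse_loss(z_hat, z)               # latent recovery L_z (MSE variant)
    return rec + perc + lat
```

Because targets are generated on the fly from random latents, this objective needs no dataset of real images, which is what lets the regressor be trained purely against the frozen generator.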
In this paper, the authors propose a latent space regression method for analyzing and manipulating the latent space of pre-trained GAN models. Unlike existing optimization-based methods, an explicit latent code regressor is learned to map the input to the latent space. The authors apply this approach to several applications: image composition, attribute modification, image completion, and multimodal editing. They also present some analysis on the independence of semantic parts of an image.
SP:da8ca392a4eb366f4fdedb09d461ef804615b0b2
A Trainable Optimal Transport Embedding for Feature Aggregation and its Relationship to Attention
1 INTRODUCTION . Many scientific fields such as bioinformatics or natural language processing ( NLP ) require processing sets of features with positional information ( biological sequences , or sentences represented by a set of local features ) . These objects are delicate to manipulate due to varying lengths and potentially long-range dependencies between their elements . For many tasks , the difficulty is even greater since the sets can be arbitrarily large , or only provided with few labels , or both . Deep learning architectures specifically designed for sets have recently been proposed ( Lee et al. , 2019 ; Skianis et al. , 2020 ) . Our experiments show that these architectures perform well for NLP tasks , but achieve mixed performance for long biological sequences of varying size with few labeled data . Some of these models use attention ( Bahdanau et al. , 2015 ) , a classical mechanism for aggregating features . Its typical implementation is the transformer ( Vaswani et al. , 2017 ) , which has been shown to achieve state-of-the-art results for many sequence modeling tasks , e.g. , in NLP ( Devlin et al. , 2019 ) or in bioinformatics ( Rives et al. , 2019 ) , when trained with self supervision on large-scale data . Beyond sequence modeling , we are interested in this paper in finding a good representation for sets of features of potentially diverse sizes , with or without positional information , when the amount of training data may be scarce . To this end , we introduce a trainable embedding , which can operate directly on the feature set or be combined with existing deep approaches . More precisely , our embedding marries ideas from optimal transport ( OT ) theory ( Peyré & Cuturi , 2019 ) and kernel methods ( Schölkopf & Smola , 2001 ) . We call this embedding OTKE ( Optimal Transport Kernel Embedding ) . Concretely , we embed feature vectors of a given set to a reproducing kernel Hilbert space ( RKHS ) and then perform a weighted pooling operation , with weights given by the transport plan between the set and a trainable reference . To gain scalability , we then obtain a finite-dimensional embedding by using kernel approximation techniques ( Williams & Seeger , 2001 ) . The motivation for using kernels is to provide a non-linear transformation of the input features before pooling , whereas optimal transport allows us to align the features on a trainable reference with fast algorithms ( Cuturi , 2013 ) . Such a combination provides us with a theoretically grounded , fixed-size embedding that can be learned either without any label , or with supervision . Our embedding can indeed become adaptive to the problem at hand , by optimizing the reference with respect to a given task . It can operate on large sets with varying size , model long-range dependencies when positional information is present , and scales gracefully to large datasets . We demonstrate its effectiveness on biological sequence classification tasks , including protein fold recognition and detection of chromatin profiles where we achieve state-of-the-art results . We also show promising results in natural language processing tasks , where our method outperforms strong baselines . Contributions . In summary , our contribution is three-fold .
We propose a new method to embed sets of features of varying sizes to fixed size representations that are well adapted to downstream machine learning tasks , and whose parameters can be learned in either unsupervised or supervised fashion . We demonstrate the scalability and effectiveness of our approach on biological and natural language sequences . We provide an open-source implementation of our embedding that can be used alone or as a module in larger learning models . 2 RELATED WORK . Kernel methods for sets and OT-based kernels . The kernel associated with our embedding belongs to the family of match kernels ( Lyu , 2004 ; Tolias et al. , 2013 ) , which compare all pairs of features between two sets via a similarity function . Another line of research builds kernels by matching features through the Wasserstein distance . A few of them are shown to be positive definite ( Gardner et al. , 2018 ) and/or fast to compute ( Rabin et al. , 2011 ; Kolouri et al. , 2016 ) . Except for a few hyperparameters , these kernels can not , however , be trained end-to-end , as opposed to our embedding that relies on a trainable reference . Efficient and trainable kernel embeddings for biological sequences have also been proposed by Chen et al . ( 2019a ; b ) . Our work can be seen as an extension of these earlier approaches by using optimal transport rather than mean pooling for aggregating local features , which performs significantly better for long sequences in practice . Deep learning for sets . Deep Sets ( Zaheer et al. , 2017 ) feed each element of an input set into a feed-forward neural network . The outputs are aggregated following a simple pooling operation before further processing . Lee et al . ( 2019 ) propose a Transformer-inspired encoder-decoder architecture for sets which also uses latent variables . Skianis et al . ( 2020 ) compute some comparison costs between an input set and reference sets . These costs are then used as features in a subsequent neural network . The reference sets are learned end-to-end . Unlike our approach , such models do not allow unsupervised learning . We will use the last two approaches as baselines in our experiments . Interpretations of attention . Using the transport plan as an ad-hoc attention score was proposed by Chen et al . ( 2019c ) in the context of network embedding to align data modalities . Our paper goes beyond and uses the transport plan as a principle for pooling a set in a model , with trainable parameters . Tsai et al . ( 2019 ) provide a view of Transformer ’ s attention via kernel methods , yet in a very different fashion where attention is cast as kernel smoothing and not as a kernel embedding . 3 PROPOSED EMBEDDING . 3.1 PRELIMINARIES . We handle sets of features in $\mathbb{R}^d$ and consider sets $x$ living in $\mathcal{X} = \{ x \mid x = \{ x_1 , \ldots , x_n \} \text{ such that } x_1 , \ldots , x_n \in \mathbb{R}^d \text{ for some } n \geq 1 \}$ . Elements of $\mathcal{X}$ are typically vector representations of local data structures , such as k-mers for sequences , patches for natural images , or words for sentences . The size of $x$ , denoted by $n$ , may vary , which is not an issue since the methods we introduce may take a sequence of any size as input , while providing a fixed-size embedding . We now revisit important results on optimal transport and kernel methods , which will be useful to describe our embedding and its computation algorithms . Optimal transport .
Our pooling mechanism will be based on the transport plan between $x$ and $x'$ seen as weighted point clouds or discrete measures , which is a by-product of the optimal transport problem ( Villani , 2008 ; Peyré & Cuturi , 2019 ) . OT has indeed been widely used in alignment problems ( Grave et al. , 2019 ) . Throughout the paper , we will refer to the Kantorovich relaxation of OT with entropic regularization , detailed for example in ( Peyré & Cuturi , 2019 ) . Let $a \in \Delta_n$ ( probability simplex ) and $b \in \Delta_{n'}$ be the weights of the discrete measures $\sum_i a_i \delta_{x_i}$ and $\sum_j b_j \delta_{x'_j}$ with respective locations $x$ and $x'$ , where $\delta_x$ is the Dirac at position $x$ . Let $C \in \mathbb{R}^{n \times n'}$ be a matrix representing the pairwise costs for aligning the elements of $x$ and $x'$ . The entropic regularized Kantorovich relaxation of OT from $x$ to $x'$ is $\min_{P \in U(a,b)} \sum_{ij} C_{ij} P_{ij} - \varepsilon H ( P )$ , ( 1 ) where $H ( P ) = - \sum_{ij} P_{ij} ( \log ( P_{ij} ) - 1 )$ is the entropic regularization with parameter $\varepsilon$ , which controls the sparsity of $P$ , and $U$ is the space of admissible couplings between $a$ and $b$ : $U ( a , b ) = \{ P \in \mathbb{R}_+^{n \times n'} : P \mathbf{1}_{n'} = a \text{ and } P^\top \mathbf{1}_n = b \}$ . The problem is typically solved by using a matrix scaling procedure known as Sinkhorn ’ s algorithm ( Sinkhorn & Knopp , 1967 ; Cuturi , 2013 ) . In practice , $a$ and $b$ are uniform measures since we consider the mass to be evenly distributed between the points . $P$ is called the transport plan , which carries the information on how to distribute the mass of $x$ in $x'$ with minimal cost . Our method uses optimal transport to align features of a given set to a learned reference . Kernel methods . Kernel methods ( Schölkopf & Smola , 2001 ) map data living in a space $\mathcal{X}$ to a reproducing kernel Hilbert space $\mathcal{H}$ , associated to a positive definite kernel $K$ through a mapping function $\varphi : \mathcal{X} \to \mathcal{H}$ , such that $K ( x , x' ) = \langle \varphi ( x ) , \varphi ( x' ) \rangle_{\mathcal{H}}$ . Even though $\varphi ( x )$ may be infinite-dimensional , classical kernel approximation techniques ( Williams & Seeger , 2001 ) provide finite-dimensional embeddings $\psi ( x ) \in \mathbb{R}^k$ such that $K ( x , x' ) \approx \langle \psi ( x ) , \psi ( x' ) \rangle$ . Our embedding for sets relies in part on kernel method principles and on such a finite-dimensional approximation . 3.2 OPTIMAL TRANSPORT EMBEDDING AND ASSOCIATED KERNEL . We now present the OTKE , an embedding and pooling layer which aggregates a variable-size set or sequence of features into a fixed-size embedding . We start with an infinite-dimensional variant living in a RKHS , before introducing the finite-dimensional embedding that we use in practice . Infinite-dimensional embedding in RKHS . Given a set $x$ and a ( learned ) reference $z$ in $\mathcal{X}$ with $p$ elements , we consider an embedding $\Phi_z ( x )$ which performs the following operations : ( i ) initial embedding of the elements of $x$ and $z$ to a RKHS $\mathcal{H}$ ; ( ii ) alignment of the elements of $x$ to the elements of $z$ via optimal transport ; ( iii ) weighted linear pooling of the elements of $x$ into $p$ bins , producing an embedding $\Phi_z ( x )$ in $\mathcal{H}^p$ , which is illustrated in Figure 1 . Before introducing more formal details , we note that our embedding relies on two main ideas : • Global similarity-based pooling using references . Learning on large sets with long-range interactions may benefit from pooling to reduce the number of feature vectors . Our pooling rule follows an inductive bias akin to that of self-attention : elements that are relevant to each other for the task at hand should be pooled together .
To this end , each element in the reference set corresponds to a pooling cell , where the elements of the input set are aggregated through a weighted sum . The weights simply reflect the similarity between the vectors of the input set and the current vector in the reference . Importantly , using a reference set enables us to reduce the size of the “ attention matrix ” from quadratic to linear in the length of the input sequence . • Computing similarity weights via optimal transport . A computationally efficient similarity score between two elements is their dot-product ( Vaswani et al. , 2017 ) . In this paper , we rather consider that elements of the input set should be pooled together if they align well with the same part of the reference . Alignment scores can efficiently be obtained by computing the transport plan between the input and the reference sets : Sinkhorn ’ s algorithm indeed enjoys fast solvers ( Cuturi , 2013 ) . We are now in a position to give a formal definition . Definition 3.1 ( The optimal transport kernel embedding ) . Let $x = ( x_1 , \ldots , x_n )$ in $\mathcal{X}$ be an input set of feature vectors and $z = ( z_1 , \ldots , z_p )$ in $\mathcal{X}$ be a reference set with $p$ elements . Let $\kappa$ be a positive definite kernel , e.g. , a Gaussian kernel , with RKHS $\mathcal{H}$ and $\varphi : \mathbb{R}^d \to \mathcal{H}$ , its associated kernel embedding . Let $\boldsymbol{\kappa}$ be the $n \times p$ matrix which carries the comparisons $\kappa ( x_i , z_j )$ , before alignment . Then , the transport plan between $x$ and $z$ , denoted by the $n \times p$ matrix $P ( x , z )$ , is defined as the unique solution of ( 1 ) when choosing the cost $C = -\boldsymbol{\kappa}$ , and our embedding is defined as $\Phi_z ( x ) := \sqrt{p} \times \left( \sum_{i=1}^n P ( x , z )_{i1} \, \varphi ( x_i ) , \ldots , \sum_{i=1}^n P ( x , z )_{ip} \, \varphi ( x_i ) \right) = \sqrt{p} \times P ( x , z )^\top \varphi ( x )$ , where $\varphi ( x ) := [ \varphi ( x_1 ) , \ldots , \varphi ( x_n ) ]^\top$ . Interestingly , it is easy to show that our embedding $\Phi_z ( x )$ is associated to the positive definite kernel $K_z ( x , x' ) := \sum_{i , i'} P_z ( x , x' )_{ii'} \, \kappa ( x_i , x'_{i'} ) = \langle \Phi_z ( x ) , \Phi_z ( x' ) \rangle$ , ( 2 ) with $P_z ( x , x' ) := p \times P ( x , z ) P ( x' , z )^\top$ . This is a weighted match kernel , with weights given by optimal transport in $\mathcal{H}$ . The notion of pooling in the RKHS $\mathcal{H}$ of $\kappa$ arises naturally if $p \leq n$ . The elements of $x$ are non-linearly embedded and then aggregated in “ buckets ” , one for each element in the reference $z$ , given the values of $P ( x , z )$ . This process is illustrated in Figure 1 . We acknowledge here the concurrent work by Kolouri et al . ( 2021 ) , where a similar embedding is used for graph representation . We now expose the benefits of this kernel formulation , and its relation to a classical non-positive definite kernel . Kernel interpretation . Thanks to the gluing lemma ( see , e.g. , Peyré & Cuturi , 2019 ) , $P_z ( x , x' )$ is a valid transport plan and , empirically , a rough approximation of $P ( x , x' )$ . $K_z$ can therefore be seen as a surrogate of a well-known kernel ( Rubner et al. , 2000 ) , defined as $K_{OT} ( x , x' ) := \sum_{i , i'} P ( x , x' )_{ii'} \, \kappa ( x_i , x'_{i'} )$ . ( 3 ) When the entropic regularization $\varepsilon$ is equal to 0 , $K_{OT}$ is equivalent to the 2-Wasserstein distance $W_2 ( x , x' )$ with ground metric $d_\kappa$ induced by the kernel $\kappa$ . $K_{OT}$ is generally not positive definite ( see Peyré & Cuturi ( 2019 ) , Chapter 8.3 ) and computationally costly ( the number of transport plans to compute is quadratic in the number of sets to process whereas it is linear for $K_z$ ) . Now , we show the relationship between this kernel and our kernel $K_z$ , which is proved in Appendix B.1 .
Lemma 3.1 ( Relation between $P ( x , x' )$ and $P_z ( x , x' )$ when $\varepsilon = 0$ ) . For any $x$ , $x'$ and $z$ in $\mathcal{X}$ with lengths $n$ , $n'$ and $p$ , by denoting $W_2^z ( x , x' ) := \langle P_z ( x , x' ) , d_\kappa^2 ( x , x' ) \rangle^{1/2}$ we have $| W_2 ( x , x' ) - W_2^z ( x , x' ) | \leq 2 \min ( W_2 ( x , z ) , W_2 ( x' , z ) )$ . ( 4 ) This lemma shows that the distance $W_2^z$ resulting from $K_z$ is related to the Wasserstein distance $W_2$ ; yet , this relation should not be interpreted as an approximation error as our goal is not to approximate $W_2$ , but rather to derive a trainable embedding $\Phi_z ( x )$ with good computational properties . Lemma 3.1 roots our features and to some extent self-attention in a rich optimal transport literature . In fact , $W_2^z$ is equivalent to a distance introduced by Wang et al . ( 2013 ) , whose properties are further studied by Moosmüller & Cloninger ( 2020 ) . A major difference is that $W_2^z$ crucially relies on Sinkhorn ’ s algorithm so that the references can be learned end-to-end , as explained below .
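To illustrate the mechanics end to end, the following NumPy sketch computes $\Phi_z(x)$ with a Gaussian $\kappa$ and uniform marginals, pooling the raw features in place of the RKHS map $\varphi$, a simplifying assumption, since the paper relies on a finite-dimensional kernel approximation. The regularization `eps`, the bandwidth `sigma`, and the iteration count are illustrative.

```python
import numpy as np

def sinkhorn(K, a, b, n_iters=50):
    """Sinkhorn matrix scaling for entropic OT; K = exp(-C / eps) is the
    Gibbs kernel of the cost matrix C. Returns the transport plan P."""
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(n_iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]

def otke_embedding(X, Z, eps=0.1, sigma=1.0):
    """Sketch of Phi_z(x) = sqrt(p) * P(x, z)^T phi(x), with the identity map
    standing in for phi (an assumption made for brevity)."""
    n, p = X.shape[0], Z.shape[0]
    sq = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    kappa = np.exp(-sq / (2.0 * sigma ** 2))   # Gaussian kernel kappa(x_i, z_j)
    K = np.exp(kappa / eps)                    # cost C = -kappa => K = exp(kappa / eps)
    a = np.full(n, 1.0 / n)                    # uniform input weights
    b = np.full(p, 1.0 / p)                    # uniform reference weights
    P = sinkhorn(K, a, b)                      # n x p transport plan P(x, z)
    return np.sqrt(p) * P.T @ X                # (p, d) fixed-size embedding

rng = np.random.default_rng(0)
emb = otke_embedding(rng.standard_normal((120, 16)), rng.standard_normal((8, 16)))
print(emb.shape)  # (8, 16): fixed size regardless of the input length n
```

Two input sets of different lengths map to embeddings of the same shape, and making `Z` a trainable parameter (with Sinkhorn unrolled in an autodiff framework) is what allows the reference to be learned end-to-end.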
The authors propose a new way to aggregate the embeddings of elements in a set (or sequence) by comparing it with respect to (trainable) reference set(s) via Optimal Transport (OT). The motivation to build such a pooling operation is derived from self-attention and the authors suggest an OT spin to that (e.g., the different reference sets/measures can be thought of as different heads in attention). This is, however, done in a principled way with the help of kernel embeddings and not just ad-hoc using the transport plan as the attention matrix.
SP:c0e827c33dbc9378404fe2a0949198cb74f13688
Combining Imitation and Reinforcement Learning with Free Energy Principle
1 INTRODUCTION . Imitation Learning ( IL ) is a framework to learn a policy to mimic expert trajectories . As the expert specifies model behaviors , there is no need to do exploration or to design complex reward functions . Reinforcement Learning ( RL ) does not have these features , so RL agents have no clue how to realize desired behaviors in sparse-reward settings , and even when RL succeeds in reward maximization , the policy does not necessarily achieve the behaviors that the reward designer expected . The key drawbacks of IL are that the policy never exceeds the suboptimal expert performance and that the policy is vulnerable to distributional shift . Meanwhile , RL can achieve super-human performance and has the potential to transfer the policy to new tasks . As real-world applications often need high sample efficiency and little preparation ( rough rewards and suboptimal experts ) , it is important to find a way to effectively combine IL and RL . When the sensory inputs are high-dimensional images as in the real world , behavior learning such as IL and RL would be difficult without representation or model learning . Free Energy Principle ( FEP ) , a unified brain theory in computational neuroscience that explains perception , action and model learning in a Bayesian probabilistic way ( Friston et al. , 2006 ; Friston , 2010 ) , can handle behavior learning and model learning at the same time . In FEP , the brain has a generative model of the world and computes a mathematical quantity called Free Energy using the model prediction and sensory inputs to the brain . By minimizing the Free Energy , the brain achieves model learning and behavior learning . Prior work about FEP only dealt with limited situations where a part of the generative model is given and the task is very low dimensional . As there is a lot in common between FEP and variational inference in machine learning , recent advancements in deep learning and latent variable models could be applied to scale up FEP agents to be compatible with high-dimensional tasks . Recent work in model-based reinforcement learning succeeds in latent planning from high-dimensional image inputs by incorporating latent dynamics models . Behaviors can be derived either by imagined-reward maximization ( Ha & Schmidhuber , 2018 ; Hafner et al. , 2019a ) or by online planning ( Hafner et al. , 2019b ) . Although solving high-dimensional visual control tasks with model-based methods is becoming feasible , prior methods have never tried to combine them with imitation . In this paper , we propose Deep Free Energy Network ( FENet ) , an agent that combines the advantages of IL and RL so that the policy roughly learns from suboptimal expert data without the need of exploration or detailed reward crafting in the first place , then learns from sparsely specified reward functions to exceed the suboptimal expert performance . The key contributions of this work are summarized as follows : • Extension of Free Energy Principle : We theoretically extend the Free Energy Principle , introducing a policy prior and a policy posterior to combine IL and RL . We implement the proposed method on top of Recurrent State Space Model ( Hafner et al. , 2019b ) , a latent dynamics model with both deterministic and stochastic components . • Visual control tasks in realistic problem settings : We solve Cheetah-run , Walker-walk , and Quadruped-walk tasks from DeepMind Control Suite ( Tassa et al. , 2018 ) .
We do not only use the default problem settings ; we also set up problems with sparse rewards and with suboptimal experts . We demonstrate that our agent outperforms model-based RL using Recurrent State Space Model in sparse-reward settings . We also show that our agent can achieve higher returns than Behavioral Cloning ( IL ) with suboptimal experts . 2 BACKGROUND ON FREE ENERGY PRINCIPLE . 2.1 PROBLEM SETUPS . We formulate visual control as a partially observable Markov decision process ( POMDP ) with discrete time steps $t$ , observations $o_t$ , hidden states $s_t$ , continuous action vectors $a_t$ , and scalar rewards $r_t$ . The goal is to develop an agent that maximizes the expected return $\mathbb{E} [ \sum_{t=1}^T r_t ]$ . 2.2 FREE ENERGY PRINCIPLE . Perception , action and model learning are all achieved by minimizing the same objective function , Free Energy ( Friston et al. , 2006 ; Friston , 2010 ) . In FEP , the agent is equipped with a generative model of the world , using a prior $p ( s_t )$ and a likelihood $p ( o_t | s_t )$ . $p ( o_t , s_t ) = p ( o_t | s_t ) \, p ( s_t )$ ( 1 ) Perceptual Inference Under the generative model , the posterior probability of hidden states given observations is calculated with Bayes ' theorem as follows . $p ( s_t | o_t ) = \frac{ p ( o_t | s_t ) \, p ( s_t ) }{ p ( o_t ) } , \quad p ( o_t ) = \int p ( o_t | s_t ) \, p ( s_t ) \, ds_t$ ( 2 ) Since we cannot compute $p ( o_t )$ due to the integral , we think of approximating $p ( s_t | o_t )$ with a variational posterior $q ( s_t )$ by minimizing the KL divergence $KL ( q ( s_t ) \| p ( s_t | o_t ) )$ . $KL ( q ( s_t ) \| p ( s_t | o_t ) ) = \ln p ( o_t ) + KL ( q ( s_t ) \| p ( o_t , s_t ) )$ ( 3 ) $F_t = KL ( q ( s_t ) \| p ( o_t , s_t ) )$ ( 4 ) We define the Free Energy as ( eq.4 ) . Since $p ( o_t )$ does not depend on $s_t$ , we can minimize ( eq.3 ) w.r.t . the parameters of the variational posterior by minimizing the Free Energy . Thus , the agent can infer the hidden states of the observations by minimizing $F_t$ . This process is called ' perceptual inference ' in FEP . Perceptual Learning Free Energy is the same quantity as the negative Evidence Lower Bound ( ELBO ) in variational inference often seen in machine learning , as follows . $\ln p ( o_t ) \geq -F_t$ ( 5 ) By minimizing $F_t$ w.r.t . the parameters of the prior and the likelihood , the generative model learns to best explain the observations . This process is called ' perceptual learning ' in FEP . Active Inference We can assume that the prior is conditioned on the hidden states and actions at the previous time step as follows . $p ( s_t ) = p ( s_t | s_{t-1} , a_{t-1} )$ ( 6 ) The agent can change the future by choosing actions . Suppose the agent chooses $a_t$ when it is at $s_t$ ; the prior can then predict the next hidden state $s_{t+1}$ . Thus , we can think of the Expected Free Energy $G_{t+1}$ at the next time step $t+1$ as follows ( Friston et al. , 2015 ) . $G_{t+1} = KL ( q ( s_{t+1} ) \| p ( o_{t+1} , s_{t+1} ) ) = \mathbb{E}_{q ( s_{t+1} )} [ \ln q ( s_{t+1} ) - \ln p ( o_{t+1} , s_{t+1} ) ] = \mathbb{E}_{q ( s_{t+1} ) p ( o_{t+1} | s_{t+1} )} [ \ln q ( s_{t+1} ) - \ln p ( o_{t+1} , s_{t+1} ) ]$ ( 7 ) $= \mathbb{E}_{q ( s_{t+1} ) p ( o_{t+1} | s_{t+1} )} [ \ln q ( s_{t+1} ) - \ln p ( s_{t+1} | o_{t+1} ) - \ln p ( o_{t+1} ) ] \approx \mathbb{E}_{q ( o_{t+1} , s_{t+1} )} [ \ln q ( s_{t+1} ) - \ln q ( s_{t+1} | o_{t+1} ) - \ln p ( o_{t+1} ) ]$ ( 8 ) $= \mathbb{E}_{q ( o_{t+1} )} [ - KL ( q ( s_{t+1} | o_{t+1} ) \| q ( s_{t+1} ) ) - \ln p ( o_{t+1} ) ]$ ( 9 ) Since the agent has not experienced time step $t+1$ yet and has not received observations $o_{t+1}$ , we take the expectation over $o_{t+1}$ using the likelihood $p ( o_{t+1} | s_{t+1} )$ as in ( eq.7 ) . In ( eq.8 ) , we approximate $p ( o_{t+1} | s_{t+1} )$ as $q ( o_{t+1} | s_{t+1} )$ and $p ( s_{t+1} | o_{t+1} )$ as $q ( s_{t+1} | o_{t+1} )$ . According to the complete class theorem ( Friston et al. , 2012 ) ,
any scalar rewards can be encoded as observation priors using $p ( o ) \propto \exp r ( o )$ , and the second term in ( eq.9 ) becomes a goal-directed value . This observation prior $p ( o_{t+1} )$ can also be regarded as the probability of the optimality variable $p ( O_{t+1} = 1 | o_{t+1} )$ , where the binary optimality variable $O_{t+1} = 1$ denotes that time step $t+1$ is optimal and $O_{t+1} = 0$ denotes that it is not optimal , as introduced in the context of control as probabilistic inference ( Levine , 2018 ) . The first term in ( eq.9 ) is called epistemic value , which works as intrinsic motivation to further explore the world . Minimization of $- KL ( q ( s_{t+1} | o_{t+1} ) \| q ( s_{t+1} ) )$ means that the agent tries to experience states $s_{t+1}$ that are as diverse as possible given some imagined observations $o_{t+1}$ . By minimizing the Expected Free Energy , the agent can infer the actions that explore the world and maximize rewards . This process is called ' active inference ' . 3 DEEP FREE ENERGY NETWORK ( FENET ) . Perceptual learning deals with learning the generative model to best explain the agent ' s sensory inputs . If we think of not only observations but also actions given by the expert as a part of the sensory inputs , we can explain imitation learning by using the concept of perceptual learning . Active inference deals with exploration and reward maximization , so it is compatible with reinforcement learning . By minimizing the same objective function , the Free Energy , we can deal with both imitation and RL . In this section , we first introduce a policy prior for imitation and a policy posterior for RL . Second , we extend the Free Energy Principle to be able to accommodate these two policies in the same objective function , the Free Energy . Finally , we explain a detailed network architecture to implement the proposed method for solving image control tasks . 3.1 INTRODUCING A POLICY PRIOR AND A POLICY POSTERIOR . Free Energy We extend the Free Energy from ( eq.4 ) so that actions are a part of the sensory inputs that the generative model tries to explain . $F_t = KL ( q ( s_t ) \| p ( o_t , s_t , a_t ) ) = KL ( q ( s_t ) \| p ( o_t | s_t ) \, p ( a_t | s_t ) \, p ( s_t | s_{t-1} , a_{t-1} ) )$ ( 10 ) $= \mathbb{E}_{q ( s_t )} \left[ \ln \frac{ q ( s_t ) }{ p ( o_t | s_t ) \, p ( a_t | s_t ) \, p ( s_t | s_{t-1} , a_{t-1} ) } \right]$ ( 11 ) $= \mathbb{E}_{q ( s_t )} [ - \ln p ( o_t | s_t ) - \ln p ( a_t | s_t ) + \ln q ( s_t ) - \ln p ( s_t | s_{t-1} , a_{t-1} ) ]$ ( 12 ) $= \mathbb{E}_{q ( s_t )} [ - \ln p ( o_t | s_t ) - \ln p ( a_t | s_t ) ] + KL ( q ( s_t ) \| p ( s_t | s_{t-1} , a_{t-1} ) )$ ( 13 ) We define $p ( a_t | s_t )$ as a policy prior . When the agent observes expert trajectories , by minimizing $F_t$ , the policy prior will be learned so that it can best explain the experts . Besides the policy prior , we introduce and define a policy posterior $q ( a_t | s_t )$ , which is the very policy that the agent samples from when interacting with its environments . We explain how to learn the policy posterior in the following . Expected Free Energy for imitation In a similar manner to active inference in Section 2.2 , we think of the Expected Free Energy $G_{t+1}$ at the next time step $t+1$ , but this time we take the expectation over the policy posterior $q ( a_t | s_t )$ because $G_{t+1}$ is a value expected under the next actions . Note that in Section 2.2 $a_t$ was given as a certain value , but here $a_t$ is sampled from the policy posterior . We calculate the expected variational posterior at time step $t+1$ as follows .
q(st+1) = E_{q(st) q(at|st)}[p(st+1|st, at)]  (14)

q(ot+1, st+1, at+1) = E_{q(st+1)}[p(ot+1|st+1) q(at+1|st+1)]  (15)

We extend the Expected Free Energy from (eq. 12) so that the variational posterior makes inference on actions as follows:

G^IL_{t+1} = E_{q(ot+1, st+1, at+1)}[−ln p(ot+1|st+1) − ln p(at+1|st+1) + ln q(st+1, at+1) − ln p(st+1|st, at)]  (16)
          = E_{q(ot+1, st+1, at+1)}[−ln p(ot+1|st+1) − ln p(at+1|st+1) + ln q(at+1|st+1)] + KL(q(st+1) || p(st+1|st, at))  (17)
          = E_{q(ot+1, st+1)}[−ln p(ot+1|st+1) + KL(q(at+1|st+1) || p(at+1|st+1))] + KL(q(st+1) || p(st+1|st, at))  (18)
          = E_{q(ot+1, st+1)}[−ln p(ot+1|st+1) + KL(q(at+1|st+1) || p(at+1|st+1))] + 0  (19)
          = E_{q(st+1)}[H[p(ot+1|st+1)] + KL(q(at+1|st+1) || p(at+1|st+1))]  (20)

In (eq. 20), the first term is the entropy of the observation likelihood, and the second term is the KL divergence between the policy prior and the policy posterior. By minimizing G^IL_{t+1}, the agent learns the policy posterior so that it matches the policy prior, which has been learned through minimizing Ft to encode the experts' behavior.

Expected Free Energy for RL. We can derive the Expected Free Energy in a different way, with a reward component r(ot+1) that leads the policy posterior to maximize rewards. We extend the Expected Free Energy from (eq. 8) so that the variational posterior makes inference on actions as follows:

G^RL_{t+1} = E_{q(ot+1, st+1, at+1)}[ln q(st+1, at+1) − ln p(at+1|st+1) − ln q(st+1|ot+1) − ln p(ot+1)]  (21)
          = E_{q(ot+1, st+1)}[ln q(st+1) − ln q(st+1|ot+1) + KL(q(at+1|st+1) || p(at+1|st+1)) − ln p(ot+1)]  (22)
          = E_{q(ot+1)}[−KL(q(st+1|ot+1) || q(st+1)) − ln p(ot+1)] + E_{q(st+1)}[KL(q(at+1|st+1) || p(at+1|st+1))]  (23)
          ≈ E_{q(ot+1)}[−KL(q(st+1|ot+1) || q(st+1)) − r(ot+1)] + E_{q(st+1)}[KL(q(at+1|st+1) || p(at+1|st+1))]  (24)

In a similar manner to active inference in Section 2.2, we use p(o) ∝ exp r(o) in (eq. 24). The first KL term is the epistemic value that lets the agent explore the world, the second term is the expected reward under actions sampled from the policy posterior, and the last KL term is the KL divergence between the policy prior and the policy posterior. The last KL term can be written as (eq. 25), meaning that minimizing it maximizes the entropy of the policy posterior while the policy posterior tries to match the policy prior. Thus, the Expected Free Energy can be regarded as a form of entropy-maximizing RL:

KL(q(at+1|st+1) || p(at+1|st+1)) = −H[q(at+1|st+1)] − E_{q(at+1|st+1)}[ln p(at+1|st+1)]  (25)

Note that q(ot+1) in (eq. 24) can be calculated as follows:

q(ot+1) = E_{q(st+1)}[p(ot+1|st+1)]  (26)

By minimizing G^RL_{t+1}, the agent learns the policy posterior so that it explores the world and maximizes the reward, as long as it does not deviate too much from the policy prior, which has encoded the experts' behavior through minimizing Ft.
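To make these objectives concrete, the sketch below shows how Ft (eq. 13) and the two expected free energies (eqs. 20 and 24) could be estimated with single Monte-Carlo samples. This is our own illustrative reading, not the paper's released code: the module names (enc, dec, trans, policy_prior, policy_post), the Gaussian parameterizations, the unit decoder variance, and the omission of the epistemic term in G^RL are all simplifying assumptions.

```python
import torch
import torch.distributions as td

def free_energy(enc, dec, trans, policy_prior, o_t, a_prev, a_t, s_prev):
    # q(s_t | o_t): variational posterior; enc is assumed to return (mean, std)
    q_s = td.Normal(*enc(o_t))
    s_t = q_s.rsample()
    # F_t = E_q[-ln p(o|s) - ln p(a|s)] + KL(q(s) || p(s|s_prev, a_prev))  (eq. 13)
    recon = -td.Normal(dec(s_t), 1.0).log_prob(o_t).sum(-1)
    action_ll = -td.Normal(*policy_prior(s_t)).log_prob(a_t).sum(-1)
    kl = td.kl_divergence(q_s, td.Normal(*trans(s_prev, a_prev))).sum(-1)
    return (recon + action_ll + kl).mean(), s_t

def expected_free_energies(dec, trans, policy_prior, policy_post, s_t, a_t, reward_fn):
    # q(s_{t+1}) pushed through the transition model (eq. 14)
    s_next = td.Normal(*trans(s_t, a_t)).rsample()
    # KL(q(a|s) || p(a|s)): policy posterior vs. policy prior
    kl_pi = td.kl_divergence(td.Normal(*policy_post(s_next)),
                             td.Normal(*policy_prior(s_next))).sum(-1)
    # G_IL (eq. 20): likelihood entropy is constant for a fixed decoder variance,
    # so only the policy KL matters for optimization here.
    g_il = kl_pi.mean()
    # G_RL (eq. 24): -expected reward + policy KL (epistemic term omitted).
    o_next = td.Normal(dec(s_next), 1.0).rsample()
    g_rl = (-reward_fn(o_next) + kl_pi).mean()
    return g_il, g_rl
```

In this reading, a single gradient step would minimize a weighted sum of the three returned quantities, with the weights treated as hyperparameters.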
This paper extends the "free energy principle" and active inference and explains how to apply them to RL and imitation learning. The authors implement a neural network approximation of the losses derived this way and test it on some control tasks. Importantly, the tasks focused on here are imitation + control tasks: there is both a reward signal and demonstration trajectories, and the demonstrations may be suboptimal. They compare against PLaNet, a latent-planning-based approach.
SP:a85b6d598513c8e03a013fd20da6b19a1108f71e
Investigating and Simplifying Masking-based Saliency Methods for Model Interpretability
1 INTRODUCTION . The success of CNNs ( Krizhevsky et al. , 2012 ; Szegedy et al. , 2015 ; He et al. , 2016 ; Tan & Le , 2019 ) has prompted interest in improving understanding of how these models make their predictions . Particularly in applications such as medical diagnosis , having models explain their predictions can improve trust in them . The main line of work concerning model interpretability has focused on the creation of saliency maps–overlays to an input image that highlight regions most salient to the model in making its predictions . Among these , the most prominent are gradient-based methods ( Simonyan et al. , 2013 ; Sundararajan et al. , 2017 ; Selvaraju et al. , 2018 ) and masking-based methods ( Fong & Vedaldi , 2017 ; Dabkowski & Gal , 2017 ; Fong & Vedaldi , 2018 ; Petsiuk et al. , 2018 ; Chang et al. , 2019 ; Zintgraf et al. , 2017 ) . In recent years , we have witnessed an explosion of research based on these two directions . With a variety of approaches being proposed , framed and evaluated in different ways , it has become difficult to assess and fairly evaluate their additive contributions . In this work , we investigate the class of masking-based saliency methods , where we train a masking model to generate saliency maps based on an explicit optimization objective . Using a general formulation , we iteratively evaluate the extent to which recently proposed ideas in the literature improve performance . In addition to evaluating our models against the commonly used Weakly Supervised Object Localization ( WSOL ) metrics , the Saliency Metric ( SM ) , and the more recently introduced Pixel Average Precision ( PxAP ; Choe et al. , 2020 ) , we also test our final models against a suite of “ sanity checks ” for saliency methods ( Adebayo et al. , 2018 ; Hooker et al. , 2018 ) . Concretely , we make four major contributions . ( 1 ) We find that incorporating both masked-in classification maximization and masked-out entropy maximization objectives leads to the best saliency maps , and continually training the classifier improves the quality of generated maps . ( 2 ) We find that the masking model requires only the top layers of the classifier to effectively generate saliency maps . ( 3 ) Our final model outperforms other masking-based methods on WSOL and PxAP metrics . ( 4 ) We find that a small number of examples—as few as ten per class—is sufficient to train a masker to within the ballpark of our best performing model . 2 RELATED WORK . Interpretability of machine learning models has been an ongoing topic of research ( Ribeiro et al. , 2016 ; Doshi-Velez & Kim , 2017 ; Samek et al. , 2017 ; Lundberg et al. , 2018 ) . In this work , we focus on interpretability methods that involve generating saliency maps for image classification models . An overwhelming majority of the methods for generating saliency maps for image classifiers can be assigned to two broad families : gradient-based methods and masking-based methods . Gradient-based methods , such as using backpropagated gradients ( Simonyan et al. , 2013 ) , Guided Backprop ( Springenberg et al. , 2015 ) , Integrated Gradients ( Sundararajan et al. , 2017 ) , GradCam ( Selvaraju et al. , 2018 ) , SmoothGrad ( Smilkov et al. , 2017 ) and many more , directly use the backpropagated gradients through the classifier to the input to generate saliency maps . Masking-based methods modify input images to alter the classifier behavior and use the regions of modifications as the saliency map . 
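For contrast with the masking-based approaches studied in this paper, the simplest member of the gradient-based family above can be sketched in a few lines. This is a generic illustration (not code from any of the cited papers); the classifier and input preprocessing are assumed given.

```python
import torch

def vanilla_gradient_saliency(model, x, target_class):
    """Backpropagated-gradient saliency in the style of Simonyan et al. (2013)."""
    x = x.clone().requires_grad_(True)   # x: (1, C, H, W), already normalized
    score = model(x)[0, target_class]    # scalar logit for the class of interest
    score.backward()
    # Max over channels of |d score / d pixel| gives an H x W saliency map.
    return x.grad.detach().abs().max(dim=1)[0].squeeze(0)
```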
Within the class of masking-based methods, one line of work optimizes over the masks directly: Fong & Vedaldi (2017) optimize over a perturbation mask for an image, Petsiuk et al. (2018) aggregate over randomly sampled masks, Fong & Vedaldi (2018) perform an extensive search for masks of a given size, while Chang et al. (2019) include a counterfactual mask-infilling model to make the masking objective more challenging. The other line of work trains a separate masking model to produce saliency maps: Dabkowski & Gal (2017) train a model that optimizes objectives similar to those of Fong & Vedaldi (2017), Zolna et al. (2020) use a continually trained pool of classifiers and an adversarial masker to generate model-agnostic saliency maps, while Fan et al. (2017) identify super-pixels from the image and then train the masker in a similarly adversarial manner.

Salient Object Detection (Borji et al., 2014; Wang et al., 2019) is a related line of work that concerns identifying salient objects within an image as an end in itself, not for the purpose of model interpretability. While it is not uncommon for these methods to incorporate a pretrained image classification model to extract learned visual features, they often also incorporate techniques for improving the quality of saliency maps that are orthogonal to model interpretability. Salient object detection methods trained on only image-level labels bear the closest similarity to saliency map generation methods for model interpretability. Hsu et al. (2017) and the follow-up Hsu et al. (2019) train a masking model to confuse a binary image-classification model that predicts whether an image contains an object or is a 'background' image. Wang et al. (2017) apply a smooth pooling operation and a Foreground Inference Network (a masking model) while training an image classifier to generate saliency maps as a secondary output.

Evaluation of saliency maps. The wave of saliency map research has also ignited research on evaluation methods for these saliency maps as model explanations. Adebayo et al. (2018) and Hooker et al. (2018) propose sanity checks and benchmarks for saliency maps. Choe et al. (2020) propose Pixel Average Precision (PxAP), a pixel-wise metric for scoring saliency maps that accounts for mask binarization thresholds, while Yang & Kim (2019) create a set of metrics, as well as artificial datasets interleaving foreground and background objects, for evaluating saliency maps. These works have shown that a number of gradient-based methods fail the sanity checks or perform no better than simple edge detectors. Hence, we choose to focus on masking-based methods in this paper.

3 MASKING-BASED SALIENCY MAP METHODS.

We start by building a general formulation of masking-based saliency map methods. We take as given a trained image classifier F : x → y that maps image inputs x ∈ R^{H×W×C} to class predictions ŷ ∈ [0, 1]^K, evaluated against the ground truth y ∈ {1 · · · K}. Our goal is to generate a mask m ∈ [0, 1]^{H×W} for each image x such that the masked-in image x ⊙ m or the masked-out image x ⊙ (1−m) maximizes some objective based on the output of the classifier given the modified image, where ⊙ denotes element-wise multiplication. For instance, we could attempt to mask out parts of the image to maximally deteriorate the classifier's performance. This mask m then serves as a saliency map for the image x.
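The masked-in and masked-out compositions are just element-wise products; a minimal sketch (ours, with assumed tensor shapes):

```python
import torch

def apply_mask(x, m):
    """x: (B, C, H, W) images; m: (B, 1, H, W) mask with entries in [0, 1]."""
    masked_in = x * m            # keep only the regions the masker deems salient
    masked_out = x * (1.0 - m)   # delete those regions instead
    return masked_in, masked_out
```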
Concretely, the per-image objective can be expressed as:

arg min_m  λ_out L_out(F(x ⊙ (1−m); θ_F), y) + λ_in L_in(F(x ⊙ m; θ_F), y) + R(m),

where L_out, L_in are the masked-out and masked-in objectives over the classifier output, λ_out, λ_in are hyperparameters controlling the weighting of these two objectives, θ_F are the classifier parameters, and R(m) is a regularization term over the mask. The masked-out and masked-in losses, L_out and L_in, correspond to finding the smallest destroying region and the smallest sufficient region, respectively, as described in Dabkowski & Gal (2017). Candidates for L_out include the negative classification cross-entropy and the prediction entropy. For L_in, the obvious candidate is the classification cross-entropy of the masked-in image. We set λ_in = 0 or λ_out = 0 if we only have a masked-out or masked-in objective, respectively. The above formulation subsumes a number of masking-based methods, such as Fong & Vedaldi (2017); Dabkowski & Gal (2017); Zolna et al. (2020). Following Dabkowski & Gal, we amortize the optimization by training a neural network masker M : x → m, and solve for:

arg min_{θ_M}  λ_out L_out(F(x ⊙ (1−M(x; θ_M)); θ_F), y) + λ_in L_in(F(x ⊙ M(x; θ_M); θ_F), y) + R(M(x; θ_M)),

where M is the masking model and θ_M its parameters. In our formulation, we do not provide the masker with the ground-truth label, which differs from certain other masking-based saliency works (Dabkowski & Gal, 2017; Chang et al., 2019; Fong & Vedaldi, 2018). In practice, we often desire model explanations without the availability of ground-truth information, so we focus our investigation on methods that require only an image as input.

3.1 MASKER ARCHITECTURE.

We use an architecture similar to those of Dabkowski & Gal and Zolna et al. The masker takes as input activations across different layers of the classifier, meaning it has access to the internal representation of the classifier for each image. Each layer of activations is fed through a convolutional layer and upsampled (with nearest-neighbor interpolation) so they all share the same spatial resolution. All transformed layers are then concatenated and fed through another convolutional layer, upsampled, and put through a sigmoid operation to obtain a mask of the same resolution as the input image. In all our experiments, we use a ResNet-50 (He et al., 2016) as our classifier, and the masker has access to the outputs of the five major ResNet blocks. Figure 1B shows the architecture of our models. Following prior work (Fong & Vedaldi, 2017), we apply regularization on the generated masks to avoid trivial solutions such as masking the entire image: L1 regularization to limit the size of masks and Total Variation (TV) to encourage smoothness. Details can be found in Appendix A.1.

[Table: model comparison on the metrics OM ↓, LE ↓, SM ↓, PxAP ↑, reported on the Train-Validation and Validation sets.]

3.2 CONTINUAL TRAINING OF THE CLASSIFIER.

Because neural networks are susceptible to adversarial perturbations (Goodfellow et al., 2015), masking models can learn to perturb an input to maximize the above objectives for a given fixed classifier without producing intuitive saliency maps. While directly regularizing the masks is one potential remedy, Zolna et al. (2020) propose to train the masker against a diverse set of classifiers. In practice, they simulate this by continually training the classifier on masked images, retaining a pool of past model checkpoints, and sampling from the pool when training the masker.
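Putting the objective together, a hypothetical masker loss with masked-in cross-entropy, masked-out entropy maximization, and L1/TV mask regularization might look as follows. The specific choice of L_out, the use of the classifier's own top prediction instead of a ground-truth label, and all coefficient values are our assumptions, not the paper's settings.

```python
import torch
import torch.nn.functional as F_nn

def masker_loss(classifier, masker, x,
                lam_in=1.0, lam_out=1.0, lam_l1=1e-3, lam_tv=1e-3):
    m = masker(x)                                   # (B, 1, H, W) in [0, 1]
    logits_in = classifier(x * m)
    logits_out = classifier(x * (1.0 - m))
    # Masked-in: keep enough evidence for the classifier's own top prediction
    # (no ground-truth labels used, matching the formulation in the text).
    with torch.no_grad():
        pseudo_y = classifier(x).argmax(dim=1)
    loss_in = F_nn.cross_entropy(logits_in, pseudo_y)
    # Masked-out: maximize prediction entropy, i.e. minimize its negative.
    log_p = logits_out.log_softmax(dim=1)
    loss_out = (log_p.exp() * log_p).sum(dim=1).mean()
    # Regularizers: L1 on mask area, total variation for smoothness.
    l1 = m.abs().mean()
    tv = (m[..., 1:, :] - m[..., :-1, :]).abs().mean() + \
         (m[..., :, 1:] - m[..., :, :-1]).abs().mean()
    return lam_in * loss_in + lam_out * loss_out + lam_l1 * l1 + lam_tv * tv
```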
We adopt their approach and distinguish between a masker trained against a fixed classifier ( FIX ) and against a pool of continually trained classifiers ( CA , for Classifier-Agnostic ) . We highlight that saliency maps for FIX and CA address fundamentally different notions of saliency . Whereas a FIX approach seeks a saliency map that explains what regions are most salient to a given classifier , a CA approach tries to identify all possible salient regions for any hypothetical classifier ( hence , classifier-agnostic ) . In other words , a CA approach may be inadequate for interpreting a specific classifier and is better suited for identifying salient regions for a class of image classification models .
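A minimal sketch of one classifier-agnostic (CA) training step as described above; the pool size, the snapshot cadence, and the decision to train the live classifier on masked-out images are our own illustrative choices, and masker_loss_fn stands for a loss like the sketch earlier in this section.

```python
import copy
import random
import torch

def ca_training_step(classifier, masker, pool, x, y, opt_cls, opt_msk,
                     masker_loss_fn, max_pool=30):
    # 1) Train the masker against a randomly sampled past classifier.
    frozen = random.choice(pool) if pool else classifier
    opt_msk.zero_grad()
    masker_loss_fn(frozen, masker, x).backward()
    opt_msk.step()
    # 2) Continually train the live classifier on masked images.
    opt_cls.zero_grad()
    with torch.no_grad():
        m = masker(x)
    loss = torch.nn.functional.cross_entropy(classifier(x * (1.0 - m)), y)
    loss.backward()
    opt_cls.step()
    # 3) Snapshot the classifier into the pool, evicting the oldest checkpoint.
    pool.append(copy.deepcopy(classifier).eval())
    if len(pool) > max_pool:
        pool.pop(0)
```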
At first glance, this work does not itself introduce any new architecture or novel algorithm. It takes what are considered the popular choices in generating classifier saliency masks and conducts quite an extensive set of experiments to dissect the components by their importance. The writing is clear in its narrative, and the experimental findings are surprising and significant.
SP:69855e0bec141e9d15eec5cc37022f313e6600b2
No Cost Likelihood Manipulation at Test Time for Making Better Mistakes in Deep Networks
1 INTRODUCTION.

The conventional performance measure of accuracy for image classification treats all classes other than the ground truth as equally wrong. However, some mistakes may have a much higher impact than others in real-world applications: for an autonomous vehicle, mistaking a car for a bus is a better mistake than mistaking a car for a lamppost. Consequently, it is essential to integrate the notion of mistake severity into classifiers, and one convenient way to do so is to use a taxonomic hierarchy tree of class labels, where severity is defined by a distance on the graph (e.g., the height of the Lowest Common Ancestor) between the ground truth and the predicted label (Deng et al., 2010; Zhao et al., 2011). This is similar to the problem of providing a good ranking of classes in a retrieval setting. Consider the case of an autonomous vehicle ranking classes for a thin, white, narrow band (a pole, in reality). A top-3 prediction of {pole, lamppost, tree} would be a better prediction than {pole, person, building}. Notice that the top-k class predictions would contain at least k−1 incorrect predictions here, and the aim is to reduce the severity of these mistakes, measured by the average hierarchical distance of each of the top k predictions from the ground truth. Silla & Freitas (2011) survey classical methods leveraging class hierarchy when designing classifiers across various application domains and illustrate clear advantages over flat-hierarchy classification, especially when the labels have a well-defined hierarchy. There has been growing interest in the problem of deep hierarchy-aware image classification (Barz & Denzler, 2019; Bertinetto et al., 2020). These approaches seek to leverage the class hierarchy inherent in large-scale datasets (e.g., the ImageNet dataset is derived from the WordNet semantic ontology). Hierarchy is incorporated using label embedding methods, hierarchical loss functions, or hierarchical architectures. We empirically found that these models indeed improve the ranking of the top-k predicted classes, ensuring that the top alternative classes are closer in the class hierarchy. However, this improvement is observed only for k > 1. When inspecting closely the top-1 predictions of these models, we observe that instead of improving the mistake severity, they simply introduce additional low-severity mistakes, which in turn favours the mistake-severity metric proposed in Bertinetto et al. (2020). This metric involves division by the number of misclassified samples; therefore, in many situations (discussed in the paper) it can prefer a model making additional low-severity mistakes over one that does not make such mistakes. This is at odds with the intuitive notion of making better mistakes. These additional low-severity mistakes can also explain the significant drop in top-1 accuracy compared to the vanilla cross-entropy model. We also find these models to be highly miscalibrated, which further limits their practical usability. In this work we explore a different direction for hierarchy-aware classification, where we amend mistake severity at test time by making post-hoc corrections over the class likelihoods (e.g., the softmax outputs in the case of deep neural networks).
Given a label hierarchy, we perform such amendments to the likelihood by applying the very well-known and classical approach called Conditional Risk Minimization (CRM). We found that CRM outperforms state-of-the-art deep hierarchy-aware classifiers by large margins at ranking classes, with little loss in classification accuracy. As opposed to other recent approaches, CRM does not hurt the calibration of a model, as the cross-entropy likelihoods can still be used for that purpose. CRM is simple, requires the addition of just a few lines of code to the standard cross-entropy model, does not require retraining of the network, and contains no hyperparameters whatsoever. We would like to emphasize that we do not claim any algorithmic novelty, as CRM has been well explored in the literature (Duda & Hart, 1973, Ch. 2). Almost a decade ago, Deng et al. (2010) proposed a very similar solution using a Support Vector Machine (SVM) classifier applied to handcrafted features. However, this did not result in practically useful performance because of the lack of modern machine learning tools at that time. We intend to bring this old, simple, and extremely effective approach back to attention before delving deeper into sophisticated approaches that require expensive retraining of large neural networks and the design of complex loss functions. Overall, our investigation into hierarchy-aware classification makes the following contributions:

• We highlight a shortcoming in one of the metrics proposed to evaluate hierarchy-aware classification and show that it can easily be fooled, giving the wrong impression of making better mistakes.
• We revisit an old post-hoc correction technique (CRM) which significantly outperforms prior art when the ranking of the predictions made by the model is considered.
• We also investigate the reliability of prior art in terms of calibration and show that these methods are severely miscalibrated, limiting their practical usefulness.

2 RELATED WORKS.

2.1 COST-SENSITIVE CLASSIFICATION.

Cost-sensitive classification assigns varying costs to different types of misclassification errors. The work by Abe et al. (2004) groups cost-sensitive classifiers into three main categories. The first category specifically extends one particular classification model to be cost-sensitive, such as support vector machines (Tu & Lin, 2010) or decision trees (Lomax & Vadera, 2013). The second category makes the training procedure cost-sensitive, which is typically achieved by assigning the training examples of different classes different weights (rescaling) (Zhou & Liu, 2010) or by changing the proportions of each class while training using sampling (rebalancing) (Elkan, 2001). The third category makes the prediction procedure cost-sensitive (Domingos, 1999; Zadrozny & Elkan, 2001a). Such direct cost-sensitive decision-making is the most generic: it considers the underlying classifier as a black box and extends to any number of classes and arbitrary cost matrices. Our work comes under the third category of post-hoc amendment. We study cost-sensitive classification in a large-scale setting (e.g., ImageNet) and explore the use of a taxonomic hierarchy to obtain the misclassification costs.

2.2 HIERARCHY-AWARE CLASSIFICATION.

There is a rich literature on exploiting hierarchies to improve the task of image classification.
Embedding-based methods define each class as a soft embedding vector instead of the typical one-hot encoding. DeViSE (Frome et al., 2013) learns a transformation over image features to maximize the cosine similarity with their respective word2vec label embeddings. The transformation is learned using a ranking loss and places the image embeddings in a semantically meaningful space. Akata et al. (2015); Xian et al. (2016) explore variations of text embeddings and ranking-loss frameworks. Barz & Denzler (2019) project classes onto a hypersphere, such that the correlation of class embeddings equals the semantic similarity of the classes. The semantic similarity is derived from the height of the lowest common ancestor (LCA) in a given hierarchy tree. Another line of work directly alters the loss functions or the algorithms/architectures. Zhao et al. (2011) propose a weighted (hierarchy-aware) multi-class logistic regression formulation. Verma et al. (2012) optimize a context-sensitive loss to learn a separate distance metric for each node in the class taxonomy tree. Wu et al. (2016) combine losses at different levels of the tree by learning separate fully connected layers for each level on top of a shared feature space. Bilal et al. (2017) add branches at different depths of the AlexNet architecture to fuse losses at different levels of the hierarchy. Brust & Denzler (2019) use conditional probability chains to derive a novel label encoding and a corresponding loss function. Most deep learning-based methods overlook the severity of mistakes, and the evaluation revolves around counting the top-k errors. Bertinetto et al. (2020) revived interest in this direction by jointly analyzing the top-k accuracies and the severity of errors. They propose two modifications of cross-entropy to better capture the hierarchy: one based on label embeddings (soft labels), and the other factoring the cross-entropy loss into individual terms for each of the edges in the hierarchy tree and assigning different weights to them (hierarchical cross-entropy, or HXE). Our method uses models trained with vanilla cross-entropy loss and alters the decision rule to pick the class that minimizes the conditional risk, where the condition is imposed using the known class hierarchy. Along similar lines, Deng et al. (2010) study the effect of minimizing the conditional risk on the mean hierarchical cost. They leverage the ImageNet hierarchy for the costs and compute posteriors by fitting a sigmoid function to the SVM's output, or by taking the percentage of neighbours from a class for Nearest Neighbour classification. Our work investigates the relevance of CRM in the deep learning era and highlights the importance of looking beyond mean hierarchical costs, jointly analyzing the roles of accuracy and calibration.

2.3 CALIBRATION OF DEEP NEURAL NETWORKS.

Networks are said to be well-calibrated if their predicted probability estimates are representative of the true correctness likelihood. Calibrated confidence estimates are important for model interpretability and for use in downstream applications. Platt scaling (Platt et al., 1999), histogram binning (Zadrozny & Elkan, 2001b), and isotonic regression (Zadrozny & Elkan, 2002) are three common calibration methods. Although originally proposed for the SVM classifier, their variations are used to improve the calibration of neural networks (Guo et al., 2017).
Calibrated probability estimates are particularly important when cost-sensitive decisions are to be made (Zadrozny & Elkan, 2001b) and are often measured using the Expected Calibration Error (ECE) and the Maximum Calibration Error (MCE) (Niculescu-Mizil & Caruana, 2005; Naeini et al., 2015; Mukhoti et al., 2020). We desire models with high accuracy that have low calibration error and make less severe mistakes. However, there is often a compromise. Studies in cost-sensitive classification (Jan et al., 2012) reveal a trade-off between costs and error rates. The reliability literature aims to obtain better-calibrated deep networks while retaining top-k accuracy (Seo et al., 2019). We further observe that methods like soft labels or hierarchical cross-entropy successfully minimize the average top-k hierarchical cost but result in poorly calibrated networks. In contrast, the proposed framework retains top-k accuracy and good calibration while significantly reducing the hierarchical cost.

3 APPROACH.

The K-class classification problem comes with a training set S = {(x_i, y_i)}_{i=1}^N, where the label y_i ∈ Y = {1, 2, ..., K}. The classifier is a deep neural network f_θ : X → p(Y), parametrized by θ, which maps input samples to a probability distribution over the label space Y. The distribution p(y|x) is typically derived by applying a softmax function to the logits obtained for an input x. Given p(y|x), the network minimizes the cross-entropy with the ground-truth class over samples from the training set and uses SGD to optimize θ, forming the standard hierarchy-agnostic cross-entropy baseline. The decision rule is naturally given by argmax_k p(y = k|x). The classical CRM framework (Duda & Hart, 1973) can be adapted to image classification by taking the trained model with a given θ and incorporating the hierarchy information at deployment time. A symmetric class-relationship matrix C is created using the given hierarchy tree (which can be drawn either from the WordNet ontology or from an application-specific taxonomy), where C_{i,j} is the height of the lowest common ancestor LCA(y_i, y_j) between classes i and j. The height of a node is defined as the number of edges between the given node and the furthest leaf. C_{i,j} is zero when i = j and is bounded by the maximum height of the hierarchy tree. Given an input x, the likelihood p(y|x) is obtained by passing the sample through the network f_θ(x). The only modification we make is to the decision rule, which now selects the class that minimizes the conditional risk R(y = k|x), given by:

argmin_k R(y = k|x) = argmin_k Σ_{j=1}^K C_{k,j} · p(y = j|x)  (1)

For the ease of the reader, we illustrate a four-class example in Figure 1a, comparing predictions obtained using the standard cross-entropy baseline (leaf nodes) and the prediction using CRM (Eq. (1)) for a given class-relationship matrix. Given the probability of each class p(y|x), argmin R(y|x) is the Bayes-optimal prediction. It is guaranteed to achieve the lowest possible overall cost, i.e., the lowest expected cost over all possible examples weighted by their probabilities (Duda & Hart, 1973, Ch. 2). Depending on the cost matrix and p(y|x), the top-1 prediction of CRM applied on top of cross-entropy might differ from the top-1 prediction of the cross-entropy baseline.
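Eq. (1) amounts to a single matrix product at test time. A sketch (ours, not the paper's code), assuming the cost matrix C has already been filled with LCA heights:

```python
import numpy as np

def crm_predict(probs, C, k=1):
    """probs: (B, K) softmax outputs; C: (K, K) symmetric LCA-height cost matrix.
    Returns the k classes with the lowest conditional risk (Eq. 1)."""
    risk = probs @ C            # risk[b, i] = sum_j C[i, j] * p(y = j | x_b)
    return np.argsort(risk, axis=1)[:, :k]
```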
However, because of the overconfident nature of recent deep neural networks, we observe that the top-1 probability of p(y|x) is greater than 0.5 for a significant number of test samples. Below we prove that in such situations, where max p(y|x) exceeds the sum of the other probabilities, the post-hoc correction (CRM) does not change the top-1 prediction, irrespective of the structure of the tree. Since the second-highest probability is guaranteed to be less than 0.5 by definition, our correction can effectively re-rank the classes. Experimentally we find that it significantly reduces the hierarchical distance@k.

Theorem 1. If max(p(y|x)) > 0.5, then argmin_i Σ_{j=1}^K C_{i,j} · p(y = j|x) and argmax p(y|x) are identical irrespective of the tree structure, and both lead to the same top-1 prediction.

Proof. Consider the tree illustrated in Figure 1b: two leaf nodes (class labels) i, j and the subtree T_ij rooted at their Lowest Common Ancestor. Assuming the height of LCA(i, j) is h and argmax p(y|x) = i, the risk R(y = j|x) = R(j) is given by:

R(j) = h · p(i) + Σ_{k ∈ T_ij \ {i}} C_{j,k} · p(k) + Σ_{k ∉ T_ij} C_{j,k} · p(k)

Ignoring the cost of the other nodes inside T_ij, we get R(j) ≥ h · p(i) + Σ_{k ∉ T_ij} C_{j,k} · p(k). Similarly, for the risk of class i:

R(i) ≤ h · (1 − p(i)) + Σ_{k ∉ T_ij} C_{i,k} · p(k)

Outside the subtree T_ij we have C_{i,k} = C_{j,k} for all k, and therefore without loss of generality we can say that R(i) < R(j) if p(i) > 0.5.
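Theorem 1 is easy to check numerically. The snippet below (our own construction, not from the paper) builds the LCA-height cost matrix of a perfect binary hierarchy over eight leaves, where the LCA height of leaves i and j equals the position of their highest differing bit, and verifies that the CRM and argmax decisions agree whenever max p(y|x) > 0.5:

```python
import numpy as np

K = 8  # leaves of a perfect binary tree; LCA height from the highest differing bit
C = np.array([[(i ^ j).bit_length() for j in range(K)] for i in range(K)], float)

rng = np.random.default_rng(0)
for _ in range(10000):
    p = rng.dirichlet(np.ones(K) * 0.3)   # spiky distributions, so max p often > 0.5
    if p.max() > 0.5:                      # the regime covered by Theorem 1
        assert (C @ p).argmin() == p.argmax()
print("Theorem 1 held on all sampled distributions with max p > 0.5")
```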
The authors propose a model to improve the output distribution of neural nets in image classification problems. Their model is a post hoc procedure and is based on the tree structure of WordNet. The model revises the classifier output based on the distance of the labels in the tree. Intuitively, their solution is to pick the candidate label that is located in the region of the tree with a higher accumulated probability mass value. They also experimentally show that the previous evaluation metrics are inconclusive.
SP:e4e5b4e2bee43c920ed719dc331a370129845268
Toward Trainability of Quantum Neural Networks
1 INTRODUCTION.

Neural networks (Hecht-Nielsen, 1992) using gradient-based optimization have dramatically advanced research in discriminative models, generative models, and reinforcement learning. To efficiently utilize the parameters and practically improve trainability, neural networks with specific architectures (LeCun et al., 2015) have been introduced for different tasks, including convolutional neural networks (Krizhevsky et al., 2012) for image tasks, recurrent neural networks (Zaremba et al., 2014) for time-series analysis, and graph neural networks (Scarselli et al., 2008) for tasks related to graph-structured data. Recently, neural architecture search (Elsken et al., 2019) has been proposed to improve the performance of networks by optimizing the neural structures. Despite this success in many fields, the development of neural network algorithms can be limited by the large computational resources required for model training. In recent years, quantum computing has emerged as one solution to this problem and has evolved into a new interdisciplinary field known as quantum machine learning (QML) (Biamonte et al., 2017; Havlíček et al., 2019). Specifically, variational quantum circuits (Benedetti et al., 2019) have been explored as efficient protocols for quantum chemistry (Kandala et al., 2017) and combinatorial optimization (Zhou et al., 2018). Compared to classical circuit models, quantum circuits have shown greater expressive power (Du et al., 2020a) and have demonstrated quantum advantage in the low-depth case (Bravyi et al., 2018). Due to their robustness against noise, variational quantum circuits have attracted significant interest, in the hope of achieving quantum supremacy on near-term quantum computers (Arute et al., 2019). Quantum Neural Networks (QNNs) (Farhi & Neven, 2018; Schuld et al., 2020; Beer et al., 2020) are a special kind of quantum-classical hybrid algorithm that runs on trainable quantum circuits. Recently, small-scale QNNs have been implemented on real quantum computers (Havlíček et al., 2019) for supervised learning tasks. The training of QNNs aims to minimize an objective function f with respect to parameters θ. Inspired by classical optimization of neural networks, a natural strategy to train QNNs is to exploit the gradient of the loss function (Crooks, 2019). However, recent work (McClean et al., 2018) shows that n-qubit quantum circuits with random structures and large depth L = O(poly(n)) tend to be approximately unitary 2-designs (Harrow & Low, 2009), and the partial derivative vanishes to zero exponentially with respect to n. This vanishing-gradient problem is usually referred to as Barren Plateaus (McClean et al., 2018) and can affect the trainability of QNNs in two ways. First, simply using a gradient-based method like Stochastic Gradient Descent (SGD) to train the QNN takes a large number of iterations. Second, estimating the derivatives requires an extremely large number of samples from the quantum output to guarantee a relatively accurate update direction (Chen et al., 2018). To avoid the Barren Plateaus phenomenon, we explore QNNs with special structures to gain fruitful results. In this work, we introduce QNNs with special architectures, including the tree tensor (TT) structure (Huggins et al., 2019), referred to as TT-QNNs, and the step-controlled structure, referred to as SC-QNNs.
We prove that for TT-QNNs and SC-QNNs, the expectation of the gradient norm of the objective function is lower bounded.

Theorem 1.1 (Informal). Consider the n-qubit TT-QNN and the n-qubit SC-QNN defined in Figures 1-2 and the corresponding objective functions fTT and fSC defined in (3-4). Then we have:

(1 + log n) / (2n) · α(ρin) ≤ E_θ ‖∇θ fTT‖² ≤ 2n − 1,
(1 + nc) / 2^{1+nc} · α(ρin) ≤ E_θ ‖∇θ fSC‖² ≤ 2n − 1,

where nc is the number of CNOT operations that directly link to the first qubit channel in the SC-QNN, the expectation is taken over all parameters in θ with uniform distributions in [0, 2π], and α(ρin) ≥ 0 is a constant that only depends on the input state ρin ∈ C^{2^n × 2^n}. Moreover, by preparing ρin using the L-layer encoding circuit in Figure 4, the expectation of α(ρin) can be further lower bounded as E α(ρin) ≥ 2^{−2L}.

Compared to random QNNs with 2^{−O(poly(n))} derivatives, the gradient norm of TT-QNNs and SC-QNNs is greater than Ω(1/n) or Ω(2^{−nc}), which can lead to better trainability. Our contributions are summarized as follows:

• We prove Ω̃(1/n) and Ω̃(2^{−nc}) lower bounds on the expectation of the gradient norm of TT-QNNs and SC-QNNs, respectively, which guarantee trainability on related optimization problems. Our theorem does not require the unitary 2-design assumption made in existing works and is more realistic for near-term quantum computers.
• We prove that by employing the encoding circuit in Figure 4 to prepare ρin, the expectation of the term α(ρin) is lower bounded by the constant 2^{−2L}. Thus, we further lower bound the expectation of the gradient norm by a term independent of the input state.
• We simulate the performance of TT-QNNs, SC-QNNs, and random-structure QNNs on a binary classification task. All results verify the proposed theorems. Both TT-QNNs and SC-QNNs show better trainability and accuracy than random QNNs.

Our proof strategy could be adopted for analyzing QNNs with other architectures in future work. With the proven assurance on the trainability of TT-QNNs and SC-QNNs, we eliminate one bottleneck in front of the application of large-size Quantum Neural Networks.

The rest of this paper is organized as follows. We present the preliminaries, including definitions, basic quantum computing knowledge, and related works, in Section 2. The QNNs with special structures and the corresponding results are presented in Section 3. We implement binary classification using QNNs, with results shown in Section 4. We conclude in Section 5.

2 PRELIMINARY.

2.1 NOTATIONS AND BASIC QUANTUM COMPUTING.

We use [N] to denote the set {1, 2, · · · , N}. The form ‖·‖ denotes the ℓ2 norm for vectors. We denote aj as the j-th component of the vector a. The tensor product operation is denoted as "⊗". The conjugate transpose of a matrix A is denoted as A†. The trace of a matrix A is denoted as Tr[A]. We denote ∇θf as the gradient of the function f with respect to the vector θ. We employ the notations O and Õ to describe the standard complexity and the complexity ignoring minor terms, respectively.

Now we introduce basic quantum computing. The pure state of a qubit can be written as |φ⟩ = a|0⟩ + b|1⟩, where a, b ∈ C satisfy |a|² + |b|² = 1, and |0⟩ = (1, 0)^T, |1⟩ = (0, 1)^T. The n-qubit space is formed by the tensor product of n single-qubit spaces.
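As a quick illustration of these definitions, single-qubit states and their tensor products can be handled directly with numpy; this is our own toy snippet (not from the paper), using np.kron for the tensor product.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)   # |0>
ket1 = np.array([0, 1], dtype=complex)   # |1>

a, b = 0.6, 0.8j                         # |a|^2 + |b|^2 = 1
phi = a * ket0 + b * ket1                # a single-qubit pure state
assert np.isclose(np.linalg.norm(phi), 1.0)

# The 2-qubit space is the tensor product of two single-qubit spaces.
two_qubit = np.kron(phi, ket0)           # |phi> (x) |0>, a vector in C^4
assert two_qubit.shape == (4,)
```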
For a vector x ∈ R^{2^n}, the amplitude-encoded state |x⟩ is defined as (1/‖x‖) Σ_{j=1}^{2^n} xj |j⟩. The density matrix of a pure state is defined as ρ = |x⟩⟨x|, in which ⟨x| = (|x⟩)†. A single-qubit operation on the state behaves like a matrix-vector multiplication and can be referred to as a gate in the quantum-circuit language. Frequently used single-qubit operations include RX(θ) = e^{−iθX}, RY(θ) = e^{−iθY}, and RZ(θ) = e^{−iθZ}, where

X = [[0, 1], [1, 0]],  Y = [[0, −i], [i, 0]],  Z = [[1, 0], [0, −1]].

The Pauli matrices {I, X, Y, Z} will be referred to as {σ0, σ1, σ2, σ3} for convenience. Moreover, two-qubit operations, the CNOT gate and the CZ gate, are employed for generating quantum entanglement:

CNOT = |0⟩⟨0| ⊗ σ0 + |1⟩⟨1| ⊗ σ1,  CZ = |0⟩⟨0| ⊗ σ0 + |1⟩⟨1| ⊗ σ3.

We can obtain information from a quantum system by performing measurements; for example, measuring the state |φ⟩ = a|0⟩ + b|1⟩ generates 0 and 1 with probabilities p(0) = |a|² and p(1) = |b|², respectively. Such a measurement can be referred to mathematically as computing the average of the observable O = σ3 under the state |φ⟩:

⟨σ3⟩_{|φ⟩} ≡ ⟨φ|σ3|φ⟩ ≡ Tr[|φ⟩⟨φ| · σ3] = |a|² − |b|² = p(0) − p(1) = 2p(0) − 1.

The average of a unitary observable under an arbitrary state is bounded in [−1, 1].

2.2 RELATED WORKS.

The barren plateaus phenomenon in QNNs was first noticed by McClean et al. (2018). They prove that for n-qubit random quantum circuits with depth L = O(poly(n)), the expectation of the derivative of the objective function is zero, and the variance of the derivative vanishes to zero at a rate exponential in the number of qubits n. Later, Cerezo et al. (2020) proved that for L-depth quantum circuits consisting of 2-design gates, the gradient with local observables vanishes at the rate O(2^{−O(L)}). This result implies that in the low-depth case L = O(log n), the vanishing rate could be O(1/poly(n)), which is better than previous exponential results. Recently, some techniques have been proposed to address the barren plateaus problem, including a special initialization strategy (Grant et al., 2019) and a layerwise training method (Skolik et al., 2020). We remark that these techniques rely on the assumption of low-depth quantum circuits. Specifically, Grant et al. (2019) initialize parameters such that the initial quantum circuit is equivalent to an identity matrix (L = 0). Skolik et al. (2020) train parameters in subsets in each layer, so that a low-depth circuit is optimized during the training of each subset of parameters. Since random quantum circuits tend to be approximately unitary 2-designs as the circuit depth increases (Harrow & Low, 2009), and 2-design circuits lead to exponentially vanishing gradients (McClean et al., 2018), a natural idea is to consider circuits with special structures. On the other hand, tensor networks with hierarchical structures have shown an inherent relationship with classical neural networks (Liu et al., 2019; Hayashi et al., 2019). Recently, quantum classifiers using hierarchical-structure QNNs have been explored (Grant et al., 2018), including the tree tensor network and the multi-scale entanglement renormalization ansatz. Besides, QNNs with dissipative layers have shown the ability to avoid barren plateaus (Beer et al., 2020).
However , theoretical analysis of the trainability of QNNs with certain layer structures has been little explored ( Sharma et al. , 2020 ) . Also , the 2-design assumption in the existing theoretical analysis ( McClean et al. , 2018 ; Cerezo et al. , 2020 ; Sharma et al. , 2020 ) is hard to implement exactly on near-term quantum devices .
The design of a useful generalization of neural networks on quantum computers has been challenging because the gradient signal will decay exponentially with respect to the depth of the quantum circuit (saturating to exponentially small in system size after the depth is linear in system size). This work provides a detailed analysis of quantum neural networks with a tree structure that uses only a depth logarithmic in the system size. The authors show that the gradient signal will only be polynomially small with respect to the system size. The authors also provide empirical verification of the theoretical analysis showing a much larger gradient norm. However, the improvement in prediction accuracy (under early stopping) when using tree-structure quantum neural networks is not very significant. This is likely because the considered system size (8 qubits) is too small to fully demonstrate the exponential decay and the inability to train random quantum neural networks.
SP:7cc59c8f556d03597f7ab391ef14d1a96191a4db
Solving Min-Max Optimization with Hidden Structure via Gradient Descent Ascent
1 Introduction.

Traditionally, our understanding of convex-concave games revolves around von Neumann's celebrated minimax theorem, which implies the existence of saddle-point solutions with a uniquely defined value. These solutions are called von Neumann solutions and guarantee each agent their corresponding value regardless of opponent play. Although many learning algorithms are known to be able to compute such saddle points [13], recently there has been a fervor of activity in proving stronger results, such as faster regret-minimization rates or analysis of the day-to-day behavior [46, 17, 7, 1, 66, 19, 2, 45, 5, 25, 70, 29, 6, 48, 30, 56]. This interest has been largely triggered by the impressive successes of AI architectures inspired by min-max games, such as Generative Adversarial Networks (GANs) [26], adversarial training [40], and reinforcement-learning self-play in games [63]. Critically, however, all these applications are based upon non-convex non-concave games, our understanding of which is still nascent. Nevertheless, some important early work in the area has focused on identifying new solution concepts that are widely applicable in general min-max games, such as the (local/differential) Nash equilibrium [3, 41], local minmax [18], local minimax [31], (local/differential) Stackelberg equilibrium [24], and local robust point [69]. The plethora of solution concepts is perhaps suggestive that "solving" general min-max games unequivocally may be too ambitious a task. Attraction to spurious fixed points [18], cycles [65], robustly chaotic behavior [15, 16], and computational hardness issues [20] all suggest that general min-max games might inherently involve messy, unpredictable, and complex behavior.

Are there rich classes of non-convex non-concave games with an effectively unique game-theoretic solution that is selected by standard optimization dynamics (e.g., gradient descent)?

Our class of games. We define a general class of min-max optimization problems where each agent selects its own vector of parameters, which are then processed separately by smooth functions. Each agent receives their respective payoff after entering the outputs of the processed decision vectors as inputs to a standard convex-concave game. Formally, there exist functions F : R^N → X ⊂ R^n and G : R^M → Y ⊂ R^m and a continuous convex-concave function L : X × Y → R, such that the min-max game is

min_{θ∈R^N} max_{φ∈R^M} L(F(θ), G(φ)).  (Hidden Convex-Concave (HCC))

We call this class of min-max problems Hidden Convex-Concave Games. It generalizes the recently defined hidden bilinear games of [65].

Our solution concept. Out of all the local Nash equilibria of HCC games, there exists a special subclass: the vectors (θ*, φ*) that implement the von Neumann solution of the convex-concave game. This solution has a strong and intuitive game-theoretic justification. Indeed, it is stable even if the agents could perform arbitrary deviations directly on the output spaces X, Y. These parameter combinations (θ*, φ*) "solve" the "hidden" convex-concave game L, and thus we call them von Neumann solutions. Naturally, HCCs will typically have numerous local saddles/Nash equilibria/fixed points that do not satisfy this property. Instead, they correspond to stationary points of F, G where their output is stuck, e.g., due to an unfortunate initialization.
At these points the agents may be receiving payoffs that can be arbitrarily smaller/larger than the game-theoretic value of the game L. Fortunately, we show that Gradient Descent Ascent (GDA) strongly favors von Neumann solutions over generic fixed points.

Our results. In this work, we study the behavior of continuous GDA dynamics for the class of HCC games where each coordinate of F, G is controlled by disjoint sets of variables. In a nutshell, we show that GDA trajectories stabilize around or converge to the corresponding von Neumann solutions of the hidden game. Despite restricting our attention to a subset of HCC games, our analysis has to overcome unique hurdles not shared by standard convex-concave games.

Challenges of HCC games. In convex-concave games, deriving the stability of the von Neumann solutions relies on the Euclidean distance from the equilibrium being a Lyapunov function. In contrast, in HCC games, where optimization happens in the parameter space of θ, φ, the non-linear nature of F, G distorts the convex-concave landscape in the output space. Thus, the Euclidean distance will not in general be a Lyapunov function. Moreover, the existence of any Lyapunov function for the trajectories in the output space of F, G does not translate to a well-defined function in the parameter space (unless F, G are trivial, invertible maps). Worse yet, even if L has a unique solution in the output space, this solution could be implemented by multiple equilibria in the parameter space, and thus each of them cannot individually be globally attracting. Clearly, any transfer of stability or convergence properties from the output to the parameter space needs to be initialization-dependent. It is worth mentioning that a similar challenge, transferring results from the output to the input space, was also faced in the simpler class of hidden bilinear games. However, to sidestep this issue, [65] assume the restrictive requirement that F, G be invertible operators. Our results go beyond this simplified case, requiring new proof techniques. Specifically, we show how to combine the powerful technology of the Center-Stable Manifold Theorem, typically used to argue convergence to equilibria in non-convex optimization settings [34, 52, 54, 53, 35], with a novel Lyapunov function argument to prove that almost all initial conditions converge to our game-theoretic solution.

Lyapunov Stability. Our first step is to construct an initialization-dependent Lyapunov function that accounts for the curvature induced by the operators F and G (Lemma 2). Leveraging a potentially infinite number of initialization-dependent Lyapunov functions, in Theorem 5 we prove that under mild assumptions the outputs of F, G stabilize around the von Neumann solution of L.

Convergence. Mirroring convex-concave games, we require strict convexity or concavity of L to provide convergence guarantees to von Neumann solutions (Theorem 6). Barring initializations where von Neumann solutions are not reachable due to the limitations imposed by F and G, the set of von Neumann solutions is globally asymptotically stable (Corollary 1). Even in non-strict HCC games, we can add regularization terms to make L strictly convex-concave.
Small amounts of regularization allow for convergence without significantly perturbing the von Neumann solution (Theorem 7), while increasing the regularization enables exponentially faster convergence rates (Theorem 8). Similar to the aforementioned theoretical work, our model of HCC games provides a formal and theoretically tractable testbed for evaluating the performance of different training methods in GAN-inspired architectures. As a concrete example, [36] recently proved the success of WGAN training for learning the parameters of non-linearly transformed Gaussian distributions, where for simplicity they replaced the typical Lipschitz constraint on the discriminator function with a quadratic regularizer. Interestingly, we can elucidate why regularized learning is actually necessary by establishing a formal connection to HCC games. On top of other such ML applications, our game-theoretic framework can furthermore capture and generalize evolutionary game-theoretic models. [57] analyze a model of evolutionary competition between two species (host-parasite). The outcome of this competition depends on their respective phenotypes (informally, their properties, e.g., agility, camouflage, etc.). These phenotypes are encoded via functions that map input vectors (here, genotype/DNA sequences) to phenotypes. While [57] proved that learning in these games does not converge to equilibria and typically cycles for almost all initial conditions, we can explicitly construct initial conditions that do not satisfy our definition of safety and end up converging to artificial fixed points. Safety conditions aside, we show that a slight variation of the evolutionary/learning algorithm suffices to resolve the cycling issues and for the dynamics to equilibrate to the von Neumann solution. Hence, we provide the first instance of team zero-sum games [62], a notoriously hard generalization of zero-sum games with a large duality gap, that is solvable by decentralized dynamics.

Organization. In Section 2 we provide some preliminary notation, the definition of our model, and some useful technical lemmas. Section 3 is devoted to the presentation of our main results. Section 4 discusses applications of our framework to specific GAN formulations. Section 5 concludes our work with a discussion of future directions and challenges. We defer the full proofs of our results, as well as further discussion of applications, to the Appendix.

2 Preliminaries.

2.1 Notation.

Vectors are denoted in boldface x, y and, unless otherwise indicated, are considered column vectors. We use ‖·‖ to denote the ℓ2-norm. For a function f : R^d → R we use ∇f to denote its gradient. For functions of two vector arguments, f(x, y) : R^{d1} × R^{d2} → R, we use ∇_x f, ∇_y f to denote its partial gradients. For the time derivative we use the dot accent abbreviation, i.e., ẋ = d/dt [x(t)]. A function f belongs to C^r if it is r times continuously differentiable. Additionally, f ∘ g = f(g(·)) denotes the composition of f and g. Finally, the term "sigmoid" function refers to σ : R → R with σ(x) = (1 + e^{−x})^{−1}.

2.2 Hidden Convex Concave Games.
[Figure: Hidden Separable Zero-Sum Game model and optimization dynamics. Each player's parameter blocks θi = (θi1, ..., θini) and φj = (φj1, ..., φjmj) pass through the operators fi(θi) and gj(φj); the stacked outputs F(θ), G(φ) enter L(F(θ), G(φ)), and GDA follows θ̇i = −∇θi L(F(θ), G(φ)), φ̇j = ∇φj L(F(θ), G(φ)).]

We begin our discussion by defining the notion of convex-concave functions as well as strictly convex-concave functions. Note that our definition of strictly convex-concave functions is a superset of the strictly convex strictly concave functions usually studied in the literature.

Definition 1. L : R^n × R^m → R is convex-concave if for every y ∈ R^m the map L(·, y) is convex and for every x ∈ R^n the map L(x, ·) is concave. The function L will be called strictly convex-concave if it is convex-concave and for every (x, y) ∈ R^n × R^m either L(·, y) is strictly convex or L(x, ·) is strictly concave.

At the center of our definition of HCC games is a convex-concave utility function L. Additionally, each player of the game is equipped with a set of operator functions. The minimization player is equipped with n functions fi : R^{ni} → R, while the maximization player is equipped with m functions gj : R^{mj} → R. We assume in the rest of our discussion that fi, gj, L are all C² functions. The inputs θi ∈ R^{ni} and φj ∈ R^{mj} are grouped into two vectors:

θ = [θ1 · · · θn]^T,  F(θ) = [f1(θ1) · · · fn(θn)]^T,
φ = [φ1 · · · φm]^T,  G(φ) = [g1(φ1) · · · gm(φm)]^T.

We are now ready to define the hidden convex-concave game:

(θ*, φ*) = arg min_{θ∈R^N} arg max_{φ∈R^M} L(F(θ), G(φ)),

where N = Σ_{i=1}^n ni and M = Σ_{j=1}^m mj. Given a convex-concave function L, all stationary points of L are (global) Nash equilibria of the min-max game. We call the set of all equilibria of L the von Neumann solutions of L and denote them by Solution(L). Unfortunately, Solution(L) can be empty for games defined over the entire R^n × R^m. For games defined over convex compact sets, the existence of at least one solution is guaranteed by von Neumann's minimax theorem. Our definition of HCC games can capture games on restricted domains by choosing appropriately bounded functions fi and gj. In the following sections, we assume that Solution(L) is not empty. We note that our results hold for both bounded and unbounded fi and gj. We are now ready to write down the equations of the GDA dynamics for an HCC game:

θ̇i = −∇θi L(F(θ), G(φ)) = −∇θi fi(θi) · ∂L/∂fi (F(θ), G(φ))
φ̇j = ∇φj L(F(θ), G(φ)) = ∇φj gj(φj) · ∂L/∂gj (F(θ), G(φ))  (1)
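To see the dynamics of eq. (1) in action, here is a toy Euler discretization on a one-dimensional HCC game of our own choosing: f = g = tanh and L(u, v) = (u − 0.3)² − (v + 0.2)², which is strictly convex-concave with von Neumann solution (0.3, −0.2).

```python
import numpy as np

# Hidden game: u = f(theta) = tanh(theta), v = g(phi) = tanh(phi).
dL_du = lambda u, v: 2.0 * (u - 0.3)
dL_dv = lambda u, v: -2.0 * (v + 0.2)
f = np.tanh
df = lambda t: 1.0 - np.tanh(t) ** 2       # derivative of tanh

theta, phi, dt = 2.0, -1.5, 1e-2           # an arbitrary "safe" initialization
for _ in range(20000):                     # Euler discretization of eq. (1)
    u, v = f(theta), f(phi)
    theta -= dt * df(theta) * dL_du(u, v)  # gradient descent on theta
    phi   += dt * df(phi) * dL_dv(u, v)    # gradient ascent on phi

print(f(theta), f(phi))                    # approaches (0.3, -0.2)
```

With this initialization the outputs converge to the von Neumann solution; starting at a stationary point of tanh (unbounded |θ|) would instead get stuck at an artificial fixed point, illustrating why the guarantees are initialization-dependent.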
In this paper, the authors introduce a class of games called Hidden Convex-Concave, where a (strictly) convex-concave potential is composed with smooth maps. On this class of problems, they show that the continuous gradient dynamics converge to (a neighborhood of) the minimax solutions of the problem. This is an exploratory theoretical paper which aims at better capturing the behaviors that can be observed, e.g., in the training of GANs.
SP:8a8aa5f245c2fb82beddb19c82dddb8d67f66f8a
Predicting the Outputs of Finite Networks Trained with Noisy Gradients
1 INTRODUCTION . Deep neural networks ( DNNs ) have been rapidly advancing the state-of-the-art in machine learning , yet a complete analytic theory remains elusive . Recently , several exact results were obtained in the highly over-parameterized regime ( N →∞ where N denotes the width or number of channels for fully connected networks ( FCNs ) and convolutional neural networks ( CNNs ) , respectively ) ( Daniely et al. , 2016 ) . This facilitated the derivation of an exact correspondence with Gaussian Processes ( GPs ) known as the Neural Tangent Kernel ( NTK ) ( Jacot et al. , 2018 ) . The latter holds when highly over-parameterized DNNs are trained by gradient flow , namely with vanishing learning rate and involving no stochasticity . The NTK result has provided the first example of a DNN to GP correspondence valid after end-to-end DNN training . This theoretical breakthrough allows one to think of DNNs as inference problems with underlying GPs ( Rasmussen & Williams , 2005 ) . For instance , it provides a quantitative description of the generalization properties ( Cohen et al. , 2019 ; Rahaman et al. , 2018 ) and training dynamics ( Jacot et al. , 2018 ; Basri et al. , 2019 ) of DNNs . Roughly speaking , highly over-parameterized DNNs generalize well because they have a strong implicit bias to simple functions , and train well because low-error solutions in weight space can be reached by making a small change to the random values of the weights at initialization . Despite its novelty and importance , the NTK correspondence suffers from a few shortcomings : ( a ) Its deterministic training is qualitatively different from the stochastic one used in practice , which may lead to poorer performance when combined with a small learning rate ( Keskar et al. , 2016 ) . ( b ) It under-performs , often by a large margin , convolutional neural networks ( CNNs ) trained with SGD ( Arora et al. , 2019 ) . ( c ) Deriving explicit finite width corrections ( FWCs ) is challenging , as it requires solving a set of coupled ODEs ( Dyer & Gur-Ari , 2020 ; Huang & Yau , 2019 ) . Thus , there is a need for an extended theory for end-to-end trained deep networks which is valid for finite width DNNs . Our contribution is three-fold . First , we prove a correspondence between a DNN trained with noisy gradients and a Stochastic Process ( SP ) which at N → ∞ tends to the Neural Network Gaussian Process ( NNGP ) ( Lee et al. , 2018 ; Matthews et al. , 2018 ) . In these works , the NNGP kernel is determined by the distribution of the DNN weights at initialization which are i.i.d . random variables , whereas in our correspondence the weights are sampled across the stochastic training dynamics , drifting far away from their initial values . We call ours the NNSP correspondence , and show that it holds when the training dynamics in output space exhibit ergodicity . Second , we predict the outputs of trained finite-width DNNs , significantly improving upon the corresponding GP predictions . This is done by deriving leading FWCs which are found to scale with width as 1/N . The accuracy at which we can predict the empirical DNNs ’ outputs serves as a strong verification for our aforementioned ergodicity assumption . In the regime where the GP RMSE error scales as 1/n , we find that the leading FWC are a decaying function of n , and thus overall negligible . In the small n regime we find that the FWC is small and grows with n. 
We thus conclude that finite-width corrections are important for intermediate values of $n$ (Fig. 1). Third, we propose an explanation for why finite CNNs trained on image classification tasks can outperform their infinite-width counterparts, as observed by Novak et al. (2018). The key difference is that in finite CNNs weight sharing is beneficial. Our theory, which accounts for the finite width, quantifies this difference (§4.2). Overall, the NNSP correspondence provides a rich analytical and numerical framework for exploring the theory of deep learning, unique in its ability to incorporate finite over-parameterization, stochasticity, and depth. We note that there are several factors that make finite SGD-trained DNNs used in practice different from their GP counterparts, e.g., large learning rates, early stopping, etc. (Lee et al., 2020). Importantly, our framework quantifies the contribution of finite-width effects to this difference, distilling it from the contribution of these other factors. 1.1 RELATED WORK. The idea of leveraging the dynamics of the gradient descent algorithm for approximating Bayesian inference has been considered in various works (Welling & Teh, 2011; Mandt et al., 2017; Teh et al., 2016; Maddox et al., 2019; Ye et al., 2017). However, to the best of our knowledge, a correspondence with a concrete SP or a non-parametric model was not established, nor was a comparison made of the DNN's outputs with analytical predictions. Finite width corrections were studied recently in the context of the NTK correspondence by several authors. Hanin & Nica (2019) study the NTK of finite DNNs, but where the depth scales together with the width, whereas we keep the depth fixed. Dyer & Gur-Ari (2020) obtained a finite-$N$ correction to the linear integral equation governing the evolution of the predictions on the training set. Our work differs in several aspects: (a) We describe a different correspondence under a different training protocol with qualitatively different behavior. (b) We derive relatively simple formulae for the outputs which become entirely explicit at large $n$. (c) We account for all sources of finite-$N$ corrections, whereas finite-$N$ NTK randomness remained an empirical source of corrections not accounted for by Dyer & Gur-Ari (2020). (d) Our formalism differs considerably: its statistical mechanical nature enables one to import various standard tools for treating randomness and ergodicity breaking, and for taking into account non-perturbative effects. (e) We have no smoothness limitation on our activation functions and provide FWCs at a generic data point, not just on the training set. Another recent paper (Yaida, 2020) studied Bayesian inference with weakly non-Gaussian priors induced by finite-$N$ DNNs. Unlike here, there was no attempt to establish a correspondence with trained DNNs. The formulation presented here has the conceptual advantage of representing a distribution over function space for arbitrary training and test data, rather than over specific draws of data sets. This is useful for studying the large-$n$ behavior of learning curves, where analytical insights into generalization can be gained (Cohen et al., 2019). A somewhat related line of work studied the mean field regime of shallow NNs (Mei et al., 2018; Chen et al., 2020; Tzen & Raginsky, 2020). We point out the main differences from our work: (a) The NN output is scaled differently with width.
(b) In the mean field regime one is interested in the dynamics (finite $t$) of the distribution over the NN parameters, in the form of a PDE of the Fokker-Planck type. In contrast, in our framework we are interested in the distribution over function space at equilibrium, i.e., for $t \to \infty$. (c) The mean field analysis appears tailored to two-layer fully-connected NNs and is hard to generalize to deeper nets or to CNNs. In contrast, our formalism generalizes to deeper fully-connected NNs and to CNNs as well, as we show in Section 4.2. 2 THE NNSP CORRESPONDENCE. In this section we show that finite-width DNNs, trained in a specific manner, correspond to Bayesian inference using a non-parametric model which tends to the NNGP as $N \to \infty$. We first give a short review of Langevin dynamics in weight space as described by Neal et al. (2011) and Welling & Teh (2011), which we use to generate samples from the posterior over weights. We then shift our perspective and consider the corresponding distribution over functions induced by the DNN, which characterizes the non-parametric model. Recap of Langevin-type dynamics. Consider a DNN trained with full-batch gradient descent while injecting white Gaussian noise and including a weight decay term, so that the discrete-time dynamics of the weights read
$$\Delta w_t := w_{t+1} - w_t = -\left(\gamma w_t + \nabla_w L(z_w)\right) dt + \sqrt{2T\,dt}\,\xi_t \qquad (1)$$
where $w_t$ is the vector of all network weights at time step $t$, $\gamma$ is the strength of the weight decay, $L(z_w)$ is the loss as a function of the output $z_w$, $T$ is the temperature (the magnitude of the noise), $dt$ is the learning rate, and $\xi_t \sim \mathcal{N}(0, I)$. As $dt \to 0$ these discrete-time dynamics converge to the continuous-time Langevin equation $\dot{w}(t) = -\nabla_w\left(\frac{\gamma}{2}\|w(t)\|^2 + L(z_w)\right) + \sqrt{2T}\,\xi(t)$ with $\langle \xi_i(t)\,\xi_j(t') \rangle = \delta_{ij}\,\delta(t - t')$, so that as $t \to \infty$ the weights are sampled from the equilibrium distribution in weight space, given by (Risken & Frank, 1996)
$$P(w) \propto \exp\left(-\frac{1}{T}\left(\frac{\gamma}{2}\|w\|^2 + L(z_w)\right)\right) = \exp\left(-\left(\frac{1}{2\sigma_w^2}\|w\|^2 + \frac{1}{2\sigma^2}L(z_w)\right)\right) \qquad (2)$$
The above equality holds since the equilibrium distribution of the Langevin dynamics is also the posterior distribution of a Bayesian neural network (BNN) with an i.i.d. Gaussian prior on the weights, $w \sim \mathcal{N}(0, \sigma_w^2 I)$. Thus we can map the hyper-parameters of the training to those of the BNN: $\sigma_w^2 = T/\gamma$ and $\sigma^2 = T/2$. Notice that a sensible scaling for the weight variance at layer $\ell$ is $\sigma_{w,\ell}^2 \sim O(1/N_{\ell-1})$; thus the weight decay needs to scale as $\gamma_\ell \sim O(N_{\ell-1})$. A transition from weight space to function space. We aim to move from a distribution over weight space, Eq. 2, to one over function space. Namely, we consider the distribution of $z_w(x)$ implied by the above $P(w)$, where for concreteness we consider a DNN with a single scalar output $z_w(x) \in \mathbb{R}$ on a regression task with data $\{(x_\alpha, y_\alpha)\}_{\alpha=1}^{n} \subset \mathbb{R}^d \times \mathbb{R}$. Denoting by $P[f]$ the induced measure on function space, we formally write
$$P[f] = \int dw\, \delta[f - z_w]\, P(w) \propto e^{-\frac{1}{2\sigma^2}L[f]} \int dw\, e^{-\frac{1}{2\sigma_w^2}\|w\|^2}\, \delta[f - z_w] \qquad (3)$$
where $\int dw$ denotes an integral over all weights and $\delta[f - z_w]$ denotes a delta function in function space.
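Before unpacking the delta function in Eq. 3, it may help to see the training protocol of Eq. 1 spelled out. The following is a minimal sketch of one noisy-gradient update in NumPy; the gradient function `grad_loss`, the toy quadratic loss in the usage example, and all hyper-parameter values are illustrative placeholders, not the paper's actual experimental settings.

```python
import numpy as np

def langevin_step(w, grad_loss, gamma, T, dt, rng):
    """One discrete-time update of Eq. 1: weight decay plus the loss gradient,
    with injected white Gaussian noise of magnitude sqrt(2*T*dt)."""
    xi = rng.standard_normal(w.shape)
    return w - (gamma * w + grad_loss(w)) * dt + np.sqrt(2.0 * T * dt) * xi

# Example usage on a toy quadratic loss L(w) = 0.5 * ||w||^2:
rng = np.random.default_rng(0)
w = rng.standard_normal(10)
for _ in range(10_000):
    w = langevin_step(w, grad_loss=lambda v: v, gamma=0.1, T=0.01, dt=1e-2, rng=rng)
```

Run to equilibrium, the iterates of `w` are distributed according to Eq. 2, which is what lets one trade the training dynamics for a Bayesian posterior over weights.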
As is common in the path-integral or field-theory formalism (Schulman, 2012), such a delta function is understood via a limiting procedure: one chooses a suitable basis for function space, trims it to a finite subset, treats $\delta[f - z_w]$ as a product of regular delta functions, and at the end of the computation takes the size of the subset to infinity. To proceed, we decompose the posterior over functions, Eq. 3, as $P[f] \propto e^{-\frac{1}{2\sigma^2}L[f]} P_0[f]$, where the prior over functions is $P_0[f] \propto \int dw\, e^{-\frac{1}{2\sigma_w^2}\|w\|^2}\, \delta[f - z_w]$. The integration over weights now obtains a clear meaning: it yields the distribution over functions induced by a DNN with i.i.d. random weights chosen according to the prior $P_0(w) \propto e^{-\frac{1}{2\sigma_w^2}\|w\|^2}$. Thus, we can relate any correlation function in function space to one in weight space; for instance ($\mathcal{D}f$ is the integration measure over function space)
$$\int \mathcal{D}f\, P_0[f]\, f(x) f(x') = \int \mathcal{D}f \int dw\, P_0(w)\, \delta[f - z_w]\, f(x) f(x') = \int dw\, P_0(w)\, z_w(x)\, z_w(x') \qquad (4)$$
As noted by Cho & Saul (2009), for highly over-parameterized DNNs the r.h.s. of Eq. 4 equals the kernel of the NNGP associated with this DNN, $K(x, x')$. Moreover, $P_0[f]$ tends to a Gaussian and can be written as
$$P_0[f] \propto \exp\left(-\frac{1}{2}\int d\mu(x)\, d\mu(x')\, f(x)\, K^{-1}(x, x')\, f(x')\right) + O(1/N) \qquad (5)$$
where $\mu(x)$ is the measure of the input space, and the $O(1/N)$ scaling of the finite-$N$ correction will be explained in §3. If we now plug Eq. 5 into Eq. 3, take the loss to be the total square error $L[f] = \sum_{\alpha=1}^{n} (y_\alpha - f(x_\alpha))^2$, and take $N \to \infty$, we find that the posterior $P[f]$ is that of a GP. Assuming ergodicity, one finds that the training-time-averaged output of the DNN is given by the posterior mean of a GP, with measurement noise equal to $\sigma^2 = T/2$ and a kernel given by the NNGP of that DNN. We refer to the above expressions for $P_0[f]$ and $P[f]$, describing the distribution of outputs of a DNN trained according to our protocol, as the NNSP correspondence. Unlike the NTK correspondence, the kernel which appears here is different, and no additional initialization-dependent terms appear (as should be the case, since we assumed ergodicity). Furthermore, given knowledge of $P_0[f]$ at finite $N$, one can predict the DNN's outputs at finite $N$. Henceforth, we refer to $P_0[f]$ as the prior distribution, as it is the prior distribution of a DNN with random weights drawn from $P_0(w)$. Evidence supporting ergodicity. Our derivation relies on the ergodicity of the dynamics. Ergodicity is in general hard to prove rigorously in non-convex settings, and thus we must resort to heuristics. The most robust evidence of ergodicity in function space is the high level of accuracy of our analytical expressions w.r.t. our numerical results. This is a self-consistency argument: we assume ergodicity in order to derive our analytical results and then indeed find that they agree very well with experiment, thus validating our original assumption. Another indicator of ergodicity is a small auto-correlation time (ACT) of the dynamics. Short ACT does not logically imply ergodicity (in fact, the converse is true: exponentially long ACT implies non-ergodic dynamics); however, the empirical ACT gives a lower bound on the true correlation time of the dynamics. In our framework, it is sufficient that the dynamics of the outputs $z_w$ be ergodic, even if the dynamics of the weights converge much more slowly to an equilibrium distribution.
Indeed, we have found that the ACTs of the outputs are considerably smaller than those of the weights (see Fig. 2b). Full ergodicity may be too strong a condition, and we do not really need it for our purposes, since we are mainly interested in collecting statistics that allow us to accurately compute the posterior mean of the distribution in function space. Thus, a weaker condition which is sufficient here is ergodicity in the mean (see App. F), and we believe our self-consistency argument above demonstrates that it holds. In a related manner, optimizing the train loss can be seen as an attempt to satisfy $n$ constraints using far more variables (roughly $M \cdot N^2$, where $M$ is the number of layers). From a different angle, in a statistical mechanical description of satisfiability problems, one typically expects ergodic behavior when the ratio of the number of variables to the number of constraints becomes much larger than one (Gardner & Derrida, 1988).
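As a numerical check of the correspondence, one can compare the training-time-averaged outputs against the GP posterior mean with measurement noise $\sigma^2 = T/2$, and estimate the ACT of the recorded output series. Below is a hedged sketch of both computations; the kernel matrices and the recorded series are assumed to come from the experiment, and this is a generic implementation rather than the paper's exact code.

```python
import numpy as np

def gp_posterior_mean(K_train, K_star, y, T):
    """GP posterior mean k_*^T (K + sigma^2 I)^{-1} y with sigma^2 = T/2,
    the N -> infinity prediction of the NNSP correspondence."""
    sigma2 = T / 2.0
    alpha = np.linalg.solve(K_train + sigma2 * np.eye(len(y)), y)
    return K_star @ alpha

def autocorrelation(x):
    """Normalized autocorrelation of a scalar series (e.g. one output z_w(x)
    recorded along training); the lag at which it decays gives an empirical ACT."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    return acf / acf[0]
```

Comparing `autocorrelation` curves for outputs versus individual weights is one way to reproduce the qualitative gap reported in Fig. 2b.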
This paper shows a correspondence between deep neural networks (DNNs) trained with noisy gradients and the NNGP. It provides a general analytical form for the finite width correction (FWC) for the NNSP, expanding around the NNGP. Finally, it argues that this FWC can be used to explain why finite-width CNNs can improve performance relative to their GP counterparts on image classification tasks.
ON NEURAL NETWORK GENERALIZATION VIA PROMOTING WITHIN-LAYER ACTIVATION DIVERSITY
1 INTRODUCTION. Neural networks are a powerful class of non-linear function approximators that have been successfully used to tackle a wide range of problems. They have enabled breakthroughs in many tasks, such as image classification (Krizhevsky et al., 2012), speech recognition (Hinton et al., 2012a), and anomaly detection (Golan & El-Yaniv, 2018). Formally, the output of a neural network consisting of $P$ layers can be defined as follows:
$$f(x; W) = \phi_P\left(W_P\, \phi_{P-1}\left(\cdots \phi_2\left(W_2\, \phi_1(W_1 x)\right)\right)\right), \qquad (1)$$
where $\phi_i(\cdot)$ is the element-wise activation function, e.g., ReLU or Sigmoid, of the $i$th layer, and $W = \{W_1, \ldots, W_P\}$ are the corresponding weights of the network. The parameters of $f(x; W)$ are optimized by minimizing the empirical loss:
$$\hat{L}(f) = \frac{1}{N}\sum_{i=1}^{N} l\left(f(x_i; W), y_i\right), \qquad (2)$$
where $l(\cdot)$ is the loss function and $\{x_i, y_i\}_{i=1}^{N}$ are the training samples with their associated ground-truth labels. The loss is minimized using gradient descent-based optimization coupled with backpropagation. However, neural networks are often over-parameterized, i.e., have more parameters than data. As a result, they tend to overfit to the training samples and not generalize well on unseen examples (Goodfellow et al., 2016). While research on double descent (Belkin et al., 2019; Advani et al., 2020; Nakkiran et al., 2020) shows that over-parameterization does not necessarily lead to overfitting, avoiding overfitting has been extensively studied (Neyshabur et al., 2018; Nagarajan & Kolter, 2019; Poggio et al., 2017), and various approaches and strategies have been proposed, such as data augmentation (Goodfellow et al., 2016), regularization (Kukačka et al., 2017; Bietti et al., 2019; Arora et al., 2019), and dropout (Hinton et al., 2012b; Wang et al., 2019; Lee et al., 2019; Li et al., 2016), to close the gap between the empirical loss and the expected loss. Diversity of learners is widely known to be important in ensemble learning (Li et al., 2012; Yu et al., 2011) and, particularly in the deep learning context, diversity of the information extracted by the network neurons has been recognized as a viable way to improve generalization (Xie et al., 2017a; 2015b). In most cases, these efforts have focused on making the set of weights more diverse (Yang et al.; Malkin & Bilmes, 2009). However, diversity of the activations has not received much attention. Inspired by dropout's motivation of preventing the co-adaptation of neuron activations, Cogswell et al. (2016) proposed to regularize the activations of the network. An additional loss using the cross-covariance of hidden activations was proposed, which encourages the neurons to learn diverse or non-redundant representations. The proposed approach, known as DeCov, has empirically been shown to alleviate overfitting and to improve the generalization ability of neural networks, yet a theoretical analysis proving this has so far been lacking. In this work, we propose a novel approach to encourage activation diversity within the same layer. We propose complementing 'between-layer' feedback with additional 'within-layer' feedback to penalize similarities between neurons on the same layer. Thus, we encourage each neuron to learn a distinctive representation and to enrich the data representation learned within each layer. Moreover, inspired by Xie et al.
(2015b), we provide a theoretical analysis showing that within-layer activation diversity boosts the generalization performance of neural networks and reduces overfitting. Our contributions in this paper are as follows: • Methodologically, we propose a new approach to encourage the 'diversification' of the layer-wise feature map outputs in neural networks. The proposed approach has three variants based on how the global diversity is defined. The main intuition is that by promoting within-layer activation diversity, neurons within the same layer learn distinct patterns and, thus, increase the overall capacity of the model. • Theoretically, we analyse the effect of within-layer activation diversity on the generalization error bound of neural networks. The analysis is presented in Section 3. As shown in Theorems 3.7, 3.8, 3.9, 3.10, 3.11, and 3.12, we express the upper bound of the estimation error as a function of the diversity factor. Thus, we provide theoretical evidence that within-layer activation diversity can help reduce the generalization error. • Empirically, we show that within-layer activation diversity boosts the performance of neural networks. Experimental results show that the proposed approach outperforms the competing methods. 2 WITHIN-LAYER ACTIVATION DIVERSITY. We propose a diversification strategy, where we encourage neurons within a layer to activate in a mutually different manner, i.e., to capture different patterns. To this end, we propose an additional within-layer loss which penalizes neurons that activate similarly. The loss function $\hat{L}(f)$ defined in Equation 2 is augmented as follows:
$$\hat{L}_{aug}(f) = \hat{L}(f) + \lambda \sum_{i=1}^{P} J^i, \qquad (3)$$
where $J^i$ expresses the overall pair-wise similarity of the neurons within the $i$th layer and $\lambda$ is the penalty coefficient for the diversity loss. As in (Cogswell et al., 2016), our proposed diversity loss can be applied to a single layer or to multiple layers of a network. For simplicity, let us focus on a single layer. Let $\phi_n^i(x_j)$ and $\phi_m^i(x_j)$ be the outputs of the $n$th and $m$th neurons in the $i$th layer for the same input sample $x_j$. The similarity $s_{nm}$ between the $n$th and $m$th neurons can be obtained as the average similarity measure of their outputs over $N$ input samples. We use the radial basis function to express the similarity:
$$s_{nm} = \frac{1}{N}\sum_{j=1}^{N} \exp\left(-\gamma \|\phi_n^i(x_j) - \phi_m^i(x_j)\|^2\right), \qquad (4)$$
where $\gamma$ is a hyper-parameter. The similarity $s_{nm}$ can be computed over the whole dataset or batch-wise. Intuitively, if two neurons $n$ and $m$ have similar outputs for many samples, their corresponding similarity $s_{nm}$ will be high. Otherwise, their similarity $s_{nm}$ is small and they are considered "diverse". Based on these pair-wise similarities, we propose three variants for the global diversity loss $J^i$ of the $i$th layer: • Direct: $J^i = \sum_{n \neq m} s_{nm}$. In this variant, we model the global layer similarity directly as the sum of the pairwise similarities between the neurons. By minimizing their sum, we encourage the neurons to learn different representations. • Det: $J^i = -\det(S)$, where $S$ is the similarity matrix defined by $S_{nm} = s_{nm}$. This variant is inspired by the Determinantal Point Process (DPP) (Kulesza & Taskar, 2010; 2012), as the determinant of $S$ measures the global diversity of the set. Geometrically, $\det(S)$ is the volume of the parallelepiped formed by the vectors in the feature space associated with $S$.
Vectors that result in a larger volume are considered to be more "diverse". Thus, maximizing $\det(\cdot)$ (minimizing $-\det(\cdot)$) encourages the diversity of the learned features. • Logdet: $J^i = -\operatorname{logdet}(S)$.¹ This variant has the same motivation as the second one. We use logdet instead of det because logdet is a convex function over the positive definite matrix space. It should be noted here that the first proposed variant, i.e., Direct, similar to DeCov (Cogswell et al., 2016), captures only the pairwise diversity between components and is unable to capture higher-order "diversity", whereas the other two variants consider the global similarity and are able to measure diversity in a more global manner. Our newly proposed loss function defined in Equation 3 has two terms. The first term is the classic loss function. It computes the loss with respect to the ground truth. In back-propagation, this feedback is propagated from the last layer to the first layer of the network. Thus, it can be considered between-layer feedback, whereas the second term is computed within a layer. From Equation 3, we can see that our proposed approach can be interpreted as a regularization scheme. However, regularization in deep learning is usually applied directly to the parameters, i.e., the weights (Goodfellow et al., 2016; Kukačka et al., 2017), while in our approach, similar to (Cogswell et al., 2016), an additional term is defined over the output maps of the layers. For a layer with $C$ neurons and a batch size of $N$, the additional computational cost is $O(C^2(N+1))$ for the Direct variant and $O(C^3 + C^2 N)$ for both the determinant and log-determinant variants. 3 GENERALIZATION ERROR ANALYSIS. In this section, we analyze how the proposed within-layer diversity regularizer affects the generalization error of a neural network. Generalization theory (Zhang et al., 2017; Kawaguchi et al., 2017) focuses on the relation between the empirical loss, as defined in Equation 2, and the expected risk, defined as follows:
$$L(f) = \mathbb{E}_{(x,y)\sim Q}\left[l(f(x), y)\right], \qquad (5)$$
where $Q$ is the underlying distribution of the dataset. Let $f^* = \arg\min_f L(f)$ be the expected risk minimizer and $\hat{f} = \arg\min_f \hat{L}(f)$ be the empirical risk minimizer. We are interested in the estimation error, i.e., $L(\hat{f}) - L(f^*)$, defined as the gap in the loss between both minimizers (Barron, 1994). The estimation error represents how well an algorithm can learn. It usually depends on the complexity of the hypothesis class and on the number of training samples (Barron, 1993; Zhai & Wang, 2018). ¹This is defined only if $S$ is positive definite. It can be shown that in our case $S$ is positive semi-definite. Thus, in practice we use a regularized version $(S + I)$ to ensure positive definiteness. Several techniques have been used to quantify the estimation error, such as PAC learning (Hanneke, 2016; Arora et al., 2018), the VC dimension (Sontag, 1998; Harvey et al., 2017; Bartlett et al., 2019), and the Rademacher complexity (Xie et al., 2015b; Zhai & Wang, 2018; Tang et al., 2020). The Rademacher complexity has been widely used as it usually leads to a tighter generalization error bound (Sokolic et al., 2016; Neyshabur et al., 2018; Golowich et al., 2018). The formal definition of the empirical Rademacher complexity is given as follows: Definition 3.1.
(Bartlett & Mendelson, 2002) For a given dataset with $N$ samples $D = \{x_i, y_i\}_{i=1}^{N}$ generated by a distribution $Q$, and for a model space $F: X \to \mathbb{R}$ with a single-dimensional output, the empirical Rademacher complexity $R_N(F)$ of the set $F$ is defined as follows:
$$R_N(F) = \mathbb{E}_\sigma\left[\sup_{f \in F} \frac{1}{N}\sum_{i=1}^{N} \sigma_i f(x_i)\right], \qquad (6)$$
where the Rademacher variables $\sigma = \{\sigma_1, \cdots, \sigma_N\}$ are independent uniform random variables in $\{-1, 1\}$. In this work, we analyse the estimation error bound of a neural network using the Rademacher complexity, and we are interested in the effect of within-layer diversity on the estimation error. In order to study this effect, inspired by (Xie et al., 2015b), we assume that with a high probability $\tau$, the distance between the outputs of each pair of neurons, $(\phi_n(x) - \phi_m(x))^2$, is lower bounded by $d_{min}$ for any input $x$. Note that this condition can be expressed in terms of the similarity $s$ defined in Equation 4: $s_{nm} \leq e^{-\gamma d_{min}} = s_{min}$ for any two distinct neurons, with probability $\tau$. Our analysis starts with the following lemma: Lemma 3.2. (Xie et al., 2015b; Bartlett & Mendelson, 2002) With probability at least $1 - \delta$,
$$L(\hat{f}) - L(f^*) \leq 4 R_N(A) + B\sqrt{\frac{2\log(2/\delta)}{N}} \qquad (7)$$
for $B \geq \sup_{x,y,f} |l(f(x), y)|$, where $R_N(A)$ is the Rademacher complexity of the loss set $A$. It upper-bounds the estimation error using the Rademacher complexity defined over the loss set and $\sup_{x,y,f} |l(f(x), y)|$. Our analysis continues by seeking a tighter upper bound on this error and showing how the within-layer diversity, expressed through $d_{min}$, affects this upper bound. We start by deriving such an upper bound for a simple network with one hidden layer trained for a regression task, and then we extend it to a general multi-layer network and to different losses.
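Returning to the three diversity losses of Section 2, the following is a minimal NumPy sketch of Eq. 4 and the Direct/Det/Logdet variants; the array shapes are illustrative assumptions (scalar activations per neuron), and the $(S + I)$ regularization follows the paper's footnote.

```python
import numpy as np

def similarity_matrix(phi, gamma):
    """Eq. 4: S[n, m] = mean_j exp(-gamma * (phi_n(x_j) - phi_m(x_j))^2).
    phi has shape (N, C): activations of C neurons on N samples."""
    diff = phi[:, :, None] - phi[:, None, :]          # (N, C, C)
    return np.exp(-gamma * diff ** 2).mean(axis=0)    # (C, C)

def diversity_loss(phi, gamma, variant="logdet"):
    S = similarity_matrix(phi, gamma)
    if variant == "direct":
        return S.sum() - np.trace(S)                  # sum over n != m
    if variant == "det":
        return -np.linalg.det(S)
    # "logdet", computed on the regularized matrix S + I (see footnote 1):
    sign, logdet = np.linalg.slogdet(S + np.eye(S.shape[0]))
    return -logdet
```

A Monte Carlo estimate of the empirical Rademacher complexity of Definition 3.1 can likewise be sketched for a finite proxy set of predictors; the `preds` matrix is a hypothetical stand-in for evaluations of candidate functions from $F$ on the dataset.

```python
def empirical_rademacher(preds, n_draws, rng):
    """preds has shape (num_functions, N): f(x_i) for each f in a finite
    stand-in for F; returns a Monte Carlo estimate of Eq. 6."""
    _, N = preds.shape
    draws = rng.choice([-1.0, 1.0], size=(n_draws, N))
    return float(np.mean((draws @ preds.T / N).max(axis=1)))
```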
This paper proposes adding regularization terms that encourage diversity of the layer outputs in order to improve generalization performance. The proposed idea is an extension of Cogswell's work with different regularization terms. In addition, the authors perform a detailed generalization analysis based on the Rademacher complexity. The appearance of a term related to layer-output diversity in the generalization bound provides theoretical support for the proposed idea.
Neural ODE Processes
1 INTRODUCTION. Many time-series that arise in the natural world, such as the state of a harmonic oscillator, the populations in an ecological network, or the spread of a disease, are the product of some underlying dynamics. Sometimes, as in the case of a video of a swinging pendulum, these dynamics are latent and do not manifest directly in the observation space. Neural Ordinary Differential Equations (NODEs) (Chen et al., 2018), which use a neural network to parametrise the derivative of an ODE, have become a natural choice for capturing the dynamics of such time-series (Çağatay Yıldız et al., 2019; Rubanova et al., 2019; Norcliffe et al., 2020; Kidger et al., 2020; Morrill et al., 2020). However, despite their fundamental connection to dynamics-governed time-series, NODEs present certain limitations that hinder their adoption in these settings. Firstly, NODEs cannot adjust predictions as more data is collected without retraining the model. This ability is particularly important for real-time applications, where it is desirable that models adapt to incoming data points as time passes and more data is collected. Secondly, without a large number of regularly spaced measurements, there is usually a range of plausible underlying dynamics that can explain the data. However, NODEs do not capture this uncertainty in the dynamics. As many real-world time-series are comprised of sparse sets of measurements, often irregularly sampled, the model can fail to represent the diversity of suitable solutions. In contrast, the Neural Process (NP) family (Garnelo et al., 2018a;b) offers a class of (neural) stochastic processes designed for uncertainty estimation and fast adaptation to changes in the observed data. However, NPs modelling time-indexed random functions lack an explicit treatment of time. Designed for the general case of an arbitrary input domain, they treat time as an unordered set and do not explicitly consider the time delay between different observations. To address these limitations, we introduce Neural ODE Processes (NDPs), a new class of stochastic processes governed by stochastic data-adaptive dynamics. Our probabilistic Neural ODE formulation relies on and extends the framework provided by NPs, and runs parallel to other attempts to incorporate application-specific inductive biases in this class of models, such as Attentive NPs (Kim et al., 2019), ConvCNPs (Gordon et al., 2019), and MPNPs (Day et al., 2020). We demonstrate that NDPs can adaptively capture many potential dynamics of low-dimensional systems when faced with limited amounts of data. Additionally, we show that our approach scales to high-dimensional time series with latent dynamics, such as rotating MNIST digits (Casale et al., 2018). Our code and datasets are available at https://github.com/crisbodnar/ndp. 2 BACKGROUND AND FORMAL PROBLEM STATEMENT. Problem Statement. We consider modelling random functions $F: T \to Y$, where $T = [t_0, \infty)$ represents time and $Y \subset \mathbb{R}^d$ is a compact subset of $\mathbb{R}^d$. We assume $F$ has a distribution $D$, induced by another distribution $D'$ over some underlying dynamics that govern the time-series. Given a specific instantiation $F$ of the random function, let $C = \{(t_i^C, y_i^C)\}_{i \in I_C}$ be a set of samples from $F$ with some indexing set $I_C$. We refer to $C$ as the context points, as denoted by the superscript $C$.
For a given context $C$, the task is to predict the values $\{y_j^T\}_{j \in I_T}$ that $F$ takes at a set of target times $\{t_j^T\}_{j \in I_T}$, where $I_T$ is another index set. We call $T = \{(t_j^T, y_j^T)\}$ the target set. Additionally, let $t^C = \{t_i \mid i \in I_C\}$ and similarly define $y^C$, $t^T$, and $y^T$. Conventionally, as in Garnelo et al. (2018b), the target set forms a superset of the context set, and we have $C \subseteq T$. Optionally, it might also be natural to consider that the initial time and observation $(t_0, y_0)$ are always included in $C$. During training, we let the model learn from a dataset of (potentially irregular) time-series sampled from the random function. We are interested in learning the underlying distribution over the dynamics as well as the induced distribution over functions. We note that when the dynamics are not latent and manifest directly in the observation space $Y$, the distribution over ODE trajectories and the distribution over functions coincide. Neural ODEs. NODEs are a class of models that parametrize the velocity $\dot{z}$ of a state $z$ with the help of a neural network, $\dot{z} = f_\theta(z, t)$. Given the initial time $t_0$ and a target time $t_i^T$, NODEs predict the corresponding state $\hat{y}_i^T$ by performing the following integration and decoding operations:
$$z(t_0) = h_1(y_0), \qquad z(t_i^T) = z(t_0) + \int_{t_0}^{t_i^T} f_\theta(z(t), t)\, dt, \qquad \hat{y}_i^T = h_2(z(t_i^T)), \qquad (1)$$
where $h_1$ and $h_2$ can be neural networks. When the dimensionality of $z$ is greater than that of $y$ and $h_1, h_2$ are linear, the resulting model is an Augmented Neural ODE (Dupont et al., 2019) with input-layer augmentation (Massaroli et al., 2020). The extra dimensions offer the model additional flexibility as well as the ability to learn higher-order dynamics (Norcliffe et al., 2020). Neural Processes (NPs). NPs model a random function $F: X \to Y$, where $X \subseteq \mathbb{R}^{d_1}$ and $Y \subseteq \mathbb{R}^{d_2}$. The NP represents a given instantiation $F$ of the random function through a global latent variable $z$, which parametrises the variation in $F$. Thus, we have $F(x_i) = g(x_i, z)$. For a given context set $C = \{(x_i^C, y_i^C)\}$ and target set $x_{1:n}, y_{1:n}$, the generative process is given by:
$$p(y_{1:n}, z \mid x_{1:n}, C) = p(z \mid C) \prod_{i=1}^{n} \mathcal{N}\left(y_i \mid g(x_i, z), \sigma^2\right), \qquad (2)$$
where $p(z)$ is chosen to be a multivariate standard normal distribution and $y_{1:n}$ is shorthand for the sequence $(y_1, \ldots, y_n)$. The model can be trained using an amortised variational inference procedure that naturally gives rise to a permutation-invariant encoder $q_\theta(z \mid C)$, which stores the information about the context points. Conditioned on this information, the decoder $g(x, z)$ can make predictions at any input location $x$. We note that while the domain $X$ of the random function is arbitrary, in this work we are interested only in stochastic functions with domain on the real line (time-series). Therefore, from here on our notation will reflect that, using $t$ as the input instead of $x$. The output $y$ remains the same. 3 NEURAL ODE PROCESSES. Model Overview. We introduce Neural ODE Processes (NDPs), a class of dynamics-based models that learn to approximate random functions defined over time. To that end, we consider an NP whose context is used to determine a distribution over ODEs. Concretely, the context infers a distribution over the initial position (and, optionally, the initial velocity) and, at the same time, stochastically controls the derivative function. The positions given by the ODE trajectories at any time $t_i^T$ are then decoded to give the predictions.
In what follows, we offer a detailed description of each component of the model. A schematic of the model can be seen in Figure 1. 3.1 GENERATIVE PROCESS. We first describe the generative process behind NDPs. A graphical model perspective of this process is also included in Figure 2. Encoder and Aggregator. Consider a given context set $C = \{(t_i^C, y_i^C)\}_{i \in I_C}$ of observed points. We encode this context into two latent variables, $L(t_0) \sim q_L(l(t_0) \mid C)$ and $D \sim q_D(d \mid C)$, representing the initial state and the global control of an ODE, respectively. To parametrise the distribution of the latter variable, the NDP encoder produces a representation $r_i = f_e((t_i^C, y_i^C))$ for each context pair $(t_i^C, y_i^C)$. The function $f_e$ is a neural network, fully connected or convolutional, depending on the nature of $y$. An aggregator combines all the representations $r_i$ to form a global representation $r$ that parametrises the distribution of the global latent context, $D \sim q_D(d \mid C) = \mathcal{N}(d \mid \mu_D(r), \operatorname{diag}(\sigma_D(r)))$. As the aggregator must preserve order invariance, we choose to take the element-wise mean. The distribution of $L(t_0)$ might be parametrised identically as a function of the whole context by $q_L(l(t_0) \mid C)$; in particular, if the initial observation $y_0$ is always known, then $q_L(l(t_0) \mid C) = q_L(l(t_0) \mid y_0) = \mathcal{N}(l(t_0) \mid \mu_L(y_0), \operatorname{diag}(\sigma_L(y_0)))$. Latent ODE. To obtain a distribution over functions, we are interested in capturing the dynamics that govern the time-series and exploiting the temporal nature of the data. To that end, we allow the latent context to evolve according to a Neural ODE (Chen et al., 2018) with initial position $L(t_0)$ and controlled by $D$. These two random variables factorise the uncertainty in the underlying dynamics into an uncertainty over the initial conditions (given by $L(t_0)$) and an uncertainty over the ODE derivative, given by $D$. Using the target times $t_{1:n}^T = (t_1^T, \ldots, t_n^T)$, the latent state at a given time is found by evolving a Neural ODE:
$$l(t_i^T) = l(t_0) + \int_{t_0}^{t_i^T} f_\theta(l(t), d, t)\, dt, \qquad (3)$$
where $f_\theta$ is a neural network that models the derivative of $l$. As explained above, we allow $d$ to modulate the derivative of this ODE by acting as a global control signal. Ultimately, for fixed initial conditions, this results in an uncertainty over the ODE trajectories. Decoder. To obtain a prediction at a time $t_i^T$, we decode the random state of the ODE at time $t_i^T$, given by $L(t_i^T)$. Assuming that the outputs are noisy, for a given sample $l(t_i^T)$ from this stochastic state, the decoder $g$ produces a distribution over $Y_{t_i}^T \sim p(y_i^T \mid g(l(t_i^T), t_i))$, parametrised by the decoder output. Concretely, for regression tasks, we take the target output to be normally distributed with constant (or, optionally, learned) variance, $Y_{t_i}^T \sim \mathcal{N}(y_i \mid g(l(t_i), t_i), \sigma^2)$. When $Y_{t_i}^T$ is a random vector formed of independent binary random variables (e.g., a black-and-white image), we use a Bernoulli distribution $Y_{t_i}^T \sim \prod_{j=1}^{\dim(Y)} \operatorname{Bernoulli}(g(l(t_i), t_i)_j)$. Putting everything together, for a set of observed context points $C$, the generative process of NDPs is given by the expression below, where we emphasise once again that $l(t_i)$ also implicitly depends on $l(t_0)$ and $d$:
$$p(y_{1:n}, l(t_0), d \mid t_{1:n}, C) = p(l(t_0) \mid C)\, p(d \mid C) \prod_{i=1}^{n} p(y_i \mid g(l(t_i), t_i)), \qquad (4)$$
We remark that NDPs generalise NPs defined over time.
If the latent NODE learns the trivial velocity $f_\theta(l(t), d, t) = 0$, the random state $L(t) = L(t_0)$ remains constant at all times $t$. In this case, the distribution over functions is directly determined by $L(t_0) \sim p(l(t_0) \mid C)$, which substitutes the random variable $Z$ from a regular NP. For greater flexibility, the control signal $d$ can also be supplied to the decoder $g(l(t), d, t)$. This shows that, in principle, NDPs are at least as expressive as NPs. Therefore, NDPs could be a sensible choice even in applications where the time-series are not solely determined by some underlying dynamics, but are also influenced by other generative factors.
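As an illustration of the generative process in Eqs. 3-4, below is a minimal sketch of the NDP decoding pass with a simple Euler integrator; `f_theta` and `decoder` are placeholder callables standing in for the trained networks, and the fixed step count is an arbitrary choice rather than the solver used in the paper.

```python
import numpy as np

def ndp_decode(l0, d, t0, target_times, f_theta, decoder, steps=20):
    """Euler-integrate the latent ODE l'(t) = f_theta(l, d, t) from t0 through
    each (sorted) target time, decoding the latent state at every target."""
    preds, l, t = [], np.asarray(l0, dtype=float), float(t0)
    for t_next in sorted(target_times):
        h = (t_next - t) / steps
        for _ in range(steps):
            l = l + h * f_theta(l, d, t)
            t += h
        preds.append(decoder(l, t_next))
    return preds
```

Sampling `l0` from $q_L(\cdot \mid C)$ and `d` from $q_D(\cdot \mid C)$ before each call yields one function draw from the process, which is how the uncertainty over dynamics manifests at prediction time.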
The proposed NDP has two main advantages: 1) it has the capability to adapt to incoming data points in a time-series (unlike a NODE) without retraining, and 2) it can provide a measure of uncertainty over the underlying dynamics of the time-series. NDP partitions the global latent context $z$ into a latent position $l$ and a sub-context $z'$. It then lets $l$ follow an ODE, called the latent ODE. This part is the main innovation of the paper: by defining a latent ODE, the authors take advantage of ODEs to find the underlying hidden dynamics of the time-series. This assumption helps find better dynamics when the generating processes of the time-series follow some ODEs. The authors then define a stochastic process much like the idea from the Neural Processes (NP) paper, that is, by defining a latent context $z$ (which here is a concatenation of $l$ and the sub-context $z'$) with a prior $p(z)$ and integrating a Gaussian distribution of a function of $z$ (the decoder $g(l, t, z')$, which is a neural network) over $z$.
Dataset Meta-Learning from Kernel Ridge-Regression
One of the most fundamental aspects of any machine learning algorithm is the training data used by the algorithm. We introduce the novel concept of ε-approximation of datasets, obtaining datasets which are much smaller than, or are significant corruptions of, the original training data while maintaining similar model performance. We introduce a meta-learning algorithm called Kernel Inducing Points (KIP) for obtaining such remarkable datasets, inspired by recent developments in the correspondence between infinitely-wide neural networks and kernel ridge-regression (KRR). For KRR tasks, we demonstrate that KIP can compress datasets by one or two orders of magnitude, significantly improving previous dataset distillation and subset selection methods while obtaining state-of-the-art results for MNIST and CIFAR-10 classification. Furthermore, our KIP-learned datasets are transferable to the training of finite-width neural networks even beyond the lazy-training regime, which leads to state-of-the-art results for neural network dataset distillation, with potential applications to privacy preservation. 1 INTRODUCTION. Datasets are a pivotal component in any machine learning task. Typically, a machine learning problem regards a dataset as given and uses it to train a model according to some specific objective. In this work, we depart from the traditional paradigm by instead optimizing a dataset with respect to a learning objective, from which the resulting dataset can be used in a range of downstream learning tasks. Our work is directly motivated by several challenges in existing learning methods. Kernel methods or instance-based learning (Vinyals et al., 2016; Snell et al., 2017; Kaya & Bilge, 2019) in general require a support dataset to be deployed at inference time. Achieving good prediction accuracy typically requires having a large support set, which inevitably increases both memory footprint and latency at inference time (the scalability issue). It can also raise privacy concerns when deploying a support set of original examples, e.g., distributing raw images to user devices. Additional challenges to scalability include, for instance, the desire for rapid hyper-parameter search (Shleifer & Prokop, 2019) and minimizing the resources consumed when replaying data for continual learning (Borsos et al., 2020). A valuable contribution to all these problems would be to find surrogate datasets that can mitigate the challenges which occur for naturally occurring datasets without a significant sacrifice in performance. This suggests the following question: What is the space of datasets, possibly with constraints in regards to size or signal preserved, whose trained models are all (approximately) equivalent to some specific model? In attempting to answer this question, in the setting of supervised learning on image data, we discover a rich variety of datasets, diverse in size and human interpretability while also robust to model architectures, which yield high performance or state-of-the-art (SOTA) results when used as training data. We obtain such datasets through the introduction of a novel meta-learning algorithm called Kernel Inducing Points (KIP). Figure 1 shows some example images from our learned datasets. We explore KIP in the context of compressing and corrupting datasets, validating its effectiveness in the setting of kernel ridge-regression (KRR) and neural network training on the benchmark datasets MNIST and CIFAR-10.
Our contributions can be summarized as follows. 1.1 SUMMARY OF CONTRIBUTIONS. • We formulate a novel concept of ε-approximation of a dataset. This provides a theoretical framework for understanding dataset distillation and compression. • We introduce Kernel Inducing Points (KIP), a meta-learning algorithm for obtaining ε-approximations of datasets. We establish convergence in the case of a linear kernel in Theorem 1. We also introduce a variant called Label Solve (LS), which gives a closed-form solution for obtaining distilled datasets differing only via their labels. • We explore the following aspects of ε-approximation of datasets: 1. Compression (Distillation) for Kernel Ridge-Regression: For kernel ridge regression, we improve sample efficiency by over one or two orders of magnitude, e.g., using 10 images to outperform hundreds or thousands of images (Tables 1, 2 vs Tables A1, A2). We obtain state-of-the-art results for MNIST and CIFAR-10 classification while using few enough images (10K) to allow for in-memory inference (Tables A3, A4). 2. Compression (Distillation) for Neural Networks: We obtain state-of-the-art dataset distillation results for the training of neural networks, oftentimes even with only a single-hidden-layer fully-connected network (Tables 1 and 2). 3. Privacy: We obtain datasets with a strong trade-off between corruption and test accuracy, which suggests applications to privacy-preserving dataset creation. In particular, we produce images with up to 90% of their pixels corrupted with limited degradation in performance, as measured by test accuracy in the appropriate regimes (Figures 3, A3, and Tables A5-A10), and which simultaneously outperform natural images in a wide variety of settings. • We provide an open source implementation of KIP and LS, available in an interactive Colab notebook.¹ 2 SETUP. In this section we define some key concepts for our methods. ¹https://colab.research.google.com/github/google-research/google-research/blob/master/kip/KIP.ipynb Definition 1. A dataset in $\mathbb{R}^d$ is a set of $n$ distinct vectors in $\mathbb{R}^d$ for some $n \geq 1$. We refer to each such vector as a datapoint. A dataset is labeled if each datapoint is paired with a label vector in $\mathbb{R}^C$, for some fixed $C$. A datapoint along with its corresponding label is a labeled datapoint. We use the notation $D = (X, y)$, where $X \in \mathbb{R}^{n \times d}$ and $y \in \mathbb{R}^{n \times C}$, to denote the tuple of unlabeled datapoints $X$ with their corresponding labels $y$. We henceforth assume all datasets are labeled. Next, we introduce our notions of approximation, both of functions (representing learned algorithms) and of datasets, which are characterized in terms of performance with respect to a loss function rather than closeness with respect to a metric. A loss function $\ell: \mathbb{R}^C \times \mathbb{R}^C \to \mathbb{R}$ is one that is non-negative and satisfies $\ell(z, z) = 0$ for all $z$. Definition 2. Fix a loss function $\ell$ and let $f, \tilde{f}: \mathbb{R}^d \to \mathbb{R}^C$ be two functions. Let $\epsilon \geq 0$. 1. Given a distribution $P$ on $\mathbb{R}^d \times \mathbb{R}^C$, we say $f$ and $\tilde{f}$ are weakly $\epsilon$-close with respect to $(\ell, P)$ if
$$\left| \mathbb{E}_{(x,y)\sim P}\left(\ell(f(x), y)\right) - \mathbb{E}_{(x,y)\sim P}\left(\ell(\tilde{f}(x), y)\right) \right| \leq \epsilon. \qquad (1)$$
2. Given a distribution $P$ on $\mathbb{R}^d$, we say $f$ and $\tilde{f}$ are strongly $\epsilon$-close with respect to $(\ell, P)$ if
$$\mathbb{E}_{x\sim P}\left(\ell(f(x), \tilde{f}(x))\right) \leq \epsilon. \qquad (2)$$
We drop explicit reference to $(\ell, P)$ if their values are understood or immaterial. Given a learning algorithm $A$ (e.g.,
gradient descent with respect to the loss function of a neural network), let $A_D$ denote the resulting model obtained after training $A$ on $D$. We regard $A_D$ as a mapping from datapoints to prediction labels. Definition 3. Fix learning algorithms $A$ and $\tilde{A}$. Let $D$ and $\tilde{D}$ be two labeled datasets in $\mathbb{R}^d$ with label space $\mathbb{R}^C$. Let $\epsilon \geq 0$. We say $\tilde{D}$ is a weak $\epsilon$-approximation of $D$ with respect to $(\tilde{A}, A, \ell, P)$ if $\tilde{A}_{\tilde{D}}$ and $A_D$ are weakly $\epsilon$-close with respect to $(\ell, P)$, where $\ell$ is a loss function and $P$ is a distribution on $\mathbb{R}^d \times \mathbb{R}^C$. We define strong $\epsilon$-approximation similarly. We drop explicit reference to (some of) $\tilde{A}, A, \ell, P$ if their values are understood or immaterial. We provide some justification for this definition in the Appendix. In this paper, we measure $\epsilon$-approximation with respect to the 0-1 loss for multiway classification (i.e., accuracy). We focus on weak $\epsilon$-approximation, since in most of our experiments we consider models in the low-data regime with large classification error rates, in which case sample-wise agreement of two models is not of central importance. On the other hand, observe that if two models have population classification error rates less than $\epsilon/2$, then (2) is automatically satisfied, in which case the notions of weak approximation and strong approximation converge. We list several examples of $\epsilon$-approximation, with $\epsilon = 0$, for the case when $\tilde{A} = A$: Example 1: Support Vector Machines. Given a dataset $D$ of size $N$, train an SVM on $D$ and obtain $M$ support vectors. These $M$ support vectors yield a dataset $\tilde{D}$ that is a strong 0-approximation to $D$ in the linearly separable case, while in the non-separable case one has to also include the datapoints with positive slack. Asymptotic lower bounds asserting $M = O(N)$ have been shown in Steinwart (2003).² Example 2: Ridge Regression. Any two datasets $D$ and $\tilde{D}$ that determine the same ridge-regressor are 0-approximations of each other. In particular, in the scalar case, we can obtain an arbitrarily small 0-approximating $\tilde{D}$ as follows. Given training data $D = (X, y)$ in $\mathbb{R}^d$, the corresponding ridge-regressor is the predictor
$$x^* \mapsto w \cdot x^*, \qquad (3)$$
$$w = \Phi_\lambda(X)\, y, \qquad (4)$$
$$\Phi_\lambda(X) = X^T (XX^T + \lambda I)^{-1}, \qquad (5)$$
where for $\lambda = 0$ we interpret the inverse as a pseudoinverse. ²As a specific example, many thousands of support vectors are needed for MNIST classification (Bordes et al., 2005). It follows that for any given $w \in \mathbb{R}^{d \times 1}$, we can always find $(\tilde{X}, \tilde{y})$ of arbitrary size (i.e., $\tilde{X} \in \mathbb{R}^{n \times d}$, $\tilde{y} \in \mathbb{R}^{n \times 1}$ with $n$ arbitrarily small) that satisfies $w = \Phi_\lambda(\tilde{X})\, \tilde{y}$: simply choose $\tilde{X}$ such that $w$ is in the range of $\Phi_\lambda(\tilde{X})$. The resulting dataset $(\tilde{X}, \tilde{y})$ is a 0-approximation to $D$. If we have a $C$-dimensional regression problem, the preceding analysis can be repeated component-wise in label space to show 0-approximation with a dataset of size at least $C$ (since then the rank of $\Phi_\lambda(\tilde{X})$ can be made at least the rank of $w \in \mathbb{R}^{d \times C}$). We are interested in learning algorithms given by KRR and neural networks. These can be investigated in unison via neural tangent kernels. Furthermore, we study two settings for the usage of $\epsilon$-approximate datasets, though there are bound to be others: 1. (Sample efficiency / compression) Fix $\epsilon$. What is the minimum size of $\tilde{D}$ needed in order for $\tilde{D}$ to be an $\epsilon$-approximate dataset? 2.
(Privacy guarantee) Can an $\epsilon$-approximate dataset be found such that the distribution from which it is drawn and the distribution from which the original training dataset is drawn satisfy a given upper bound on mutual information? Motivated by these questions, we introduce the following definitions. Definition 4. (Heuristic) Let $\tilde{D}$ and $D$ be two datasets such that $\tilde{D}$ is a weak $\epsilon$-approximation of $D$, with $|\tilde{D}| \leq |D|$ and $\epsilon$ small. We call $|D|/|\tilde{D}|$ the compression ratio. In other words, the compression ratio is a measure of how well $\tilde{D}$ compresses the information available in $D$, as measured by the approximate agreement of their population loss. Our definition is heuristic in that $\epsilon$ is not precisely quantified, and so it is meant as a soft measure of compression. Definition 5. Let $\Gamma$ be an algorithm that takes a dataset $D$ in $\mathbb{R}^d$ and returns a (random) collection of datasets in $\mathbb{R}^d$. For $0 \leq \rho \leq 1$, we say that $\Gamma$ is $\rho$-corrupted if, for any input dataset $D$, every datapoint drawn from the datasets of $\Gamma(D)$ has at least a $\rho$ fraction of its coordinates independent of $D$. In other words, datasets produced by $\Gamma$ have a $\rho$ fraction of their entries containing no information about the dataset $D$ (e.g., because they have a fixed value or are filled in randomly). Corrupting information is naturally a way of enhancing privacy, as it makes it more difficult for an attacker to obtain useful information about the data used to train a model. Adding noise to the inputs of a neural network or to its gradient updates can be shown to provide differential privacy guarantees (Abadi et al. (2016)).
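As a concrete check of Example 2 above, the following sketch constructs a single-datapoint dataset that induces the same ridge regressor as a much larger one ($\lambda = 0$, scalar labels); the random data is purely illustrative.

```python
import numpy as np

def ridge_weights(X, y, lam):
    """Eqs. 3-5: w = X^T (X X^T + lam I)^{-1} y, with a pseudoinverse at lam = 0."""
    G = X @ X.T + lam * np.eye(X.shape[0])
    return X.T @ (np.linalg.pinv(G) @ y)

rng = np.random.default_rng(0)
X, y = rng.standard_normal((500, 10)), rng.standard_normal((500, 1))
w = ridge_weights(X, y, lam=0.0)

# One-point surrogate: with X_tilde = w^T, we get Phi_0(X_tilde) y_tilde = w
# exactly when y_tilde = w^T w, so the surrogate is a 0-approximation of D.
X_t, y_t = w.T, (w.T @ w)
assert np.allclose(ridge_weights(X_t, y_t, lam=0.0), w)
```

The same construction, repeated per output component, gives the size-$C$ surrogate mentioned for the $C$-dimensional case.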
This paper proposes a data-driven approach to choose an informative surrogate sub-dataset, termed an "ε-approximation", from the original data set. A meta-learning algorithm called Kernel Inducing Points (KIP) is proposed to obtain such sub-datasets for (linear) Kernel Ridge Regression (KRR), with the potential to extend to other machine learning algorithms such as neural networks. Some theoretical results are provided for KRR with a linear kernel. The empirical performance of the proposed algorithm is evaluated by experiments based on synthetic data and some standard benchmark data sets.
Status-Quo Policy Gradient in Multi-agent Reinforcement Learning
1 INTRODUCTION . In sequential social dilemmas , individually rational behavior leads to outcomes that are sub-optimal for each individual in the group ( Hardin , 1968 ; Ostrom , 1990 ; Ostrom et al. , 1999 ; Dietz et al. , 2003 ) . Current state-of-the-art Multi-Agent Deep Reinforcement Learning ( MARL ) methods that train agents independently can lead to agents that play selfishly and do not converge to optimal policies , even in simple social dilemmas ( Foerster et al. , 2018 ; Lerer & Peysakhovich , 2017 ) . To illustrate why it is challenging to evolve optimal policies in such dilemmas , we consider the Coin Game ( Foerster et al. , 2018 ) . Each agent can play either selfishly ( pick all coins ) or cooperatively ( pick only coins of its color ) . Regardless of the other agent ’ s behavior , the individually rational choice for an agent is to play selfishly , either to minimize losses ( avoid being exploited ) or to maximize gains ( exploit the other agent ) . However , when both agents behave rationally , they try to pick all coins and achieve an average long term reward of −0.5 . In contrast , if both play cooperatively , then the average long term reward for each agent is 0.5 . Therefore , when agents cooperate , they are both better off . Training Deep RL agents independently in the Coin Game using state-of-the-art methods leads to mutually harmful selfish behavior ( Section 2.2 ) . The problem of how independently learning agents evolve optimal behavior in social dilemmas has been studied by researchers through human studies and simulation models ( Fudenberg & Maskin , 1986 ; Green & Porter , 1984 ; Fudenberg et al. , 1994 ; Kamada & Kominers , 2010 ; Abreu et al. , 1990 ) . A large body of work has looked at the mechanism of evolution of cooperation through reciprocal behaviour and indirect reciprocity ( Trivers , 1971 ; Axelrod , 1984 ; Nowak & Sigmund , 1992 ; 1993 ; 1998 ) , through variants of reinforcement using aspiration ( Macy & Flache , 2002 ) , attitude ( Damer & Gini , 2008 ) or multi-agent reinforcement learning ( Sandholm & Crites , 1996 ; Wunder et al. , 2010 ) , and under specific conditions ( Banerjee & Sen , 2007 ) using different learning rates ( de Cote et al. , 2006 ) similar to WoLF ( Bowling & Veloso , 2002 ) as well as using embedded emotion ( Yu et al. , 2015 ) , social networks ( Ohtsuki et al. , 2006 ; Santos & Pacheco , 2006 ) . However , these approaches do not directly apply to Deep RL agents ( Leibo et al. , 2017 ) . Recent work in this direction ( Kleiman-Weiner et al. , 2016 ; Julien et al. , 2017 ; Peysakhovich & Lerer , 2018 ) focuses on letting agents learn strategies in multi-agent settings through interactions with other agents . Leibo et al . ( 2017 ) defines the problem of social dilemmas in the Deep RL framework and analyzes the outcomes of a fruit-gathering game ( Julien et al. , 2017 ) . They vary the abundance of resources and the cost of conflict in the fruit environment to generate degrees of cooperation between agents . Hughes et al . ( 2018 ) defines an intrinsic reward ( inequality aversion ) that attempts to reduce the difference in obtained rewards between agents . The agents are designed to have an aversion to both advantageous ( guilt ) and disadvantageous ( unfairness ) reward allocation . This handcrafting of loss with mutual fairness evolves cooperation , but it leaves the agent vulnerable to exploitation . LOLA ( Foerster et al. 
, 2018) uses opponent awareness to achieve high levels of cooperation in the Coin Game and the Iterated Prisoner's Dilemma game. However, the LOLA agent assumes access to the other agent's network architecture, observations, and learning algorithms. This level of access is analogous to getting complete access to the other agent's private information and therefore devising a strategy with full knowledge of how they are going to play. Wang et al. (2019) proposes an evolutionary Deep RL setup to evolve cooperation. They define an intrinsic reward that is based on features generated from the agent's past and future rewards, and this reward is shared with other agents. They use evolution to maximize the sum of rewards among the agents and thus evolve cooperative behavior. However, sharing rewards in this indirect way enforces cooperation rather than evolving it through independently learning agents. Interestingly, humans evolve individually and socially optimal strategies in such social dilemmas without sharing rewards or having access to private information. Inspired by ideas from human psychology (Samuelson & Zeckhauser, 1988; Kahneman et al., 1991; Kahneman, 2011; Thaler & Sunstein, 2009) that attribute this behavior in humans to the status-quo bias (Guney & Richter, 2018), we present SQLoss and the corresponding status-quo policy gradient formulation for RL. Agents trained with SQLoss evolve optimal policies in multi-agent social dilemmas without sharing rewards, gradients, or using a communication channel. Intuitively, SQLoss encourages an agent to stick to the action taken previously, with the encouragement proportional to the reward received previously. Therefore, mutually cooperating agents stick to cooperation, since the status quo yields a higher individual reward, while unilateral defection by any agent leads to the other agent also switching to defection due to the status-quo loss. Subsequently, the short-term reward of exploitation is overcome by the long-term cost of mutual defection, and agents gradually switch to cooperation. To apply SQLoss to games where a sequence of non-trivial actions determines cooperation and defection, we present GameDistill, an algorithm that reduces a dynamic game with visual input to a matrix game. GameDistill uses self-supervision and clustering to automatically extract distinct policies from a sequential social dilemma game. Our key contributions can be summarised as: 1. We introduce a Status-Quo loss (SQLoss, Section 2.3) and an associated policy gradient-based algorithm to evolve optimal behavior for agents playing matrix games, in which an agent can act in either a cooperative or a selfish manner by choosing between a cooperative and a selfish policy. We empirically demonstrate that agents trained with SQLoss evolve optimal behavior in several iterated matrix-game social dilemmas (Section 4). 2. We propose GameDistill (Section 2.4), an algorithm that reduces a social dilemma game with visual observations to an iterated matrix game by extracting policies that implement cooperative and selfish behavior. We empirically demonstrate that GameDistill extracts cooperative and selfish policies for the Coin Game (Section 4.2). 3. We demonstrate that when agents run GameDistill followed by MARL game-play using SQLoss, they converge to individually as well as socially desirable cooperative behavior in a social dilemma game with visual observations (Section 4.2). 2 APPROACH.
2.1 SOCIAL DILEMMAS MODELED AS ITERATED MATRIX GAMES. To remain consistent with previous work, we adopt the notation of Foerster et al. (2018). We model social dilemmas as general-sum Markov (simultaneous move) games. A multi-agent Markov game is specified by $G = \langle S, A, U, P, r, n, \gamma \rangle$. $S$ denotes the state space of the game. $n$ denotes the number of agents playing the game. At each step of the game, each agent $a \in A$ selects an action $u^a \in U$. $\vec{u}$ denotes the joint action vector that represents the simultaneous actions of all agents. The joint action $\vec{u}$ changes the state of the game from $s$ to $s'$ according to the state transition function $P(s' \mid \vec{u}, s): S \times U \times S \to [0, 1]$. At the end of each step, each agent $a$ gets a reward according to the reward function $r^a(s, \vec{u}): S \times U \to \mathbb{R}$. The reward obtained by an agent at each step is a function of the actions played by all agents. For an agent $a$, the discounted future return from time $t$ is defined as $R_t^a = \sum_{l=0}^{\infty} \gamma^l r_{t+l}^a$, where $\gamma \in [0, 1)$ is the discount factor. Each agent independently attempts to maximize its expected discounted return. Matrix games are the special case of two-player perfectly observable Markov games (Foerster et al., 2018). Table 1 shows examples of matrix games that represent social dilemmas. Consider the Prisoner's Dilemma game in Table 1a. Each agent can either cooperate (C) or defect (D). Playing D is the rational choice for an agent, regardless of whether the other agent plays C or D. Therefore, if both agents play rationally, they each receive a reward of $-2$. However, if each agent plays C, then each obtains a reward of $-1$. This fact, that individually rational behavior leads to a sub-optimal group (and individual) outcome, highlights the dilemma. In Infinitely Iterated Matrix Games, agents repeatedly play a particular matrix game against each other. In each iteration of the game, each agent has access to the actions played by both agents in the previous iteration. Therefore, the state input to an RL agent consists of both agents' actions in the previous iteration of the game. We adopt this state formulation, as is typically done in such games (Press & Dyson, 2012; Foerster et al., 2018). The infinitely iterated variations of the matrix games in Table 1 represent sequential social dilemmas. We refer to infinitely iterated matrix games as iterated matrix games in subsequent sections for ease of presentation. 2.2 LEARNING POLICIES IN ITERATED MATRIX GAMES: THE SELFISH LEARNER. The standard method to model agents in iterated matrix games is to model each agent as an RL agent that independently attempts to maximize its expected total discounted reward. Several approaches to model agents in this way use policy gradient-based methods (Sutton et al., 2000; Williams, 1992). Policy gradient methods update an agent's policy, parameterized by $\theta^a$, by performing gradient ascent on the expected total discounted reward $\mathbb{E}[R_0^a]$. Formally, let $\theta^a$ denote the parameterized version of an agent's policy $\pi^a$ and $V^a_{\theta^1, \theta^2}$ denote the total expected discounted reward for agent $a$. Here, $V^a$ is a function of the policy parameters $(\theta^1, \theta^2)$ of both agents. In the $i$th iteration of the game, each agent updates $\theta^a_i$ to $\theta^a_{i+1}$ such that it maximizes its total expected discounted reward.
$\theta^a_{i+1}$ is computed as follows:
$$\theta^1_{i+1} = \arg\max_{\theta^1} V^1(\theta^1, \theta^2_i) \quad \text{and} \quad \theta^2_{i+1} = \arg\max_{\theta^2} V^2(\theta^1_i, \theta^2) \quad (1)$$
For agents trained using reinforcement learning, the gradient-ascent rule to update $\theta^1_{i+1}$ is
$$f^1_{nl} = \nabla_{\theta^1_i} V^1(\theta^1_i, \theta^2_i) \cdot \delta \quad \text{and} \quad \theta^1_{i+1} = \theta^1_i + f^1_{nl}(\theta^1_i, \theta^2_i) \quad (2)$$
where $\delta$ is the step size of the updates. In the Iterated Prisoner's Dilemma (IPD) game, agents trained with the policy gradient update method converge to a sub-optimal mutual defection equilibrium (Figure 3a; Lerer & Peysakhovich (2017)). This sub-optimal equilibrium attained by Selfish Learners motivates us to explore alternative methods that could lead to a desirable cooperative equilibrium. We denote the agent trained using policy gradient updates as a Selfish Learner (SL).
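To make the Selfish Learner baseline concrete, the following is a minimal sketch of two independent policy-gradient agents playing the Iterated Prisoner's Dilemma. It assumes tabular softmax policies, plain REINFORCE updates, and the standard IPD payoff values; all names (`SelfishLearner`, `play_episode`, `PAYOFF`) are ours and this is not the authors' implementation.

```python
# Minimal sketch of the Selfish Learner (SL) baseline for the IPD,
# assuming tabular softmax policies and plain REINFORCE.
import numpy as np

PAYOFF = {  # (my action, other action) -> my reward; C = 0, D = 1 (assumed values)
    (0, 0): -1.0, (0, 1): -3.0, (1, 0): 0.0, (1, 1): -2.0,
}
N_STATES = 5  # initial state + 4 previous joint actions (CC, CD, DC, DD)

class SelfishLearner:
    def __init__(self, lr=0.1, gamma=0.96):
        self.theta = np.zeros((N_STATES, 2))  # per-state action logits
        self.lr, self.gamma = lr, gamma

    def policy(self, s):
        z = np.exp(self.theta[s] - self.theta[s].max())
        return z / z.sum()

    def act(self, s):
        return np.random.choice(2, p=self.policy(s))

    def update(self, traj):
        # REINFORCE: ascend the gradient of the expected discounted return.
        g = 0.0
        for s, a, r in reversed(traj):
            g = r + self.gamma * g           # return-to-go
            grad = -self.policy(s)           # grad of log pi(a|s) w.r.t. logits
            grad[a] += 1.0
            self.theta[s] += self.lr * g * grad

def play_episode(a1, a2, steps=64):
    s, t1, t2 = 0, [], []
    for _ in range(steps):
        u1, u2 = a1.act(s), a2.act(s)
        t1.append((s, u1, PAYOFF[(u1, u2)]))
        t2.append((s, u2, PAYOFF[(u2, u1)]))
        s = 1 + 2 * u1 + u2  # encode previous joint action as the next state
    return t1, t2

agents = SelfishLearner(), SelfishLearner()
for _ in range(2000):
    t1, t2 = play_episode(*agents)
    agents[0].update(t1)
    agents[1].update(t2)
# With purely selfish updates, both policies typically drift toward
# mutual defection, the sub-optimal equilibrium described above.
```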
This paper focuses on the problem of multi-agent cooperation in social dilemmas, in which mutual defection is individually rational but collectively suboptimal. The authors use the status-quo bias from human psychology to motivate a new training method, called SQLoss: 1) for iterated matrix games, each agent is trained with additional imagined episodes in which the actions taken by both agents are repeated for a random number of steps; 2) for settings where cooperation and defection are associated with a sequence of actions, the authors provide a procedure called GameDistill, based on trajectory encoding, clustering, and action prediction, to arrive at oracles for a "cooperative action" and a "defection action" at each state, which can then be used for the imagined episodes. Experiments show that SQLoss achieves better social welfare than LOLA and standard independent RL in classic iterated matrix games, as well as in the Coin Game with higher-dimensional image observations.
WaveGrad: Estimating Gradients for Waveform Generation
1 INTRODUCTION . Deep generative models have revolutionized speech synthesis ( Oord et al. , 2016 ; Sotelo et al. , 2017 ; Wang et al. , 2017 ; Biadsy et al. , 2019 ; Jia et al. , 2019 ; Vasquez & Lewis , 2019 ) . Autoregressive models , in particular , have been popular for raw audio generation thanks to their tractable likelihoods , simple inference procedures , and high fidelity samples ( Oord et al. , 2016 ; Mehri et al. , 2017 ; Kalchbrenner et al. , 2018 ; Song et al. , 2019 ; Valin & Skoglund , 2019 ) . However , autoregressive models require a large number of sequential computations to generate an audio sample . This makes it challenging to deploy them in real-world applications where faster than real time generation is essential , such as digital voice assistants on smart speakers , even using specialized hardware . There has been a plethora of research into non-autoregressive models for audio generation , including normalizing flows such as inverse autoregressive flows ( Oord et al. , 2018 ; Ping et al. , 2019 ) , generative flows ( Prenger et al. , 2019 ; Kim et al. , 2019 ) , and continuous normalizing flows ( Kim et al. , 2020 ; Wu & Ling , 2020 ) , implicit generative models such as generative adversarial networks ( GAN ) ( Donahue et al. , 2018 ; Engel et al. , 2019 ; Kumar et al. , 2019 ; Yamamoto et al. , 2020 ; Bińkowski et al. , 2020 ; Yang et al. , 2020a ; b ; McCarthy & Ahmed , 2020 ) and energy score ( Gritsenko et al. , 2020 ) , variational auto-encoder models ( Peng et al. , 2020 ) , as well as models inspired by digital signal processing ( Ai & Ling , 2020 ; Engel et al. , 2020 ) , and the speech production mechanism ( Juvela et al. , 2019 ; Wang et al. , 2020 ) . Although such models improve inference speed by requiring fewer sequential operations , they often yield lower quality samples than autoregressive models . This paper introduces WaveGrad , a conditional generative model of waveform samples that estimates the gradients of the data log-density as opposed to the density itself . WaveGrad is simple to train , and implicitly optimizes for the weighted variational lower-bound of the log-likelihood . ∗Work done during an internship at Google Brain . †Equal contribution . WaveGrad is non-autoregressive , and requires only a constant number of generation steps during inference . Figure 1 visualizes the inference process of WaveGrad . WaveGrad builds on a class of generative models that emerges through learning the gradient of the data log-density , also known as the Stein score function ( Hyvärinen , 2005 ; Vincent , 2011 ) . During inference , one can rely on the gradient estimate of the data log-density and use gradient-based samplers ( e.g. , Langevin dynamics ) to sample from the model ( Song & Ermon , 2019 ) . Promising results have been achieved on image synthesis ( Song & Ermon , 2019 ; 2020 ) and shape generation ( Cai et al. , 2020 ) . Closely related are diffusion probabilistic models ( Sohl-Dickstein et al. , 2015 ) , which capture the output distribution through a Markov chain of latent variables . Although these models do not offer tractable likelihoods , one can optimize a ( weighted ) variational lower-bound on the log-likelihood . The training objective can be reparameterized to resemble deonising score matching ( Vincent , 2011 ) , and can be interpreted as estimating the data log-density gradients . 
The model is non-autoregressive during inference, requiring only a constant number of generation steps, and uses a Langevin-dynamics-like sampler to generate the output starting from Gaussian noise. The key contributions of this paper are summarized as follows:

• WaveGrad combines recent techniques from score matching (Song et al., 2020; Song & Ermon, 2020) and diffusion probabilistic models (Sohl-Dickstein et al., 2015; Ho et al., 2020) to address conditional speech synthesis.

• We build and compare two variants of the WaveGrad model: (1) WaveGrad conditioned on a discrete refinement step index following Ho et al. (2020); (2) WaveGrad conditioned on a continuous scalar indicating the noise level. We find the novel continuous variant more effective, especially because, once the model is trained, a different number of refinement steps can be used for inference. The proposed continuous noise schedule enables our model to use fewer inference iterations while maintaining the same quality (e.g., 6 vs. 50).

• We demonstrate that WaveGrad is capable of generating high-fidelity audio samples, outperforming adversarial non-autoregressive models (Yamamoto et al., 2020; Kumar et al., 2019; Yang et al., 2020a; Bińkowski et al., 2020) and matching one of the best autoregressive models (Kalchbrenner et al., 2018) in terms of subjective naturalness. WaveGrad is capable of generating high-fidelity samples using as few as six refinement steps.

2 ESTIMATING GRADIENTS FOR WAVEFORM GENERATION.

We begin with a brief review of the Stein score function, Langevin dynamics, and score matching. The Stein score function (Hyvärinen, 2005) is the gradient of the data log-density $\log p(y)$ with respect to the datapoint $y$:
$$s(y) = \nabla_y \log p(y). \quad (1)$$
Given the Stein score function $s(\cdot)$, one can draw samples from the corresponding density, $\tilde{y} \sim p(y)$, via Langevin dynamics, which can be interpreted as stochastic gradient ascent in the data space:
$$\tilde{y}_{i+1} = \tilde{y}_i + \frac{\eta}{2}\, s(\tilde{y}_i) + \sqrt{\eta}\, z_i, \quad (2)$$
where $\eta > 0$ is the step size, $z_i \sim \mathcal{N}(0, I)$, and $I$ denotes an identity matrix. A variant (Ho et al., 2020) is used as our inference procedure.

A generative model can be built by training a neural network to learn the Stein score function directly, using Langevin dynamics for inference. This approach, known as score matching (Hyvärinen, 2005; Vincent, 2011), has seen success in image (Song & Ermon, 2019; 2020) and shape (Cai et al., 2020) generation. The denoising score matching objective (Vincent, 2011) takes the form:
$$\mathbb{E}_{y \sim p(y)}\, \mathbb{E}_{\tilde{y} \sim q(\tilde{y} \mid y)} \left[ \left\| s_\theta(\tilde{y}) - \nabla_{\tilde{y}} \log q(\tilde{y} \mid y) \right\|_2^2 \right], \quad (3)$$
where $p(\cdot)$ is the data distribution and $q(\cdot)$ is a noise distribution. Recently, Song & Ermon (2019) proposed a weighted denoising score matching objective, in which data is perturbed with different levels of Gaussian noise, and the score function $s_\theta(\tilde{y}, \sigma)$ is conditioned on $\sigma$, the standard deviation of the noise used:
$$\sum_{\sigma \in S} \lambda(\sigma)\, \mathbb{E}_{y \sim p(y)}\, \mathbb{E}_{\tilde{y} \sim \mathcal{N}(y, \sigma)} \left[ \left\| s_\theta(\tilde{y}, \sigma) + \frac{\tilde{y} - y}{\sigma^2} \right\|_2^2 \right], \quad (4)$$
where $S$ is a set of standard deviation values used to perturb the data, and $\lambda(\sigma)$ is a weighting function for different $\sigma$. WaveGrad is a variant of this approach applied to learning conditional generative models of the form $p(y \mid x)$. WaveGrad adopts a similar objective which combines the ideas of Vincent (2011), Ho et al. (2020), and Song & Ermon (2019).
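As a concrete illustration of Eq. (2), the sketch below draws samples via Langevin dynamics from a known score function. `score_fn` stands in for a trained network such as $s_\theta$; the 1-D Gaussian score used in the example is our illustration, not part of WaveGrad.

```python
# Sketch of Langevin-dynamics sampling from a score s(y) = grad_y log p(y),
# following Eq. (2).
import numpy as np

def langevin_sample(score_fn, y0, eta=1e-2, n_steps=1000, rng=None):
    rng = rng or np.random.default_rng(0)
    y = np.array(y0, dtype=float)
    for _ in range(n_steps):
        z = rng.standard_normal(y.shape)          # z_i ~ N(0, I)
        y = y + 0.5 * eta * score_fn(y) + np.sqrt(eta) * z
    return y

# Illustration: the score of N(3, 1) is -(y - 3); samples concentrate near 3.
samples = langevin_sample(lambda y: -(y - 3.0), y0=np.zeros(5))
print(samples)
```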
WaveGrad learns the gradient of the data density , and uses a sampler similar to Langevin dynamics for inference . The denoising score matching framework relies on a noise distribution to provide support for learning the gradient of the data log density ( i.e. , q in Equation 3 , andN ( · , σ ) in Equation 4 ) . The choice of the noise distribution is critical for achieving high quality samples ( Song & Ermon , 2020 ) . As shown in Figure 2 , WaveGrad relies on the diffusion model framework ( Sohl-Dickstein et al. , 2015 ; Ho et al. , 2020 ) to generate the noise distribution used to learn the score function . 2.1 WAVEGRAD AS A DIFFUSION PROBABILISTIC MODEL . Ho et al . ( 2020 ) observed that diffusion probabilistic models ( Sohl-Dickstein et al. , 2015 ) and score matching objectives ( Song & Ermon , 2019 ; Vincent , 2011 ; Song & Ermon , 2020 ) are closely related . As such , we will first introduce WaveGrad as a diffusion probabilistic model . We adapt the diffusion model setup in Ho et al . ( 2020 ) , from unconditional image generation to conditional raw audio waveform generation . WaveGrad models the conditional distribution pθ ( y0 | Algorithm 1 Training . WaveGrad directly conditions on the continuous noise level √ ᾱ . l is from a predefined noise schedule . 1 : repeat 2 : y0 ∼ q ( y0 ) 3 : s ∼ Uniform ( { 1 , . . . , S } ) 4 : √ ᾱ ∼ Uniform ( ls−1 , ls ) 5 : ∼ N ( 0 , I ) 6 : Take gradient descent step on ∇θ ∥∥ − θ ( √ᾱ y0 +√1− ᾱ , x , √ᾱ ) ∥∥1 7 : until converged Algorithm 2 Sampling . WaveGrad generates samples following a gradient-based sampler similar to Langevin dynamics . 1 : yN ∼ N ( 0 , I ) 2 : for n = N , . . . , 1 do 3 : z ∼ N ( 0 , I ) 4 : yn−1 = ( yn− 1−αn√1−ᾱn θ ( yn , x , √ ᾱn ) ) √ αn 5 : if n > 1 , yn−1 = yn−1 + σnz 6 : end for 7 : return y0 x ) where y0 is the waveform and x contains the conditioning features corresponding to y0 , such as linguistic features derived from the corresponding text , mel-spectrogram features extracted from y0 , or acoustic features predicted by a Tacotron-style text-to-speech synthesis model ( Shen et al. , 2018 ) : pθ ( y0 | x ) : = ∫ pθ ( y0 : N | x ) dy1 : N , ( 5 ) where y1 , . . . , yN is a series of latent variables , each of which are of the same dimension as the data y0 , and N is the number of latent variables ( iterations ) . The posterior q ( y1 : N | y0 ) is called the diffusion process ( or forward process ) , and is defined through the Markov chain : q ( y1 : N | y0 ) : = N∏ n=1 q ( yn | yn−1 ) , ( 6 ) where each iteration adds Gaussian noise : q ( yn | yn−1 ) : = N ( yn ; √ ( 1− βn ) yn−1 , βnI ) , ( 7 ) under some ( fixed constant ) noise schedule β1 , . . . , βN . We emphasize the property observed by Ho et al . ( 2020 ) , the diffusion process can be computed for any step n in a closed form : yn = √ ᾱn y0 + √ ( 1− ᾱn ) ( 8 ) where ∼ N ( 0 , I ) , αn : = 1− βn and ᾱn : = ∏n s=1 αs . The gradient of this noise distribution is ∇yn log q ( yn | y0 ) = − √ 1− ᾱn . ( 9 ) Ho et al . ( 2020 ) proposed to train on pairs ( y0 , yn ) , and to reparameterize the neural network to model θ . This objective resembles denoising score matching as in Equation 3 ( Vincent , 2011 ) : En , [ Cn ∥∥ θ ( √ᾱn y0 +√1− ᾱn , x , n ) − ∥∥22 ] , ( 10 ) where Cn is a constant related to βn . In practice Ho et al . ( 2020 ) found it beneficial to drop the Cn term , resulting in a weighted variational lower bound of the log-likelihood . Additionally in Ho et al . ( 2020 ) , θ conditions on the discrete index n , as we will discuss further below . 
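The closed-form property of Eq. (8) means any noisy version $y_n$ can be sampled directly from $y_0$ without simulating the chain, which is what makes training efficient. A small sketch follows, with an illustrative noise schedule of our choosing rather than the paper's:

```python
# Sketch of the closed-form diffusion jump of Eq. (8).
import torch

betas = torch.linspace(1e-4, 0.05, steps=50)   # illustrative schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)      # alpha_bar_n = prod alphas

def diffuse(y0, n):
    eps = torch.randn_like(y0)
    yn = alpha_bars[n].sqrt() * y0 + (1 - alpha_bars[n]).sqrt() * eps
    return yn, eps  # the pair (y_n, eps) is what training regresses on

yn, eps = diffuse(torch.randn(16000), n=25)
```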
We also found that substituting the original L2 distance metric with L1 offers better training stability .
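Putting the pieces together, here is a hedged sketch of one training step in the spirit of Algorithm 1, with continuous noise-level conditioning and the L1 objective noted above. `model`, the schedule tensor `l`, and the tensor shapes are our assumptions, not the released implementation.

```python
# Sketch of one WaveGrad training step (Algorithm 1) with continuous
# noise-level conditioning and the L1 variant of the objective.
import torch

def train_step(model, optimizer, y0, x, l):
    # y0: (B, L) waveforms; x: conditioning features; l: tensor of length
    # S + 1 holding the sqrt(alpha_bar) schedule endpoints, decreasing from
    # ~1 toward 0 (our assumed layout).
    B = y0.shape[0]
    S = len(l) - 1
    s = torch.randint(1, S + 1, (B,))
    # Continuous noise level: uniform between consecutive schedule points.
    lo, hi = l[s], l[s - 1]
    sqrt_alpha_bar = (lo + (hi - lo) * torch.rand(B)).view(-1, 1)
    eps = torch.randn_like(y0)
    y_noisy = sqrt_alpha_bar * y0 + (1 - sqrt_alpha_bar**2).sqrt() * eps
    loss = (model(y_noisy, x, sqrt_alpha_bar) - eps).abs().mean()  # L1
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```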
The work uses diffusion probabilistic models for conditional speech synthesis tasks, specifically to convert a mel-spectrogram to the raw audio waveform. Results from the proposed approach match the state-of-the-art WaveRNN model. The paper is very well-written and quite easy to follow. The study of the total number of diffusion steps, and of the two different ways (continuous and discrete) to feed it into the network, is very interesting, and quite relevant and important for speech synthesis tasks. Using this, the authors are able to find a 6-step inference procedure that yields very competitive performance to WaveRNN while still being computationally feasible.
HW-NAS-Bench: Hardware-Aware Neural Architecture Search Benchmark
1 INTRODUCTION . The recent performance breakthroughs of deep neural networks ( DNNs ) have attracted an explosion of research in designing efficient DNNs , aiming to bring powerful yet power-hungry DNNs into more resource-constrained daily life devices for enabling various DNN-powered intelligent functions ( Ross , 2020 ; Liu et al. , 2018b ; Shen et al. , 2020 ; You et al. , 2020a ) . Among them , HardWareaware Neural Architecture Search ( HW-NAS ) has emerged as one of the most promising techniques as it can automate the process of designing optimal DNN structures for the target applications , each of which often adopts a different hardware device and requires a different hardware-cost metric ( e.g. , prioritizes latency or energy ) . For example , HW-NAS in ( Wu et al. , 2019 ) develops a differentiable neural architecture search ( DNAS ) framework and discovers state-of-the-art ( SOTA ) DNNs balancing both accuracy and hardware efficiency , by incorporating a loss consisting of both the cross-entropy loss that leads to better accuracy and the latency loss that penalizes the network ’ s latency on a target device . Despite the promising performance achieved by SOTA HW-NAS , there exist paramount challenges that limit the development of HW-NAS innovations . First , HW-NAS requires the collection of hardware efficiency data corresponding to ( all ) the networks in the search space . To do so , current practice either pre-collects these data to construct a hardware-cost look-up table or adopts device-specific hardware-cost estimators/models , both of which can be time-consuming to obtain and impose a barrier-to-entry to non-hardware experts . This is because it requires knowledge about device-specific compilation and properly setting up the hardware measurement pipeline to collect hardware-cost data . Second , similar to generic NAS , it can be notoriously difficult to benchmark HW-NAS algorithms due to the required significant computational resources and the differences in their ( 1 ) hardware devices , which are specific for HW-NAS , ( 2 ) adopted search spaces , and ( 3 ) hyperparameters . Such a difficulty is even higher for HW-NAS considering the numerous choices of hardware devices , each of which can favor very different network structures even under the same target hardware efficiency , as discussed in ( Chu et al. , 2020 ) . While the number of floating-point operations ( FLOPs ) has been commonly used to estimate the hardware-cost , many works have pointed out that DNNs with fewer FLOPs are not necessarily faster or more efficient ( Wu et al. , 2019 ; 2018 ; Wang et al. , 2019b ) . For example , NasNet-A ( Zoph et al. , 2018 ) has a comparable complexity in terms of FLOPs as MobileNetV1 ( Howard et al. , 2017 ) , yet can have a larger latency than the latter due to NasNet-A ( Zoph et al. , 2018 ) ’ s adopted hardware-unfriendly structure . It is thus imperative to address the aforementioned challenges in order to make HW-NAS more accessible and reproducible to unfold HW-NAS ’ s full potential . Note that although pioneering NAS benchmark datasets ( Ying et al. , 2019 ; Dong & Yang , 2020 ; Klyuchnikov et al. , 2020 ; Siems et al. , 2020 ; Dong et al. , 2020 ) have made a significant step towards providing a unified benchmark dataset for generic NAS works , all of them either merely provide the latency on server-level GPUs ( e.g. , GTX 1080Ti ) or do not provide any hardware-cost data on real hardware , limiting their applicability to HW-NAS ( Wu et al. , 2019 ; Wan et al. 
, 2020 ; Cai et al. , 2018 ) which primarily targets commercial edge devices , FPGA , and ASIC . To this end , as shown in Figure 1 , we develop HW-NAS-Bench and make the following contributions in this paper : • We have developed HW-NAS-Bench , the first public dataset for HW-NAS research aiming to ( 1 ) democratize HW-NAS research to non-hardware experts and ( 2 ) facilitate a unified benchmark for HW-NAS to make HW-NAS research more reproducible and accessible , covering two SOTA NAS search spaces including NAS-Bench-201 and FBNet , with the former being one of the most popular NAS search spaces and the latter having been shown to be one of the most hardware friendly NAS search spaces . • We provide hardware-cost data collection pipelines for six commonly used hardware devices that fall into three categories ( i.e. , commercial edge devices , FPGA , and ASIC ) , in addition to the measured/estimated hardware-cost ( e.g. , energy cost and latency ) on these devices for all the networks in the search spaces of both NAS-Bench-201 and FBNet . • We conduct comprehensive analysis of the collected data in HW-NAS-Bench , such as studying the correlation between the collected hardware-cost and accuracy-cost data of all the networks on the six hardware devices , which provides insights to not only HW-NAS researchers but also DNN accelerator designers . Other researchers can extract useful insights from HW-NAS-Bench that have not been discussed in this work . • We demonstrate exemplary user cases to show : ( 1 ) how HW-NAS-Bench can be easily used by non-hardware experts to develop HW-NAS solutions by simply querying the collected data in our HW-NAS-Bench and ( 2 ) dedicated device-specific HW-NAS can indeed lead to optimal accuracy-cost trade-offs , demonstrating the great necessity of HW-NAS benchmarks like our proposed HW-NAS-Bench . 2 RELATED WORKS . 2.1 HARDWARE-AWARE NEURAL ARCHITECTURE SEARCH . Driven by the growing demand for efficient DNN solutions , HW-NAS has been proposed to automate the search for efficient DNN structures under the target efficiency constraints ( Fu et al. , 2020b ; a ; Zhang et al. , 2020 ) . For example , ( Tan et al. , 2019 ; Howard et al. , 2019 ; Tan & Le , 2019 ) adopt reinforcement learning based NAS with a multi-objective reward consisting of both the task performance and efficiency , achieving promising results yet suffering from prohibitive search time/cost . In parallel , ( Wu et al. , 2019 ; Wan et al. , 2020 ; Cai et al. , 2018 ; Stamoulis et al. , 2019 ) explore the design space in a differentiable manner following ( Liu et al. , 2018a ) and significantly improve the search efficiency . The promising performance of HW-NAS has motivated a tremendous interest in applying it to more diverse applications ( Fu et al. , 2020a ; Wang et al. , 2020a ; Marchisio et al. , 2020 ) paired with target hardware devices , e.g. , Edge TPU ( Xiong et al. , 2020 ) and NPU ( Lee et al. , 2020 ) , in addition to the widely explored mobile phones . As discussed in ( Chu et al. , 2020 ) , different hardware devices can favor very different network structures under the same hardware-cost metric , and the optimal network structure can differ significantly when considering different application-driven hardware-cost metrics on the same hardware device . As such , it would ideally lead to the optimal accuracy-cost trade-offs if the HW-NAS design is dedicated for the target device and hardware-cost metrics . 
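Concretely, such multi-objective formulations reduce to a scalar training loss coupling task performance with a differentiable hardware-cost term. The additive form below is a minimal sketch of this general idea, not any specific paper's exact loss; the expected-latency input is assumed to come from a look-up-table-based estimator.

```python
# Minimal sketch of a hardware-aware NAS objective: task loss plus a
# latency penalty. The additive weighting and all names are our
# illustration; HW-NAS methods differ in the exact coupling they use.
import torch
import torch.nn.functional as F

def hw_aware_loss(logits, targets, expected_latency_ms, lam=0.1):
    # expected_latency_ms: a differentiable expectation over architecture
    # parameters, e.g., a softmax-weighted sum of per-op latency entries
    # taken from a pre-collected look-up table.
    return F.cross_entropy(logits, targets) + lam * expected_latency_ms
```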
However , this requires a good understanding of both device-specific compilation and hardware-cost characterization , imposing a barrier-to-entry to non-hardware experts , such as many NAS researchers , and thus limits the development of optimal HW-NAS results for numerous applications , each of which often prioritizes a different application-driven hardware-cost metric and adopts a different type of hardware devices . As such , our proposed HW-NAS-Bench will make HW-NAS more friendly to NAS researchers , who are often non-hardware experts , as it consists of comprehensive hardware-cost data in a wide range of hardware devices for all the networks in two commonly used SOTA NAS search spaces , expediting the development of HW-NAS innovations . 2.2 NEURAL ARCHITECTURE SEARCH BENCHMARKS . The importance and difficulty of NAS reproducibility and benchmarking has recently gained increasing attention . Pioneering efforts include ( Ying et al. , 2019 ; Dong & Yang , 2020 ; Klyuchnikov et al. , 2020 ; Siems et al. , 2020 ; Dong et al. , 2020 ) . Specifically , NAS-Bench-101 ( Ying et al. , 2019 ) presents the first large-scale and open-source architecture dataset for NAS , in which the ground truth test accuracy of all the architectures ( i.e. , 423k ) in its search space on CIFAR-10 ( Krizhevsky et al. , 2009 ) are provided . Later , NAS-Bench-201 ( Dong & Yang , 2020 ) further extends NAS-Bench-101 to support more NAS algorithm categories ( e.g. , differentiable algorithms ) and more datasets ( e.g. , CIFAR-100 ( Krizhevsky et al. , 2009 ) and ImageNet16-120 ( Chrabaszcz et al. , 2017 ) ) . Most recently , NAS-Bench-301 ( Siems et al. , 2020 ) and NATS-Bench ( Dong et al. , 2020 ) are developed to support benchmarking NAS algorithms on larger search spaces . However , all of these works either merely provide latency on the server-level GPU ( e.g. , GTX 1080Ti ) or do not consider any hardware-cost data on real hardware at all , limiting their applicability to HW-NAS ( Wu et al. , 2019 ; Wan et al. , 2020 ; Cai et al. , 2018 ) that primarily targets commercial edge devices , FPGA ( Wang et al. , 2020b ) , and ASIC ( Chen et al. , 2016 ; Lin et al. , 2017 ; 2016 ; Zhao et al. , 2020a ) . This has motivated us to develop the proposed HW-NAS-Bench , which aims to make HW-NAS more accessible especially for non-hardware experts and reproducible . A concurrent work ( published after our submission ) is BRP-NAS ( Chau et al. , 2020 ) , which presents a benchmark for the latency of all the networks in NAS-Bench-201 ( Dong & Yang , 2020 ) search space . In comparison , our proposed HW-NAS-Bench includes ( 1 ) more device categories ( i.e. , not only commercial devices , but also FPGA ( Wang et al. , 2020b ) and ASIC ( Chen et al. , 2016 ) ) , ( 2 ) more hardware-cost metrics ( i.e. , not only latency , but also energy ) , and ( 3 ) more search spaces ( i.e. , not only NAS-Bench-201 ( Dong & Yang , 2020 ) but also FBNet ( Wu et al. , 2019 ) ) . Additionally , we ( 4 ) add a detailed description of the pipeline to collect the hardware-cost of various devices and ( 5 ) analyze the necessity of device-specific HW-NAS solutions based on our collected data . 3 THE PROPOSED HW-NAS-BENCH FRAMEWORK . 3.1 HW-NAS-BENCH ’ S CONSIDERED SEARCH SPACES . To ensure a wide applicability , our HW-NAS-Bench considers two representative NAS search spaces : ( 1 ) NAS-Bench-201 ’ s cell-based search space and ( 2 ) FBNet search space . 
Both contribute valuable aspects to ensure our goal of constructing a comprehensive HW-NAS benchmark. Specifically, the former enables HW-NAS-Bench to naturally integrate the ground-truth accuracy data of all of NAS-Bench-201's considered network architectures, while the latter ensures that HW-NAS-Bench includes the most commonly recognized hardware-friendly search space.

NAS-Bench-201 Search Space. Inspired by the search space used in the most popular cell-based NAS, NAS-Bench-201 adopts a fixed cell search space, where each architecture consists of a predefined skeleton with a stack of the searched cell that is represented as a densely connected directed acyclic graph (DAG). Specifically, it considers 4 nodes and 5 representative operation candidates for the operation set, and varies the feature map sizes and the dimensions of the final fully-connected layer to handle its three considered datasets (i.e., CIFAR-10, CIFAR-100 (Krizhevsky et al., 2009), and ImageNet16-120 (Chrabaszcz et al., 2017)), leading to a total of $3 \times 5^6 = 46875$ architectures. Training logs and accuracy are provided for each architecture. However, NAS-Bench-201 cannot be directly used for HW-NAS, as it only includes theoretical cost metrics (i.e., FLOPs and the number of parameters (#Params)) and the latency on a server-level GPU (i.e., GTX 1080Ti). HW-NAS-Bench enhances NAS-Bench-201 by providing all 46875 architectures' measured/estimated hardware-cost on six devices, which are primarily targeted by SOTA HW-NAS works.

FBNet Search Space. FBNet (Wu et al., 2019) constructs a layer-wise search space with a fixed macro-architecture, which defines the number of layers and the input/output dimensions of each layer and fixes the first and last three layers, with the remaining layers to be searched. In this way, the network architectures in the FBNet (Wu et al., 2019) search space have more regular structures than those in NAS-Bench-201, and have been shown to be more hardware friendly (Fu et al., 2020a; Ma et al., 2018). The 9 considered pre-defined cell candidates and 22 unique positions lead to a total of $9^{22} \approx 10^{21}$ unique architectures. While HW-NAS researchers can develop their search algorithms on top of the FBNet (Wu et al., 2019) search space, tedious efforts are required to build the hardware-cost look-up tables or models for each target device. HW-NAS-Bench provides the measured/estimated hardware-cost on six hardware devices for all the $10^{21}$ architectures in the FBNet search space, aiming to make HW-NAS research more friendly to non-hardware experts and easier to benchmark.
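As an illustration of the intended query-based workflow, the sketch below selects the most accurate architecture under a latency budget. The dictionary layout, metric names, and function are hypothetical stand-ins, not the released HW-NAS-Bench API.

```python
# Hypothetical sketch of querying pre-collected hardware-cost data together
# with accuracy, the workflow HW-NAS-Bench is meant to enable.
# hw_cost[arch_id][device] -> {"latency_ms": ..., "energy_mj": ...}
# accuracy[arch_id]        -> test accuracy on the target dataset

def best_under_latency(hw_cost, accuracy, device, budget_ms):
    """Most accurate architecture whose measured latency meets the budget."""
    feasible = [a for a in accuracy
                if hw_cost[a][device]["latency_ms"] <= budget_ms]
    return max(feasible, key=accuracy.get, default=None)

# Example with toy entries:
hw_cost = {"arch0": {"edge_gpu": {"latency_ms": 4.1, "energy_mj": 2.0}},
           "arch1": {"edge_gpu": {"latency_ms": 9.7, "energy_mj": 3.5}}}
accuracy = {"arch0": 0.71, "arch1": 0.74}
print(best_under_latency(hw_cost, accuracy, "edge_gpu", budget_ms=5.0))  # arch0
```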
The paper presents a benchmark / dataset, HW-NAS-Bench, for evaluating various neural architecture search algorithms. The benchmark is based on extensive measurements on real hardware. An important goal with the proposal is to support neural architecture searches for non-hardware experts. Further, the paper provides a good overview of related work in the domain.
Data-driven Learning of Geometric Scattering Networks
1 INTRODUCTION . Geometric deep learning has recently emerged as an increasingly prominent branch of machine learning in general , and deep learning in particular ( Bronstein et al. , 2017 ) . It is based on the observation that many of the impressive achievements of neural networks come in applications where the data has an intrinsic geometric structure which can be used to inform network design and training procedures . For example , in computer vision , convolutional neural networks use the spatial organization of pixels to define convolutional filters that hierarchically aggregate local information at multiple scales that in turn encode shape and texture information in data and task-driven representations . Similarly , in time-series analysis , recurrent neural networks leverage memory mechanisms based on the temporal organization of input data to collect multiresolution information from local subsequences , which can be interpreted geometrically via tools from dynamical systems and spectral analysis . While these examples only leverage Euclidean spatiotemporal structure in data , they exemplify the potential benefits of incorporating information about intrinsic data geometry in neural network design and processing . Indeed , recent advances have further generalized the utilization of geometric information in neural networks design to consider non-Euclidean structures , with particular interest in graphs that represent data geometry , either directly given as input or constructed as an approximation of a data manifold . At the core of geometric deep learning is the use of graph neural networks ( GNNs ) in general , and graph convolutional networks ( GCNs ) in particular , which ensure neuron activations follow the geometric organization of input data by propagating information across graph neighborhoods ( Bruna et al. , 2014 ; Defferrard et al. , 2016 ; Kipf & Welling , 2016 ; Hamilton et al. , 2017 ; Xu et al. , 2019 ; Abu-El-Haija et al. , 2019 ) . However , recent work has shown the difficulty in generalizing these methods to more complex structures , identifying common problems and phrasing them in terms of oversmoothing ( Li et al. , 2018 ) , oversquashing ( Alon & Yahav , 2020 ) or under-reaching ( Barceló et al. , 2020 ) . Using graph signal processing terminology from Kipf & Welling ( 2016 ) , these issues can be partly attributed to the limited construction of convolutional filters in many commonly used GCN architectures . Inspired by the filters learned in convolutional neural networks , GCNs consider node features as graph signals and aim to aggregate information from neighboring nodes . For example , Kipf & Welling ( 2016 ) presented a typical implementation of a GCN with a cascade of averaging ( essentially low pass ) filters . We note that more general variations of GCN architectures exist ( Defferrard et al. , 2016 ; Hamilton et al. , 2017 ; Xu et al. , 2019 ) , which are capable of representing other filters , but as investigated in Alon & Yahav ( 2020 ) , they too often have difficulty in learning long range connections . Recently , an alternative approach was presented to provide deep geometric representation learning by generalizing Mallat ’ s scattering transform ( Mallat , 2012 ) , originally proposed to provide a mathematical framework for understanding convolutional neural networks , to graphs ( Gao et al. , 2019 ; Gama et al. , 2019a ; Zou & Lerman , 2019 ) and manifolds ( Perlmutter et al. , 2018 ) . 
Similar to traditional scattering , which can be seen as a convolutional network with nonlearned wavelet filters , geometric scattering is defined as a GNN with handcrafted graph filters , typically constructed as diffusion wavelets over the input graph ( Coifman & Maggioni , 2006 ) , which are then cascaded with pointwise absolute-value nonlinearities . This wavelet cascade results in permutation equivariant node features that are typically aggregated via statistical moments over the graph nodes , as explained in detail in Sec . 2 , to provide a permutation invariant graph-level representation . The efficacy of geometric scattering features in graph processing tasks was demonstrated in Gao et al . ( 2019 ) , with both supervised learning and data exploration applications . Moreover , their handcrafted design enables rigorous study of their properties , such as stability to deformations and perturbations , and provides a clear understanding of the information extracted by them , which by design ( e.g. , the cascaded band-pass filters ) goes beyond low frequencies to consider richer notions of regularity ( Gama et al. , 2019b ; Perlmutter et al. , 2019 ) . However , while graph scattering transforms provide effective universal feature extractors , their rigid handcrafted design does not allow for the automatic task-driven representation learning that naturally arises in traditional GNNs . To address this deficiency , recent work has proposed a hybrid scattering-GCN ( Min et al. , 2020 ) model for obtaining node-level representations , which ensembles a GCN model with a fixed scattering feature extractor . In Min et al . ( 2020 ) , integrating channels from both architectures alleviates the well-known oversmoothing problem and outperforms popular GNNs on node classification tasks . Here , we focus on improving the geometric scattering transform by learning , in particular its scales . We focus on whole-graph representations with an emphasis on biochemical molecular graphs , where relatively large diameters and non-planar structures usually limit the effectiveness of traditional GNNs . Instead of the ensemble approach of Min et al . ( 2020 ) , we propose a native neural network architecture for learned geometric scattering ( LEGS ) , which directly modifies the scattering architecture from Gao et al . ( 2019 ) ; Perlmutter et al . ( 2019 ) , via relaxations described in Sec . 3 , to allow a task-driven adaptation of its wavelet configuration via backpropagation implemented in Sec . 4 . We note that other recent graph spectrum-based methods approach the learning of long range connections by approximating the spectrum of the graph with the Lancoz algorithm Liao et al . ( 2019 ) , or learning in block Krylov subspaces Luan et al . ( 2019 ) . Such methods are complementary to the work presented here , in that their spectral approximation can also be applied in the computation of geometric scattering when considering very long range scales ( e.g. , via spectral formulation of graph wavelet filters ) . However , we find that such approximations are not necessary in the datasets considered here and in other recent work focusing on whole-graph tasks , where direct computation of polynomials of the Laplacian is sufficient . The resulting learnable geometric scattering network balances the mathematical properties inherited from the scattering transform ( as shown in Sec . 3 ) with the flexibility enabled by adaptive representation learning . 
The benefits of our construction over standard GNNs, as well as over pure geometric scattering, are discussed and demonstrated on graph classification and regression tasks in Sec. 5. In particular, we find that our network maintains the robustness to small training sets present in graph scattering while improving classification on biological graph classification and regression tasks, and we show that in tasks where the graphs have a large diameter relative to their size, learnable scattering features improve performance over competing methods.

2 PRELIMINARIES: GEOMETRIC SCATTERING FEATURES.

Let $G = (V, E, w)$ be a weighted graph with $V := \{v_1, \dots, v_n\}$ the set of nodes, $E \subset \{\{v_i, v_j\} \in V \times V, i \neq j\}$ the set of (undirected) edges and $w : E \to (0, \infty)$ assigning (positive) edge weights to the graph edges. Note that $w$ can equivalently be considered as a function of $V \times V$, where we set the weights of non-adjacent node pairs to zero. We define a graph signal as a function $x : V \to \mathbb{R}$ on the nodes of $G$ and aggregate them in a signal vector $x \in \mathbb{R}^n$ with the $i$th entry being $x[v_i]$. We define the weighted adjacency matrix $W \in \mathbb{R}^{n \times n}$ of the graph $G$ as
$$W[v_i, v_j] := \begin{cases} w(v_i, v_j) & \text{if } \{v_i, v_j\} \in E \\ 0 & \text{otherwise,} \end{cases}$$
and the degree matrix $D \in \mathbb{R}^{n \times n}$ of $G$ as $D := \mathrm{diag}(d_1, \dots, d_n)$ with $d_i := \deg(v_i) := \sum_{j=1}^n W[v_i, v_j]$ being the degree of the node $v_i$.

The geometric scattering transform (Gao et al., 2019) relies on a cascade of graph filters constructed from a row stochastic diffusion matrix $P := \frac{1}{2}(I_n + W D^{-1})$, which corresponds to the transition probabilities of a lazy random walk Markov process. The laziness of the process signifies that at each step it has equal probability of either staying at the current node or transitioning to a neighbor, where transition probabilities in the latter case are determined by (normalized) edge weights. Scattering filters are then defined via the graph-wavelet matrices $\Psi_j \in \mathbb{R}^{n \times n}$ of scale $j \in \mathbb{N}_0$, as
$$\Psi_0 := I_n - P, \qquad \Psi_j := P^{2^{j-1}} - P^{2^j} = P^{2^{j-1}}\left(I_n - P^{2^{j-1}}\right), \quad j \geq 1. \quad (1)$$
These diffusion wavelet operators partition the frequency spectrum into dyadic frequency bands, which are then organized into a full wavelet filter bank $\mathcal{W}_J := \{\Psi_j, \Phi_J\}_{0 \leq j \leq J}$, where $\Phi_J := P^{2^J}$ is a pure low-pass filter, similar to the one used in GCNs. It is easy to verify that the resulting wavelet transform is invertible, since a simple sum of the filter matrices in $\mathcal{W}_J$ yields the identity. Moreover, as discussed in Perlmutter et al. (2019), this filter bank forms a nonexpansive frame, which provides energy preservation guarantees as well as stability to perturbations, and can be generalized to a wider family of constructions that encompasses the variations of scattering transforms on graphs from Gama et al. (2019a;b) and Zou & Lerman (2019).

Given the wavelet filter bank $\mathcal{W}_J$, node-level scattering features are computed by stacking cascades of bandpass filters and element-wise absolute-value nonlinearities to form
$$U_p x := \Psi_{j_m} \left| \Psi_{j_{m-1}} \cdots \left| \Psi_{j_2} \left| \Psi_{j_1} x \right| \right| \cdots \right|, \quad (2)$$
indexed (or parametrized) by the scattering path $p := (j_1, \dots, j_m) \in \bigcup_{m \in \mathbb{N}} \mathbb{N}_0^m$ that determines the filter scales captured by each scattering coefficient. Then, a whole-graph scattering representation is obtained by aggregating together node-level features via statistical moments over the nodes of the graph (Gao et al., 2019). This construction yields the geometric scattering features
$$S_{p,q}\, x := \sum_{i=1}^n \left| U_p x[v_i] \right|^q, \quad (3)$$
indexed by the scattering path $p$ and moment order $q$. Finally, we note that it can be shown that the graph-level scattering transform $S_{p,q}$ guarantees node-permutation invariance, while $U_p$ is permutation equivariant (Perlmutter et al., 2019; Gao et al., 2019).

3 RELAXED GEOMETRIC SCATTERING CONSTRUCTION TO ALLOW TRAINING.

The geometric scattering construction, described in Sec. 2, can be seen as a particular GNN with handcrafted layers rather than learned ones. This provides a solid mathematical framework for understanding the encoding of geometric information in GNNs, as shown in Perlmutter et al. (2019), while also providing effective unsupervised graph representation learning for data exploration, which has some advantages even in supervised learning tasks, as shown in Gao et al. (2019). While the handcrafted design in Perlmutter et al. (2019); Gao et al. (2019) is not a priori amenable to the task-driven tuning provided by end-to-end GNN training, we note that the cascade in Eq. 3 does conform to a neural network architecture suitable for backpropagation. Therefore, in this section, we show how and under what conditions a relaxation of the laziness of the random walk and of the selection of the scales preserves some of the useful mathematical properties established in Perlmutter et al. (2019). We then establish in Section 5 the empirical benefits of learning the diffusion scales over a purely handcrafted design.

We first note that the construction of the diffusion matrix $P$ that forms the lowpass filter used in the fixed scattering construction can be relaxed to encode adaptive laziness by setting $P_\alpha := \alpha I_n + (1 - \alpha) W D^{-1}$, where $\alpha \in [1/2, 1)$ controls the reluctance of the random walk to transition from one node to another. $\alpha = 1/2$ gives an equal probability to stay at the same node as to transition to one of its neighbors. At this point, we note that one difference between the diffusion lowpass filter here and the one typically used in GCN and its variations is the symmetrization applied in Kipf & Welling (2016). However, Perlmutter et al. (2019) established that for the original construction this is only a technical difference, since $P$ can be regarded as self-adjoint under an appropriate measure which encodes degree variations in the graph. This is then used to generate a Hilbert space $L^2(G, D^{-1/2})$ of graph signals with inner product $\langle x, y \rangle_{D^{-1/2}} := \langle D^{-1/2} x, D^{-1/2} y \rangle$. The following lemma shows that a similar property is retained for our adaptive lowpass filter $P_\alpha$.

Lemma 1. The matrix $P_\alpha$ is self-adjoint on the Hilbert space $L^2(G, D^{-1/2})$ from Perlmutter et al. (2019).

We note that the self-adjointness shown here is interesting, as it links models that use symmetric and asymmetric versions of the Laplacian or adjacency matrix. Namely, Lemma 1 shows that the diffusion matrix $P$ (which is column normalized but not row normalized) is self-adjoint as an operator, and can thus be considered as "symmetric" in a suitable inner product space, thus establishing a theoretical link between these design choices.

As a second relaxation, we propose to replace the handcrafted dyadic scales in Eq. 1 with an adaptive monotonic sequence of integer diffusion time scales $0 < t_1 < \dots < t_J$, which can be selected or tuned via training. Then, an adaptive filter bank is constructed as $\mathcal{W}'_J := \{\Psi'_j, \Phi'_J\}_{j=0}^{J-1}$, with
$$\Phi'_J := P_\alpha^{t_J}, \qquad \Psi'_0 := I_n - P_\alpha^{t_1}, \qquad \Psi'_j := P_\alpha^{t_j} - P_\alpha^{t_{j+1}}, \quad 1 \leq j \leq J - 1. \quad (4)$$
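To make the relaxed construction concrete, the following numpy sketch builds the filter bank $\mathcal{W}'_J$ of Eq. (4) from an adjacency matrix and computes first-order scattering moments as in Eqs. (2)-(3). In LEGS, $\alpha$ and the scales $t_1 < \dots < t_J$ are tuned by backpropagation; here they are fixed inputs for illustration, and all function names are ours.

```python
# Numpy sketch of the relaxed filter bank W'_J of Eq. (4): an
# adaptive-laziness diffusion matrix P_alpha and a monotone scale sequence.
import numpy as np

def relaxed_filter_bank(W, alpha=0.5, scales=(1, 2, 4, 8)):
    n = W.shape[0]
    D_inv = np.diag(1.0 / W.sum(axis=1))
    P_alpha = alpha * np.eye(n) + (1 - alpha) * W @ D_inv
    Pt = {t: np.linalg.matrix_power(P_alpha, t) for t in scales}
    t = list(scales)
    filters = [np.eye(n) - Pt[t[0]]]                                # Psi'_0
    filters += [Pt[t[j]] - Pt[t[j + 1]] for j in range(len(t) - 1)] # Psi'_j
    filters.append(Pt[t[-1]])                                       # Phi'_J (lowpass)
    return filters

def scattering_moments(filters, x, q_max=4):
    # First-order features S'_{p,q} = sum_i |Psi'_j x[v_i]|^q (Eqs. 2-3);
    # higher-order paths would cascade |.| through several wavelets.
    return np.array([np.sum(np.abs(F @ x) ** q)
                     for F in filters[:-1] for q in range(1, q_max + 1)])

W = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)   # path graph, 5 nodes
x = np.random.default_rng(0).standard_normal(5)
print(scattering_moments(relaxed_filter_bank(W, alpha=0.6), x))
```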
The following theorem shows that for any selection of scales, the relaxed construction of $\mathcal{W}'_J$ yields a nonexpansive frame, similar to the result from Perlmutter et al. (2019) shown for the original handcrafted construction.

Theorem 1. There exists a constant $C > 0$ that only depends on $t_1$ and $t_J$ such that for all $x \in L^2(G, D^{-1/2})$,
$$C \|x\|^2_{D^{-1/2}} \leq \|\Phi'_J x\|^2_{D^{-1/2}} + \sum_{j=0}^{J} \|\Psi'_j x\|^2_{D^{-1/2}} \leq \|x\|^2_{D^{-1/2}},$$
where the norm considered here is the one induced by the space $L^2(G, D^{-1/2})$.

Intuitively, the upper (i.e., nonexpansive) frame bound implies stability in the sense that small perturbations in the input graph signal will only result in small perturbations in the representation extracted by the constructed filter bank. Further, the lower frame bound ensures a certain energy preservation by the constructed filter bank, thus indicating that the nonexpansiveness is not implemented in a trivial fashion (e.g., by constant features independent of the input signal).

In the next section we leverage the two relaxations described here to design a neural network architecture for learning the configuration $\alpha, t_1, \dots, t_J$ of this relaxed construction via backpropagation through the resulting scattering filter cascade. The following theorem establishes that any such configuration, extracted from $\mathcal{W}'_J$ via Eqs. 2-3, is permutation equivariant at the node level and permutation invariant at the graph level. This guarantees that the extracted (in this case learned) features indeed encode intrinsic graph geometry rather than a priori indexation.

Theorem 2. Let $U'_p$ and $S'_{p,q}$ be defined as in Eq. 2 and 3 (correspondingly), with the filters from $\mathcal{W}'_J$ with an arbitrary configuration $0 < \alpha < 1$, $0 < t_1 < \dots < t_J$. Then, for any permutation $\Pi$ over the nodes of $G$, and any graph signal $x \in L^2(G, D^{-1/2})$,
$$U'_p \Pi x = \Pi\, U'_p x \quad \text{and} \quad S'_{p,q}\, \Pi x = S'_{p,q}\, x, \qquad p \in \bigcup_{m \in \mathbb{N}} \mathbb{N}_0^m, \; q \in \mathbb{N},$$
where geometric scattering implicitly considers here the node ordering supporting its input signal.

We note that the results in Lemma 1 and Theorems 1-2, as well as their proofs, closely follow the theoretical framework proposed by Perlmutter et al. (2019). We carefully account here for the relaxed learned configuration, which replaces the originally handcrafted configuration there. For completeness, the adjusted proofs appear in Sec. A of the Appendix.
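Theorem 2 can also be checked numerically: relabeling the nodes of the graph and the signal together leaves the graph-level features unchanged. A quick sanity check, reusing `relaxed_filter_bank`, `scattering_moments`, `W`, and `x` from the sketch above:

```python
# Numerical check of the permutation invariance in Theorem 2.
perm = np.random.default_rng(1).permutation(5)
Pi = np.eye(5)[perm]                      # permutation matrix
W_p, x_p = Pi @ W @ Pi.T, Pi @ x          # relabel nodes and signal together
f1 = scattering_moments(relaxed_filter_bank(W, 0.6), x)
f2 = scattering_moments(relaxed_filter_bank(W_p, 0.6), x_p)
assert np.allclose(f1, f2)                # graph-level features are invariant
```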
This paper proposes a novel graph neural network-based architecture. Building upon the theoretical success of graph scattering transforms, the authors propose to learn some aspects of it providing them with more flexibility to adapt to data (recall that graph scattering transforms are built on pre-designed graph wavelet filter banks and do not learn from data). By dropping the dyadic distribution of frequencies within the wavelet bank, the proposed architecture actually learns a more suitable frequency separation among the different wavelets.
DiffWave: A Versatile Diffusion Model for Audio Synthesis
1 INTRODUCTION . Deep generative models have produced high-fidelity raw audio in speech synthesis and music generation . In previous work , likelihood-based models , including autoregressive models ( van den Oord et al. , 2016 ; Kalchbrenner et al. , 2018 ; Mehri et al. , 2017 ) and flow-based models ( Prenger et al. , 2019 ; Ping et al. , 2020 ; Kim et al. , 2019 ) , have predominated in audio synthesis because of the simple training objective and superior ability of modeling the fine details of waveform in real data . There are other waveform models , which often require auxiliary losses for training , such as flow-based models trained by distillation ( van den Oord et al. , 2018 ; Ping et al. , 2019 ) , variational auto-encoder ( VAE ) based model ( Peng et al. , 2020 ) , and generative adversarial network ( GAN ) based models ( Kumar et al. , 2019 ; Bińkowski et al. , 2020 ; Yamamoto et al. , 2020 ) . Most of previous waveform models focus on audio synthesis with informative local conditioner ( e.g. , mel spectrogram or aligned linguistic features ) , with only a few exceptions for unconditional generation ( Mehri et al. , 2017 ; Donahue et al. , 2019 ) . It has been noticed that autoregressive models ( e.g. , WaveNet ) tend to generate made-up word-like sounds ( van den Oord et al. , 2016 ) , or inferior samples ( Donahue et al. , 2019 ) under unconditional settings . This is because very long sequences need to be generated ( e.g. , 16,000 time-steps for one second speech ) without any conditional information . Diffusion probabilistic models ( diffusion models for brevity ) are a class of promising generative models , which use a Markov chain to gradually convert a simple distribution ( e.g. , isotropic Gaussian ) into complicated data distribution ( Sohl-Dickstein et al. , 2015 ; Goyal et al. , 2017 ; Ho et al. , 2020 ) . Although the data likelihood is intractable , diffusion models can be efficiently trained by optimizing the variational lower bound ( ELBO ) . Most recently , a certain parameterization has been shown successful in image synthesis ( Ho et al. , 2020 ) , which is connected with denoising score matching ( Song ∗Contributed to the work during an internship at Baidu Research , USA . 1Audio samples are in : https : //diffwave-demo.github.io/ & Ermon , 2019 ) . Diffusion models can use a diffusion ( noise-adding ) process without learnable parameters to obtain the “ whitened ” latents from training data . Therefore , no additional neural networks are required for training in contrast to other models ( e.g. , the encoder in VAE ( Kingma & Welling , 2014 ) or the discriminator in GAN ( Goodfellow et al. , 2014 ) ) . This avoids the challenging “ posterior collapse ” or “ mode collapse ” issues stemming from the joint training of two networks , and hence is valuable for high-fidelity audio synthesis . In this work , we propose DiffWave , a versatile diffusion probabilistic model for raw audio synthesis . DiffWave has several advantages over previous work : i ) It is non-autoregressive thus can synthesize high-dimensional waveform in parallel . ii ) It is flexible as it does not impose any architectural constraints in contrast to flow-based models , which need to keep the bijection between latents and data ( e.g. , see more analysis in Ping et al . ( 2020 ) ) . This leads to small-footprint neural vocoders that still generate high-fidelity speech . iii ) It uses a single ELBO-based training objective without any auxiliary losses ( e.g. 
, spectrogram-based losses ) for high-fidelity synthesis . iv ) It is a versatile model that produces high-quality audio signals for both conditional and unconditional waveform generation . Specifically , we make the following contributions : 1 . DiffWave uses a feed-forward and bidirectional dilated convolution architecture motivated by WaveNet ( van den Oord et al. , 2016 ) . It matches the strong WaveNet vocoder in terms of speech quality ( MOS : 4.44 vs. 4.43 ) , while synthesizing orders of magnitude faster as it only requires a few sequential steps ( e.g. , 6 ) for generating very long waveforms . 2 . Our small DiffWave has 2.64M parameters and synthesizes 22.05 kHz high-fidelity speech ( MOS : 4.37 ) more than 5× faster than real-time on a V100 GPU without engineered kernels . Although it is still slower than the state-of-the-art flow-based models ( Ping et al. , 2020 ; Prenger et al. , 2019 ) , it has much smaller footprint . We expect further speed-up by optimizing its inference mechanism in the future . 3 . DiffWave significantly outperforms WaveGAN ( Donahue et al. , 2019 ) and WaveNet in the challenging unconditional and class-conditional waveform generation tasks in terms of audio quality and sample diversity measured by several automatic and human evaluations . We organize the rest of the paper as follows . We present the diffusion models in Section 2 , and introduce DiffWave architecture in Section 3 . Section 4 discusses related work . We report experimental results in Section 5 and conclude the paper in Section 6 . 2 DIFFUSION PROBABILISTIC MODELS . We define qdata ( x0 ) as the data distribution on RL , where L is the data dimension . Let xt ∈ RL for t = 0 , 1 , · · · , T be a sequence of variables with the same dimension , where t is the index for diffusion steps . Then , a diffusion model of T steps is composed of two processes : the diffusion process , and the reverse process ( Sohl-Dickstein et al. , 2015 ) . Both of them are illustrated in Figure 1 . Algorithm 1 Training for i = 1 , 2 , · · · , Niter do Sample x0 ∼ qdata , ∼ N ( 0 , I ) , and t ∼ Uniform ( { 1 , · · · , T } ) Take gradient step on ∇θ‖ − θ ( √ ᾱtx0 + √ 1− ᾱt , t ) ‖22 according to Eq . ( 7 ) end for Algorithm 2 Sampling Sample xT ∼ platent = N ( 0 , I ) for t = T , T − 1 , · · · , 1 do Compute µθ ( xt , t ) and σθ ( xt , t ) using Eq . ( 5 ) Sample xt−1 ∼ pθ ( xt−1|xt ) = N ( xt−1 ; µθ ( xt , t ) , σθ ( xt , t ) 2I ) end for return x0 The diffusion process is defined by a fixed Markov chain from data x0 to the latent variable xT : q ( x1 , · · · , xT |x0 ) = T∏ t=1 q ( xt|xt−1 ) , ( 1 ) where each of q ( xt|xt−1 ) is fixed to N ( xt ; √ 1− βtxt−1 , βtI ) for a small positive constant βt . The function of q ( xt|xt−1 ) is to add small Gaussian noise to the distribution of xt−1 . The whole process gradually converts data x0 to whitened latents xT according to a variance schedule β1 , · · · , βT . 2 The reverse process is defined by a Markov chain from xT to x0 parameterized by θ : platent ( xT ) = N ( 0 , I ) , and pθ ( x0 , · · · , xT−1|xT ) = T∏ t=1 pθ ( xt−1|xt ) , ( 2 ) where platent ( xT ) is isotropic Gaussian , and the transition probability pθ ( xt−1|xt ) is parameterized asN ( xt−1 ; µθ ( xt , t ) , σθ ( xt , t ) 2I ) with shared parameter θ . Note that both µθ and σθ take two inputs : the diffusion-step t ∈ N , and variable xt ∈ RL . µθ outputs an L-dimensional vector as the mean , and σθ outputs a real number as the standard deviation . 
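Both processes are cheap to instantiate. As a quick illustration of the forward direction, the fixed diffusion chain of Eq. (1) gradually whitens the data into an approximately isotropic Gaussian; the schedule values below are illustrative only.

```python
# Sketch of the fixed forward (diffusion) process: repeatedly adding
# Gaussian noise converts data x_0 into approximately N(0, I).
import torch

def forward_chain(x0, betas):
    x = x0
    for beta in betas:
        # q(x_t | x_{t-1}) = N(sqrt(1 - beta_t) x_{t-1}, beta_t I)
        x = (1 - beta).sqrt() * x + beta.sqrt() * torch.randn_like(x)
    return x

betas = torch.linspace(1e-4, 0.05, 200)            # illustrative schedule
xT = forward_chain(torch.randn(16000) * 3 + 1, betas)
print(xT.mean().item(), xT.std().item())           # ~0 and ~1 for large T
```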
The goal of pθ ( xt−1|xt ) is to eliminate the Gaussian noise ( i.e . denoise ) added in the diffusion process . Sampling : Given the reverse process , the generative procedure is to first sample an xT ∼ N ( 0 , I ) , and then sample xt−1 ∼ pθ ( xt−1|xt ) for t = T , T − 1 , · · · , 1 . The output x0 is the sampled data . Training : The likelihood pθ ( x0 ) = ∫ pθ ( x0 , · · · , xT−1|xT ) · platent ( xT ) dx1 : T is intractable to calculate in general . The model is thus trained by maximizing its variational lower bound ( ELBO ) : Eqdata ( x0 ) log pθ ( x0 ) = Eqdata ( x0 ) logEq ( x1 , ··· , xT |x0 ) [ pθ ( x0 , · · · , xT−1|xT ) × platent ( xT ) q ( x1 , · · · , xT |x0 ) ] ≥ Eq ( x0 , ··· , xT ) log pθ ( x0 , · · · , xT−1|xT ) × platent ( xT ) q ( x1 , · · · , xT |x0 ) : = ELBO . ( 3 ) Most recently , Ho et al . ( 2020 ) showed that under a certain parameterization , the ELBO of the diffusion model can be calculated in closed-form . This accelerates the computation and avoids Monte Carlo estimates , which have high variance . This parameterization is motivated by its connection to denoising score matching with Langevin dynamics ( Song & Ermon , 2019 ; 2020 ) . To introduce this parameterization , we first define some constants based on the variance schedule { βt } Tt=1 in the diffusion process as in Ho et al . ( 2020 ) : αt = 1− βt , ᾱt = t∏ s=1 αs , β̃t = 1− ᾱt−1 1− ᾱt βt for t > 1 and β̃1 = β1 . ( 4 ) Then , the parameterizations of µθ and σθ are defined by µθ ( xt , t ) = 1 √ αt ( xt − βt√ 1− ᾱt θ ( xt , t ) ) , and σθ ( xt , t ) = β̃ 1 2 t , ( 5 ) where θ : RL × N→ RL is a neural network also taking xt and the diffusion-step t as inputs . Note that σθ ( xt , t ) is fixed to a constant β̃ 1 2 t for every step t under this parameterization . In the following proposition , we explicitly provide the closed-form expression of the ELBO . 2One can find that q ( xT |x0 ) approaches to isotropic Gaussian with large T in Eq . ( 11 ) in the Appendix A . Proposition 1 . ( Ho et al. , 2020 ) Suppose a series of fixed schedule { βt } Tt=1 are given . Let ∼ N ( 0 , I ) and x0 ∼ qdata . Then , under the parameterization in Eq . ( 5 ) , we have − ELBO = c+ T∑ t=1 κtEx0 , ‖ − θ ( √ ᾱtx0 + √ 1− ᾱt , t ) ‖22 ( 6 ) for some constants c and κt , where κt = βt2αt ( 1−ᾱt−1 ) for t > 1 , and κ1 = 1 2α1 . Note that c is irrelevant for optimization purpose . The key idea in the proof is to expand the ELBO into a sum of KL divergences between tractable Gaussian distributions , which have a closed-form expression . We refer the readers to look at Section A in the Appendix for the full proof . In addition , Ho et al . ( 2020 ) reported that minimizing the following unweighted variant of the ELBO leads to higher generation quality : min θ Lunweighted ( θ ) = Ex0 , , t ‖ − θ ( √ ᾱtx0 + √ 1− ᾱt , t ) ‖22 ( 7 ) where t is uniformly taken from 1 , · · · , T . Therefore , we also use this training objective in this paper . We summarize the training and sampling procedures in Algorithm 1 and 2 , respectively . Fast sampling : Given a trained model from Algorithm 1 , we noticed that the most effective denoising steps at sampling occur near t = 0 ( see Section IV on demo website ) . This encourages us to design a fast sampling algorithm with much fewer denoising steps Tinfer ( e.g. , 6 ) than T at training ( e.g. , 200 ) . The key idea is to “ collapse ” the T -step reverse process into a Tinfer-step process with carefully designed variance schedule . We provide the details in Appendix B .
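For concreteness, the following is a sketch of the sampling loop of Algorithm 2 under the parameterization of Eq. (5). `eps_model` is a stand-in for the trained network $\epsilon_\theta$ (shown here without the conditioner, for brevity); its exact signature is our assumption.

```python
# Sketch of the reverse-process sampling loop (Algorithm 2); betas is the
# fixed variance schedule, 0-indexed so betas[t] corresponds to step t + 1.
import torch

@torch.no_grad()
def sample(eps_model, betas, shape):
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    x = torch.randn(shape)                         # x_T ~ N(0, I)
    for t in range(len(betas) - 1, -1, -1):
        eps = eps_model(x, torch.tensor([t]))      # assumed interface
        # mu_theta of Eq. (5)
        mu = (x - betas[t] / (1 - alpha_bars[t]).sqrt() * eps) / alphas[t].sqrt()
        if t > 0:                                  # no noise at the last step
            beta_tilde = (1 - alpha_bars[t - 1]) / (1 - alpha_bars[t]) * betas[t]
            x = mu + beta_tilde.sqrt() * torch.randn(shape)
        else:
            x = mu
    return x
```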
This paper describes a neural vocoder based on a diffusion probabilistic model. The model utilizes a fixed-length markov chain to convert between a latent uncorrelated Gaussian vector and a full-length observation. The conversion from observation to latent is fixed and amounts to adding noise at each step. The conversion from latent to observation reveals slightly more of the observation from the latent at each step via a sort of cancellation. This process is derived theoretically based on maximizing the variational lower bound (ELBO) of the model and follows Ho et al. (2020) who derived it for image generation. Thorough experiments show that the model produces high quality speech syntheses on the LJ dataset (MOS comparable to WaveNet and real speech) when conditionally synthesizing from the true mel spectrogram, while generating much more quickly than WaveNet. Perhaps more interesting and surprising, however, is that it generates very high quality and intelligible short utterances with no conditioning, and also admits to global conditioning, e.g., with a digit label.
Dance Revolution: Long-Term Dance Generation with Music via Curriculum Learning
1 INTRODUCTION . Arguably , dancing to music is one of human ’ s innate abilities , as we can spontaneously sway along with the tempo of music we hear . The research in neuropsychology indicates that our brain is hardwired to make us move and synchronize with music regardless of our intention ( Chen et al. , 2008 ) . Another study of archaeology also suggests that dance is a social communication skill among early humans connected to the ability of survival long time ago ( Adshead-Lansdale & Layson , 2006 ) . Nowadays , dance ( to music ) has become a means to the cultural promotion , a method of emotional expression , a tool for socialization and an art form to bring aesthetic enjoyment . The neurological mechanism behind dancing behavior and the unique value of dance to society motivate us to explore a computational approach to dance creation from a piece of music in artificial intelligence research . Such work is potentially beneficial to a wide range of applications such as dance creation assistant in art and sports , character motion generation for audio games and research on cross-modal behavior . In literature , the music-conditioned dance generation is a relatively new task that attracts increasing research interests recently . Early works ( Fan et al. , 2011 ; Lee et al. , 2013 ) synthesize dance sequences from music by retrieval-based methods , which show the limited creativity in practice . Recently , Lee et al . ( 2019 ) formulate the task from the generative perspective , and further propose a decompositionto-composition framework . Their model first generates basic dance units from the music clips and then composes them by using the last pose of current unit to initialize the first pose of the next ∗Equal contribution . †Work done during the internship at Microsoft STCA . ‡Corresponding author . unit . Although this approach shows better performance than the retrieval-based methods , several challenges still remain . First , existing generative methods synthesize new human motion sequences through autoregressive models like RNN , which tend to result in short sequences . In other words , the generated sequences often quickly freeze within a few seconds due to an accumulation of prediction errors that are fed back into the neural network . This problem becomes even more severe in long motion sequence generation . For instance , composing a dance for a 1-minute music clip under 15 Frame Per Second ( FPS ) means generating 900 poses at one time . In practice , we need novel methods which can effectively generate long motion sequences . Besides , how to enhance the harmony between the synthesized dance movements and the given music is a largely unexplored challenge . Inutitively , the movements need to be consistent with the music in terms of style , rhythm and beat . However , to achieve this goal is non-trivial , which requires the generation model to have the capability to capture the fine-grained correspondence between music and dance . In this paper , we formalize music-conditioned dance generation as a sequence-to-sequence learning problem where the fine-grained correspondence between music and dance is represented through sequence modeling and their alignment is established via mapping from a sequence of acoustic features of the music to a sequence of movements of the dance . The model consists of a music encoder and a dance decoder . 
The encoder transforms low-level acoustic features of an input music clip into high-level latent representations via self-attention with a receptive field restricted to the k-nearest neighbors of an element. Thus, the encoder can efficiently process long sequences of music features, e.g., a sequence with more than 1000 elements, and model local characteristics of the music such as chord and rhythm patterns. The decoder exploits a recurrent structure to predict the dance movement frame by frame, conditioned on the corresponding element in the latent representations of the music feature sequence. Furthermore, we propose a curriculum learning (Bengio et al., 2009) strategy to alleviate the error accumulation (Li et al., 2017) of autoregressive models in long motion sequence generation. Specifically, it gently changes the training process from a fully guided teacher-forcing scheme using the previous ground-truth movements towards a less guided autoregressive scheme that mostly utilizes the generated movements instead. This strategy bridges the gap between training and inference of autoregressive models, and thus alleviates error accumulation at inference. The length of each video clip in the dataset released by Lee et al. (2019) is only about 6 seconds, which cannot be used by other methods to generate long-term dance, except for their specially designed decomposition-to-composition framework. To facilitate the task of long-term dance generation with music, we collect a high-quality dataset consisting of 1-minute video clips, totaling about 12 hours. There are three representative styles in our dataset: "Ballet", "Hiphop", and "Japanese Pop". Our contributions in this work are four-fold: (1) We formalize music-conditioned dance generation as a sequence-to-sequence learning problem and devise a novel seq2seq architecture for long-term dance generation with music. (2) We propose a novel curriculum learning strategy to alleviate the error accumulation of autoregressive models in long motion sequence generation. (3) To facilitate long-term dance generation with music, we collect a high-quality dataset that is available with our code1. (4) Extensive experiments show that our approach significantly outperforms the existing state-of-the-art methods on both automatic metrics and human judgments. The demo video in the supplementary material also shows that our approach can generate diverse minute-length dances that are smooth, natural-looking, style-consistent, and beat-matching with the music from the test set. 2 RELATED WORK. Cross-Modal Learning. Most existing works focus on modeling between vision and text, such as image captioning (Lu et al., 2017; Xu et al., 2015) and text-to-image generation (Reed et al., 2016; Zhang et al., 2017). There are some other works that study the translation between audio and text, like automatic speech recognition (ASR) (Hinton et al., 2012) and text-to-speech (TTS) (Oord et al., 2016). In contrast, modeling between audio and vision is largely unexplored, and music-conditioned dance generation is a typical cross-modal learning problem from audio to vision. 1https://github.com/stonyhu/DanceRevolution Human Motion Prediction. Prediction of human motion dynamics has been a challenging problem in computer vision, which suffers from high spatial-temporal complexity. Existing works (Chan et al., 2019; Wang et al., 2018) represent the human pose as 2D or 3D body key joints (Cao et al.
, 2017) and address the problem via sequence modeling. Early methods, such as hidden Markov models (Lehrmann et al., 2014), Gaussian processes (Wang et al., 2006), and restricted Boltzmann machines (Taylor et al., 2007), have to balance model capacity against inference complexity due to complicated training procedures. Recently, neural networks have come to dominate human motion modeling. For instance, Fragkiadaki et al. (2015) present LSTM-3LR and Encoder-Recurrent-Decoder (ERD) as two recurrent architectures for the task; Jain et al. (2016) propose a structural-RNN to model human-object interactions in a spatio-temporal graph; and Ghosh et al. (2017) equip LSTM-3LR with a dropout autoencoder to enhance long-term prediction. Besides, convolutional neural networks (CNNs) have also been utilized for human motion prediction (Li et al., 2018). Audio-Conditioned Dance Generation. In research on audio-conditioned dance generation, most existing works study 2D dance motion generation with music, since training data of paired 2D poses and music can be extracted from the huge number of dance videos available online. Various methods have been proposed to handle this task, such as adversarial-learning-based methods (Lee et al., 2019; Sun et al., 2020; Ferreira et al., 2021), autoencoder methods (Tang et al., 2018), and sequence-to-sequence methods (Lee et al., 2018; Ren et al., 2019; Yalta et al., 2019; Ye et al., 2020). However, these works mainly focus on exploring different neural architectures and overlook the freezing-motion issue in dance motion synthesis. In this work, we first propose a novel seq2seq architecture to model the fine-grained correspondence between music and dance, and then introduce a novel curriculum learning strategy to address the freezing-motion issue caused by error accumulation (Li et al., 2017) in long-term dance motion generation. 3 APPROACH. In this section, we present our approach to music-conditioned dance generation. After formalizing the problem in question, we elaborate on the model architecture and the dynamic auto-condition learning approach that facilitates long-term dance generation according to the given music. 3.1 PROBLEM FORMALIZATION. Suppose that there is a dataset $D = \{(X_i, Y_i)\}_{i=1}^N$, where $X = \{x_t\}_{t=1}^n$ is a music clip with $x_t$ being a vector of acoustic features at time-step $t$, and $Y = \{y_t\}_{t=1}^n$ is a sequence of dance movements with $y_t$ aligned to $x_t$. The goal is to estimate a generation model $g(\cdot)$ from $D$ so that, given a new music input $X$, the model can synthesize a dance $Y$ to music $X$ based on $g(X)$. We first present the seq2seq architecture chosen for music-conditioned dance generation in the following section. Later in the experiments, we empirically justify this choice by comparing the architecture with other alternatives. 3.2 MODEL ARCHITECTURE. In the architecture of $g(\cdot)$, a music encoder first transforms $X = (x_1, \ldots, x_n)$ ($x_i \in \mathbb{R}^{d_x}$) into a hidden sequence $Z = (z_1, \ldots, z_n)$ ($z_i \in \mathbb{R}^{d_z}$) using a local self-attention mechanism to reduce the memory requirement for long sequence modeling; a dance decoder then exploits a recurrent structure to autoregressively predict the movements $Y = (y_1, \ldots, y_n)$ conditioned on $Z$. Music Encoder. Encouraged by the compelling performance on music generation (Huang et al., 2018), we define the music encoder with a transformer encoder structure.
While the self-attention mechanism (Vaswani et al., 2017) in the transformer can effectively represent the multi-scale structure of music, its quadratic memory complexity $O(n^2)$ in the sequence length $n$ impedes its application to long sequence modeling due to huge GPU memory consumption (Child et al., 2019). To keep the effectiveness of the representation while controlling the cost, we introduce a local self-attention mechanism that modifies the receptive field of self-attention by restricting the element connections to within the k-nearest neighbors. Thus, the memory complexity is reduced to $O(nk)$. $k$ can be small in our scenario since we only pursue an effective representation for a given music clip. Therefore, the local patterns of the music are encoded in $z_t$, which is sufficient for the generation of movement $y_t$ at time-step $t$; this is aligned with the common sense that the dance movement at a certain time-step is highly influenced by the nearby clips of music. Yet we can handle long sequences of acoustic features, e.g., more than 1000 elements, in an efficient and memory-economic way. Specifically, we first embed $X = (x_1, \ldots, x_n)$ into $U = (u_1, \ldots, u_n)$ with a linear layer parameterized by $W^E \in \mathbb{R}^{d_x \times d_z}$. Then, for all $i \in \{1, \ldots, n\}$, $z_i$ can be formulated as:

$$z_i = F(a_i), \quad a_i = \sum_{j=i-\lfloor k/2 \rfloor}^{i+\lfloor k/2 \rfloor} \alpha_{ij}\,(u_j W_l^V), \quad U = X W^E, \quad (1)$$

where $F(\cdot): \mathbb{R}^{d_v} \to \mathbb{R}^{d_z}$ is a feed-forward neural network. Each $u_j$ is only allowed to attend to its k-nearest neighbors, including itself, where $k$ is a hyper-parameter referring to the sliding-window size of the local self-attention. The attention weight $\alpha_{ij}$ is then calculated using a softmax function as:

$$\alpha_{ij} = \frac{\exp e_{ij}}{\sum_{t=j-\lfloor k/2 \rfloor}^{j+\lfloor k/2 \rfloor} \exp e_{it}}, \quad e_{ij} = \frac{(u_i W_l^Q)(u_j W_l^K)^\top}{\sqrt{d_k}}, \quad (2)$$

where, for the $l$-th head, $W_l^Q, W_l^K \in \mathbb{R}^{d_z \times d_k}$ and $W_l^V \in \mathbb{R}^{d_z \times d_v}$ are parameters that transform $U$ into a query, a key, and a value, respectively. $d_z$ is the dimension of the hidden state $z_i$, while $d_k$ is the dimension of the query and key and $d_v$ is the dimension of the value. Dance Decoder. We choose a recurrent neural network as the dance decoder in consideration of two factors: (1) the chain structure can well capture the spatial-temporal dependency among human movement dynamics, which has proven highly effective in state-of-the-art methods for human motion prediction (Li et al., 2018; Mao et al., 2019); (2) our proposed learning strategy is tailored for autoregressive models like RNNs, as will be described in the next section. Specifically, with $Z = (z_1, \ldots, z_n)$, the dance movements $Y = (\hat{y}_1, \ldots, \hat{y}_n)$ are synthesized by:

$$\hat{y}_i = [h_i; z_i]\, W^S + b, \quad (3)$$
$$h_i = \mathrm{RNN}(h_{i-1}, \hat{y}_{i-1}), \quad (4)$$

where $h_i$ is the $i$-th hidden state of the decoder and $h_0$ is initialized by sampling from the standard normal distribution to enhance the variation of the generated sequences. $[\cdot\,; \cdot]$ denotes the concatenation operation. $W^S \in \mathbb{R}^{(d_s + d_z) \times d_y}$ and $b \in \mathbb{R}^{d_y}$ are parameters, where $d_s$ and $d_y$ are the dimensions of $h_i$ and $\hat{y}_i$, respectively. At the $i$-th time-step, the decoder predicts the movement $\hat{y}_i$ conditioned on $h_i$ as well as the latent feature representation $z_i$, and thus can capture the fine-grained correspondence between the music feature sequence and the dance movement sequence.
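To make the local self-attention of Eqs. (1)-(2) concrete, the following is a minimal single-head sketch; the dimensions and window size are illustrative assumptions, and for clarity this mask-based version still materializes an $n \times n$ matrix, unlike a truly $O(nk)$ banded kernel:

```python
import torch
import torch.nn.functional as F

def local_self_attention(U, Wq, Wk, Wv, k):
    """Single-head sketch of Eqs. (1)-(2): each element attends only to its
    k-nearest neighbors, so attention weights are zero outside a banded window."""
    n, _ = U.shape
    Q, K, V = U @ Wq, U @ Wk, U @ Wv                 # (n, d_k), (n, d_k), (n, d_v)
    scores = (Q @ K.T) / K.shape[-1] ** 0.5          # e_ij for all pairs
    idx = torch.arange(n)
    band = (idx[None, :] - idx[:, None]).abs() <= k // 2
    scores = scores.masked_fill(~band, float("-inf"))
    attn = F.softmax(scores, dim=-1)                 # alpha_ij, zero outside window
    return attn @ V                                  # a_i of Eq. (1)

# Usage with assumed sizes: a 1000-step music feature sequence, window k = 17.
n, d_z, d_k, d_v, k = 1000, 256, 64, 64, 17
U = torch.randn(n, d_z)
Wq, Wk, Wv = (torch.randn(d_z, d) / d_z ** 0.5 for d in (d_k, d_k, d_v))
A = local_self_attention(U, Wq, Wk, Wv, k)           # (n, d_v)
```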
The authors present a seq2seq model with a sparse transformer encoder and an LSTM decoder. They utilize a learning curriculum wherein the autoregressive decoder is initially trained using teacher forcing and is gradually fed its past predictions as training progresses. The authors introduce a new dataset for long-term dance generation. They utilize both subjective and objective metrics to evaluate their method. The proposed method outperforms other baselines for dance generation. Finally, they conduct ablation studies demonstrating the benefits of using a transformer encoder over other architectures, and the benefits of the proposed curriculum learning scheme.
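A minimal sketch of such a curriculum for an autoregressive pose decoder follows; the per-step sampling schedule and all dimensions are assumptions for illustration (the paper's actual strategy anneals guided and unguided spans rather than flipping a coin at every step):

```python
import torch

def decode_with_curriculum(decoder_cell, proj, Z, Y_true, p_teacher):
    """With probability p_teacher, feed the ground-truth previous pose;
    otherwise feed the model's own prediction. Annealing p_teacher from 1
    toward 0 over training moves the decoder from fully guided teacher
    forcing to a mostly autoregressive regime."""
    n = Z.shape[0]
    h = torch.randn(decoder_cell.hidden_size)       # h_0 ~ N(0, I)
    y_prev = torch.zeros(Y_true.shape[1])
    preds = []
    for i in range(n):
        h = decoder_cell(y_prev.unsqueeze(0), h.unsqueeze(0)).squeeze(0)
        y_hat = proj(torch.cat([h, Z[i]]))          # [h_i; z_i] W^S + b
        preds.append(y_hat)
        use_truth = torch.rand(()) < p_teacher
        y_prev = Y_true[i] if use_truth else y_hat.detach()
    return torch.stack(preds)

# Usage with assumed dimensions (256-d music features, 50-d pose vectors).
cell = torch.nn.RNNCell(input_size=50, hidden_size=512)
proj = torch.nn.Linear(512 + 256, 50)
Z, Y = torch.randn(900, 256), torch.randn(900, 50)
out = decode_with_curriculum(cell, proj, Z, Y, p_teacher=0.8)  # (900, 50)
```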
Block Skim Transformer for Efficient Question Answering
Transformer-based encoder models have achieved promising results on natural language processing (NLP) tasks, including question answering (QA). Unlike sequence classification or language modeling tasks, in QA the hidden states at all positions are used for the final classification. However, we do not always need all the context to answer the raised question. Following this idea, we propose Block Skim Transformer (BST) to improve and accelerate the processing of transformer QA models. The key idea of BST is to identify the context that must be further processed and the blocks that can be safely discarded early on during inference. Critically, we learn such information from the self-attention weights. As a result, the model hidden states are pruned along the sequence dimension, achieving significant inference speedup. We also show that this extra training optimization objective improves model accuracy. As a plugin to transformer-based QA models, BST is compatible with other model compression methods without changing existing network architectures. BST improves QA models' accuracies on different datasets and achieves a 1.6× speedup on the BERTlarge model. 1 INTRODUCTION. With the rapid development of neural networks for NLP tasks, the Transformer (Vaswani et al., 2017), which uses the multi-head attention (MHA) mechanism, represents a recent huge leap (Goldberg, 2016). It has become a standard building block of recent NLP models. The Transformer-based BERT (Devlin et al., 2018) model further advances model accuracy by introducing self-supervised pre-training and has reached state-of-the-art accuracy on many NLP tasks. One of the most challenging tasks in NLP is question answering (QA) (Huang et al., 2020). Our key insight is that when human beings answer a question with a passage as context, they do not spend the same level of comprehension on each sentence across the paragraph. Most of the content is quickly skimmed over with little attention paid to it. In the Transformer architecture, however, all tokens go through the same amount of computation, which suggests that we can take advantage of this by discarding many of the tokens in the early layers of the Transformer. This redundancy induces high execution overhead along the input sequence dimension. To mitigate these inefficiencies in QA tasks, we propose to assign more attention to blocks that are more likely to contain the actual answer while terminating other blocks early during inference. By doing so, we reduce the overhead of processing irrelevant text and accelerate model inference. Meanwhile, by feeding the attention mechanism with knowledge of the answer position directly during training, the attention mechanism and the QA model's accuracy are improved. In this paper, we provide the first empirical study of attention feature maps to show that an attention map can carry enough information to locate the answer scope. We then propose Block Skim Transformer (BST), a plug-and-play module for transformer-based models, to accelerate them on QA tasks. By treating the attention weight matrices as feature maps, the CNN-based Block Skim module extracts information from the attention mechanism to make a skim decision. With the predicted block mask, BST skips irrelevant context blocks, which do not enter subsequent layers' computation.
Besides, we devise a new training paradigm that jointly trains the Block Skim objective with the native QA objective, whereby extra optimization signals regarding the answer position are given to the attention mechanism directly. In our evaluation, we show that BST improves the QA accuracy and F1 score on all the datasets and models we evaluated. Specifically, BERTlarge is accelerated by 1.6× without any accuracy loss, and by nearly 1.8× with less than 0.5% F1 score degradation. This paper makes contributions in the following three aspects. • We show, for the first time, that an attention map is effective for locating the answer position in the input sequence. • We propose Block Skim Transformer (BST), which leverages the attention mechanism to improve and accelerate transformer models on QA tasks. The key is to extract information from the attention mechanism during processing and intelligently predict which blocks to skim. • We evaluate BST on several Transformer-based model architectures and QA datasets and demonstrate BST's efficiency and generality. 2 RELATED WORK. Recurrent Models with Skimming. The idea of skipping or skimming irrelevant sections or tokens of the input sequence has been studied for NLP models, especially recurrent neural networks (RNNs) (Rumelhart et al., 1986) and long short-term memory networks (LSTMs) (Hochreiter & Schmidhuber, 1997). LSTM-Jump (Yu et al., 2017) uses a policy-gradient reinforcement learning method to train an LSTM model that decides how many time steps to jump at each state. They also use hyper-parameters to control the number of tokens read before a jump, the maximum tokens to jump over, and the maximum number of jumps. Skim-RNN (Seo et al., 2018) dynamically decides the dimensionality and RNN model size to be used at the next time step. Specifically, they adopt a "big" and a "small" RNN model and select the "small" one for skimming. Structural-Jump-LSTM (Hansen et al., 2018) uses two agents to decide whether to jump a small step to the next token or structurally to the next punctuation. Skip-RNN (Campos et al., 2017) learns to skip state updates, thus resulting in a reduced computation graph size. The differences between BST and these works are two-fold. First, the previous works make skimming decisions based on the hidden states or embeddings during processing, whereas we are the first to analyze and utilize the attention relationship for skimming. Second, our work is based on the Transformer model (Vaswani et al., 2017), which has outperformed recurrent models on most NLP tasks. Transformer with Input Reduction. In contrast to the aforementioned recurrent models, a Transformer-based model computes all input sequence tokens in parallel. As such, skimming can be regarded as a reduction along the sequence dimension. PoWER-BERT (Goyal et al., 2020) removes input tokens during processing based on the attention scores to each token. For the fine-tuning process on downstream tasks, Goyal et al. propose a soft-extract layer to train the model jointly. Funnel-Transformer (Dai et al., 2020) proposes a novel pyramid architecture in which the input sequence length is reduced gradually, regardless of semantic clues. For tasks requiring full-length output, like masked language modeling and extractive question answering, Funnel-Transformer up-samples at the input dimension to recover the full sequence. Universal Transformer (Dehghani et al.
, 2018) proposes a dynamic halting mechanism that determines the number of refinement steps for each token. Different from these works, BST utilizes the attention information between question and token pairs and skims the input sequence at the block granularity accordingly. Efficient Transformer. There are also many attempts at designing efficient Transformers (Zhou et al., 2020; Wu et al., 2019; Tay et al., 2020). Well-studied model compression methods for Transformer models include pruning (Guo et al., 2020), quantization (Wang & Zhang, 2020), distillation (Sanh et al., 2019), and weight sharing. Plenty of work focuses on dedicated efficient attention mechanisms, considering the quadratic complexity in sequence length (Kitaev et al., 2019; Beltagy et al., 2020; Zaheer et al., 2020). BST is orthogonal to these techniques, operating on the input dimension, and is therefore compatible with them. We demonstrate this feasibility with the weight-sharing model ALBERT (Lan et al., 2019) in Sec. 5. 3 PROBLEM FORMULATION: IS ATTENTION EFFECTIVE FOR SKIMMING. Transformer. A Transformer model with the multi-head self-attention mechanism calculates the hidden state for each position as a weighted sum of the input hidden states. The weight vector is calculated by the parameterized linear projections query $Q$ and key $K$ as in Eq. 1. Given a sequence of input embeddings, the output contextual embedding is composed from the input sequence with different attention at each position:

$$\mathrm{Attention}(Q, K) = \mathrm{Softmax}\left(\frac{QK^{\top}}{\sqrt{d_k}}\right), \quad (1)$$

where $Q$ and $K$ are the query and key matrices of the input embeddings and $d_k$ is the length of a query or key vector. Multiple parallel groups of such attention weights, also referred to as attention heads, make it possible to attend to information at different positions. QA is one of the ultimate downstream tasks in NLP. Given a text document and a question about the context, the answer is a contiguous span of the text. To predict the start and end positions in the input context given a question, the embedding of each token is processed through all transformer layers of the encoder model. In many end-to-end open-domain QA systems, information retrieval is performed in advance at the coarse-grained passage or paragraph level. Given the characteristic of the extractive QA problem that answer spans are contiguous, our question is whether we can utilize this idea at a fine-grained block granularity during the processing of the transformer. Are the attention weights effective for distinguishing the answer blocks? To answer this question, we build a simple logistic regression model on the attention matrix of each layer to predict whether an input sentence block contains the answer. The attention matrices are profiled from a BERTlarge SQuAD QA model and reduced to the block level following Eq. 2 (Clark et al., 2019). The attention from block $[a, b]$ attending to block $[c, d]$ is aggregated to one value, and the attention between a block and the question sentence and the special tokens "[CLS]" and "[SEP]" is used to denote the attending relation of the block. Such 6-dimensional vectors from all attention heads in the layer are concatenated as the final classification feature. The result is shown in Fig. 1 for attention matrices from different layers. Simple logistic regression with hand-crafted features from attention weights achieves quite promising classification accuracy.
This suggests that the attending relationship between the question and targets is indeed capable of revealing the answer position.

$$\mathrm{BlockAttention}([a, b], [c, d]) = \frac{1}{b-a} \sum_{i=a}^{b} \sum_{j=c}^{d} \mathrm{Attention}(i, j) \quad (2)$$

4 BLOCK SKIMMING TRANSFORMER (BST). 4.1 ARCHITECTURE OVERVIEW OF BST. We propose the Block Skimming Transformer (BST) model to accelerate the question answering task without degrading the answer accuracy. Unlike a conventional Transformer-based model that uses all input tokens throughout the entire stack of layers, our BST model accurately identifies the contexts irrelevant to the question in the early layers and removes those irrelevant contexts in the following layers. As such, our model reduces the computation requirement and enables fast question answering. In Sec. 3, we have shown that it is feasible to identify the tokens that are irrelevant to the question through a hand-crafted feature built on the attention relationships among tokens. However, using this approach directly could significantly hurt the question answering accuracy, as we show later. As such, we propose an end-to-end learnable feature extractor that captures the attention behavior better. Fig. 2 shows the overall architecture of our BST model, where a layer is composed of a Transformer layer and a learnable Block Skim Module (BSM). The BSM adopts a convolutional neural network for feature extraction. The input is the set of attention matrices of the attention heads, which are treated as feature maps over multiple input channels. The output is a block-level mask that corresponds to the relevance of each block of input tokens to the question. In each BSM module, we use convolutions to collect local attending information and pooling to reduce the size of the feature maps. Two 3×3 convolutions and one 1×1 convolution are connected, with pooling operations interleaved. For all the convolution operations, the ReLU function (Hahnloser & Seung, 2001) is used as the activation function. To locate the answer context blocks, we use a linear classification layer to calculate the score for each block. Also, two Batch Normalization layers (Ioffe & Szegedy, 2015) are inserted to improve the model accuracy. Formally, we denote the input sequence of a transformer layer as $X = (x_0, x_1, \ldots, x_n)$ and the attention matrices of this layer as $\mathrm{Attention}(X)$. Given the attention output of a transformer layer, the $k$-th block prediction result $B$ is represented as $B = \mathrm{BST}(\mathrm{Attention}(X))$, where BST is the proposed architecture. The main function of BST is expressed as Eq. 3:

$$\mathrm{BST}(\mathrm{Attention}) = \mathrm{Linear}\left(\mathrm{Conv}_{1\times1}\left(\mathrm{Conv}_{3\times3}\left(\mathrm{Pool}\left(\mathrm{Conv}_{3\times3}\left(\mathrm{Pool}\left(\mathrm{Attention}\right)\right)\right)\right)\right)\right) \quad (3)$$
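A minimal sketch of a BSM consistent with Eq. (3) is given below; the channel counts, pooling choices, and the mapping from the pooled feature map to per-block scores are assumptions, since the text does not pin these down:

```python
import torch
import torch.nn as nn

class BlockSkimModule(nn.Module):
    """Sketch of Eq. (3): attention matrices from all heads form a
    multi-channel feature map that is reduced to one skim logit per block.
    Sizes are illustrative (12 heads, 512-token sequence, block size 32)."""
    def __init__(self, num_heads=12, seq_len=512, block_size=32, hidden=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.MaxPool2d(2),
            nn.Conv2d(num_heads, hidden, kernel_size=3, padding=1),
            nn.BatchNorm2d(hidden), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1),
            nn.BatchNorm2d(hidden), nn.ReLU(),
            nn.Conv2d(hidden, 1, kernel_size=1),
        )
        reduced = seq_len // 4                       # two 2x poolings
        self.classifier = nn.Linear(reduced * reduced, seq_len // block_size)

    def forward(self, attention):                    # (batch, heads, n, n)
        f = self.features(attention).flatten(1)      # (batch, reduced*reduced)
        return self.classifier(f)                    # one logit per block

# Usage: positive logits mark blocks to keep; the rest are skipped downstream.
attn = torch.rand(2, 12, 512, 512)
block_logits = BlockSkimModule()(attn)               # (2, 16)
keep_mask = block_logits > 0
```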
This paper presents the "Block Skim Transformer" for extractive question answering tasks. The key idea in this model is using a classifier, on the self-attention distributions of a particular layer, to classify whether large spans of contiguous text (blocks) contain the answer. If a block is rejected by the classifier, it is excluded from subsequent layers of self-attention. During training, no blocks are thrown away and the classifier is applied to every layer to provide a regularization effect, which leads to small improvements in performance on 5 datasets. During inference, blocks are thrown away at a fixed layer. The reduction in sequence length leads to ~1.5x batch-size-1 speed improvements.
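As a rough illustration of the inference-time effect described above, the sketch below drops the hidden states of rejected blocks so that subsequent layers operate on a shorter sequence; handling of question and special-token positions is omitted and all sizes are assumed:

```python
import torch

def skim_hidden_states(hidden, keep_mask, block_size):
    """Drop hidden states of rejected blocks at a fixed layer, so later
    layers run on a shorter sequence. A full system would always preserve
    the question and special-token positions."""
    batch, n, d = hidden.shape
    blocks = hidden.view(batch, n // block_size, block_size, d)
    kept = [blocks[b][keep_mask[b]].reshape(-1, d) for b in range(batch)]
    return kept  # per-example shortened sequences (lengths now differ)

hidden = torch.randn(2, 512, 768)
keep_mask = torch.rand(2, 16) > 0.4     # e.g., from a Block Skim Module
short = skim_hidden_states(hidden, keep_mask, block_size=32)
print([s.shape[0] for s in short])      # shortened lengths per example
```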
Predicting Classification Accuracy When Adding New Unobserved Classes
1 INTRODUCTION. Advances in machine learning and representation learning have led to automatic systems that can identify an individual class from very large candidate sets. Examples are abundant in visual object recognition (Russakovsky et al., 2015; Simonyan & Zisserman, 2014), face identification (Liu et al., 2017b), and brain-machine interfaces (Naselaris et al., 2011; Seeliger et al., 2018). In all of these domains, the possible set of classes is much larger than those observed at training or testing. Acquiring and curating data is often the most expensive component in developing new recognition systems. A practitioner would prefer to know early in the modeling process whether the data-collection apparatus and the classification algorithm are expected to meet the required accuracy levels. In large multi-class problems, the pilot data may contain considerably fewer classes than would be found when the system is deployed (consider, for example, the case in which researchers develop a face recognition system that is planned to be used on 10,000 people, but can only collect 1,000 in the initial development phase). This increase in the number of classes changes the difficulty of the classification problem and therefore the expected accuracy. The magnitude of the change varies depending on the classification algorithm and the interactions between the classes: usually classification accuracy deteriorates as the number of classes increases, but this deterioration varies across classifiers and data distributions. For pilot experiments to work, theory and algorithms are needed to estimate how the accuracy of multi-class classifiers is expected to change when the number of classes grows. In this work, we develop a prediction algorithm that observes the classification results for a small set of classes and predicts the accuracy on larger class sets. In large multiclass classification tasks, a representation is often learned on a set of $k_1$ classes, whereas the classifier is eventually used on a new, larger class set. On the larger set, classification can be performed by applying simple procedures such as measuring the distances in an embedding space between a new example $x \in X$ and labeled examples associated with the classes $y_i \in Y$. Such classifiers, where the score assigned to a data point $x$ for belonging to a class $y$ is independent of the other classes, are defined as marginal classifiers (Zheng et al., 2018). Their performance on the larger set describes how robust the learned representation is. Examples of classifiers that are marginal when used on a larger class set include Siamese neural networks (Koch et al., 2015), one-shot learning (Fei-Fei et al., 2006), and approaches that directly optimize the embedding (Schroff et al., 2015). Our goal in this work is to estimate how well a given marginal classifier will perform on a large unobserved set of $k_2$ classes, based on its performance on a smaller set of $k_1$ classes. Recent works (Zheng & Benjamini, 2016; Zheng et al., 2018) set up a probabilistic model for rigorously studying this problem, assuming that the $k_1$ available classes are sampled from the same distribution as the larger set of $k_2$ classes.
Following the framework they propose, we assume that the sets of $k_1$ and $k_2$ classes on which the classifier is trained and evaluated are sampled independently from an infinite continuous set $Y$ according to $Y_i \sim P_Y(y)$, and that for each class, $r$ data points are sampled independently from $X$ according to the conditional distribution $P_{X|Y}(x \mid y)$. In their work, the authors presented two methods for predicting the expected accuracy, one of them originally due to Kay et al. (2008). We cover these methods in Section 2. As a first contribution of this work (Section 3), we provide a theoretical analysis that connects the accuracy of marginal classifiers to a variant of the receiver operating characteristic (ROC) curve, which is obtained by reversing the roles of classes and data points in the common ROC. We show that the reversed ROC (rROC) measures how well a classifier's learned representation separates the correct from the incorrect classes of a given data point. We then prove that the accuracy of marginal classifiers is a function of the rROC, allowing the use of well-researched ROC estimation methods (Gonçalves et al., 2014; Bhattacharya & Hughes, 2015) to predict the expected accuracy. Furthermore, the reversed area under the curve (rAUC) equals the expected accuracy of a binary classifier, where the expectation is taken over all randomly selected pairs of classes. We use our results regarding the rROC to provide our second contribution (Section 4): CleaneX (Classification Expected Accuracy Neural EXtrapolation), a new neural-network-based method for predicting the expected accuracy of a given classifier on an arbitrarily large set of classes1. CleaneX differs from previous methods by using both the raw classification scores and the observed classification accuracies for different class-set sizes to calibrate its predictions. In Section 5 we verify the performance of CleaneX on simulations and real data-sets. We find that it achieves better overall predictions of the expected accuracy, and very few "large" errors, compared to its competitors. We discuss the implications, and how the method can be used by practitioners, in Section 6. 1.1 PRELIMINARIES AND NOTATION. In this work $x$ denotes data points and $y$ denotes classes; when referred to as random variables, they are denoted by $X$ and $Y$, respectively. We denote by $y(x)$ the correct class of $x$, and use $y^*$ when $x$ is implicitly understood. Similarly, we denote by $y'$ an incorrect class for $x$. We assume that for each $x$ and $y$ the classifier $h$ assigns a score $S_y(x)$, such that the predicted class of $x$ is $\arg\max_y S_y(x)$. On a given dataset of $k$ classes, $\{y_1, \ldots, y_k\}$, the accuracy of the trained classifier $h$ is the probability that it assigns the highest score to the correct class:

$$A(y_1, \ldots, y_k) = P_X\left(S_{y^*}(x) \geq \max_{i=1}^{k} S_{y_i}(x)\right), \quad (1)$$

where $P_X$ is the distribution of the data points $x$ in the sample of classes. Since $r$ points are sampled from each class, $P_X$ assumes a uniform distribution over the classes within the given sample. An important quantity for a data point $x$ is the probability of the correct class $y^*$ outscoring a randomly chosen incorrect class $Y' \sim P_{Y|Y \neq y^*}$, that is, $C_x = P_{Y'}(S_{y^*}(x) \geq S_{y'}(x))$. This is the cumulative distribution function of the incorrect scores, evaluated at the value of the correct score. We denote the expected accuracy over all possible subsets of $k$ classes from $Y$ by $E_k[A]$ and its estimator by $\hat{E}_k[A]$.
We refer to the curve of $E_k[A]$ at different values of $k \geq 2$ as the accuracy curve. Given a sample of $K$ classes, the average accuracy over all subsets of $k \leq K$ classes from the sample is denoted by $\bar{A}_k^K$. 1Code is publicly available at: https://github.com/YuliSl/CleaneX 2 RELATED WORK. Learning theory provides bounds on sample complexity in multiclass classification that depend on the number of classes (Shalev-Shwartz & Ben-David, 2014), and the extension to large multiclass problems is a topic of much interest (Kuznetsov et al., 2014; Lei et al., 2015; Li et al., 2018). However, these bounds cannot be used to estimate the expected accuracy. Generalization to out-of-label accuracy includes the work of Jain & Learned-Miller (2010). The generalization of classifiers from datasets with few classes to larger class sets includes the works of Oquab et al. (2014) and Griffin et al. (2007), and is closely related to transfer learning (Pan et al., 2010) and extreme classification (Liu et al., 2017a). More specific works include that of Abramovich & Pensky (2019), which provides lower and upper bounds for the distance between classes that is required in order to achieve a given accuracy. Kay et al. (2008), as adapted by Zheng et al. (2018), propose to estimate the accuracy of a marginal classifier on a given set of $k$ classes by averaging over $x$ the probability that its correct class outscores a single random incorrect class, raised to the power of $k-1$ (the number of incorrect classes in the sample), that is:

$$E_k[A] = E_X\left[P_{Y'}\left(S_{y^*}(x) \geq S_{y'}(x)\right)^{k-1}\right] = E_x\left[C_x^{k-1}\right]. \quad (2)$$

Therefore, the expected accuracy can be predicted by estimating the values of $C_x$ on the available data. To do so, the authors propose using kernel density estimation (KDE), choosing the bandwidth with pseudo-likelihood cross-validation (Cao et al., 1994). Zheng et al. (2018) define a discriminability function

$$D(u) = P_X\left(P_{Y'}\left(S_{y^*}(x) > S_{y'}(x)\right) \leq u\right), \quad (3)$$

and show that for marginal classifiers, the expected accuracy at $k$ classes is given by

$$E_k[A] = 1 - (k-1)\int_0^1 D(u)\, u^{k-2}\, du. \quad (4)$$

The authors assume a non-parametric regression model with pre-chosen basis functions $b_j$, so that $D(u) = \sum_j \beta_j b_j(u)$. To obtain $\hat{\beta}$, the authors minimize the mean squared error (MSE) between the resulting estimate $\hat{E}_k[A]$ and the observed accuracies $\bar{A}_k^{k_1}$. 3 REVERSED ROC. In this section we show that the expected accuracy, $E_k[A]$, can be better understood by studying an ROC-like curve. To do so, we first recall the definition of the common ROC: for two classes in a setting where one class is considered the positive class and the other the negative one, the ROC is defined as the graph of the true-positive rate (TPR) against the false-positive rate (FPR) (Fawcett, 2006). The common ROC curve represents the separability that a classifier $h$ achieves between data points of the positive class and those of the negative one. At a working point at which the FPR of the classifier is $u$, we have $\mathrm{ROC}(u) = \mathrm{TPR}(\mathrm{FPR}^{-1}(u))$. In a multiclass setting, we can define $\mathrm{ROC}_y$ for each class $y$ by considering $y$ as the positive class and the union of all other classes as the negative one. An adaptation of the ROC for this setting can be defined as the expectation of $\mathrm{ROC}_y$ over the classes, that is, $\mathrm{ROC}(u) = \int_Y \mathrm{ROC}_y(u)\, dP(y)$.
In terms of classification scores, we have $\mathrm{TPR}_y(t) = P_X(S_y(x) > t \mid y(x) = y)$ and $\mathrm{FPR}_y(t) = P_X(S_y(x) > t \mid y(x) \neq y)$, and thus $\mathrm{FPR}_y^{-1}(u) = \sup_t \{P_X(S_y(x) > t \mid y(x) \neq y) \geq u\}$. Here, we single out one class $y$ at a time and compare the scores of the data points that belong to this class with the scores of those that do not. However, when the number of classes is large, we could instead single out a data point $x$ and compare the score it receives for the correct class with the scores for the incorrect ones. This reverse view is formalized in the following definition, where we exchange the roles of data points $x$ and classes $y$ to obtain the reversed ROC: Definition 1. Given a data point $x$, its corresponding reversed true-positive rate is

$$\mathrm{rTPR}_x(t) = \begin{cases} 1 & S_{y^*}(x) > t \\ 0 & S_{y^*}(x) \leq t \end{cases} \quad (5)$$

The reversed false-positive rate is

$$\mathrm{rFPR}_x(t) = P_{Y'}\left(S_{y'}(x) > t\right) \quad (6)$$

and accordingly

$$\mathrm{rFPR}_x^{-1}(u) = \sup_t \left\{P_{Y'}\left(S_{y'}(x) > t\right) \geq u\right\}. \quad (7)$$

Consequently, the reversed ROC is

$$\mathrm{rROC}_x(u) = \mathrm{rTPR}_x\left(\mathrm{rFPR}_x^{-1}(u)\right) = \begin{cases} 1 & S_{y^*}(x) > \sup_t \{P_{Y'}(S_{y'}(x) > t) \geq u\} \\ 0 & \text{otherwise} \end{cases} \quad (8)$$

and the average reversed ROC is2

$$\mathrm{rROC}(u) = \int_X \mathrm{rROC}_x(u)\, dP(x). \quad (9)$$

Since $P_{Y'}(S_{y'}(x) > t)$ is a decreasing function of $t$, it can be seen that $\mathrm{rROC}_x(u) = 1$ iff $u > P_{Y'}(S_{y'}(x) > S_{y^*}(x)) = 1 - C_x$ (see Proposition 1 in Appendix A). However, even though $\mathrm{rROC}_x$ is a step function, the rROC resembles a common ROC curve, as illustrated in Figure 1.
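As a concrete illustration of how Eq. (2) turns the per-point quantities $C_x$ into an accuracy curve, the sketch below estimates $C_x$ empirically from a score matrix and extrapolates $E_k[A]$; the paper's actual estimator smooths $C_x$ with KDE, so this plug-in version is only the crude variant:

```python
import numpy as np

def predict_accuracy_curve(scores, correct, ks):
    """Estimate C_x for each point as the fraction of incorrect classes its
    correct class outscores, then extrapolate the expected k-class accuracy
    as E_k[A] = mean_x C_x^(k-1) following Eq. (2)."""
    n, m = scores.shape
    s_correct = scores[np.arange(n), correct]                 # S_{y*}(x)
    wrong = np.ones_like(scores, dtype=bool)
    wrong[np.arange(n), correct] = False
    c_x = (scores[wrong].reshape(n, m - 1) <= s_correct[:, None]).mean(axis=1)
    return {k: np.mean(c_x ** (k - 1)) for k in ks}

# Usage: scores of 200 test points against k1 = 50 observed classes,
# extrapolated to much larger class-set sizes.
rng = np.random.default_rng(0)
scores = rng.normal(size=(200, 50))
correct = rng.integers(0, 50, size=200)
scores[np.arange(200), correct] += 2.0                        # separable classes
print(predict_accuracy_curve(scores, correct, ks=[2, 50, 500, 5000]))
```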
The authors discuss how a classifier's performance over the initial class sample can be used to extrapolate its expected accuracy on a larger, unobserved set of classes by means of the dual of the ROC function, swapping the roles of classes and samples. Grounded in this function, the authors develop a novel ANN approach that learns to estimate the accuracy of classifiers on arbitrarily large sets of classes. The effectiveness of the approach is demonstrated on a suite of benchmark datasets, both synthetic and real-world.
AdaFuse: Adaptive Temporal Fusion Network for Efficient Action Recognition
1 INTRODUCTION. Over the last few years, video action recognition has made rapid progress with the introduction of a number of large-scale video datasets (Carreira & Zisserman, 2017; Monfort et al., 2018; Goyal et al., 2017). Despite impressive results on commonly used benchmark datasets, efficiency remains a great challenge for many resource-constrained applications due to the heavy computational burden of deep convolutional neural network (CNN) models. Motivated by the need for efficiency, extensive studies have recently been conducted that focus on either designing new lightweight architectures (e.g., R(2+1)D (Tran et al., 2018), S3D (Xie et al., 2018), channel-separated CNNs (Tran et al., 2019)) or selecting salient frames/clips conditioned on the input (Yeung et al., 2016; Wu et al., 2019b; Korbar et al., 2019; Gao et al., 2020). However, most of the existing approaches do not consider the fact that there exists redundancy in CNN features, which can significantly save computation, leading to more efficient action recognition. In particular, orthogonal to the design of compact models, the computational cost of a CNN model also has much to do with the redundancy of CNN features (Han et al., 2019). Furthermore, the amount of redundancy depends on the dynamics and type of events in the video: a set of still frames for a simple action (e.g., "Sleeping") will have higher redundancy compared to a fast-changing action with rich interaction and deformation (e.g., "Pulling two ends of something so that it gets stretched"). Thus, based on the input, we could compute just a subset of the features, while the remaining channels can reuse history feature maps or even be skipped without losing any accuracy, resulting in large computational savings compared to computing all the features at a given CNN layer. Based on this intuition, we present a new perspective on efficient action recognition by adaptively deciding what channels to compute or reuse, on a per-instance basis, for recognizing complex actions. In this paper, we propose AdaFuse, an adaptive temporal fusion network that learns a decision policy to dynamically fuse channels from current and history feature maps for efficient action recognition. Specifically, our approach reuses history features when necessary (i.e., it dynamically decides which channels to keep, reuse, or skip per layer and per instance) with the goal of improving both recognition accuracy and efficiency. As these decisions are discrete and non-differentiable, we rely on a Gumbel-Softmax sampling approach (Jang et al., 2016) to learn the policy jointly with the network parameters through standard back-propagation, without resorting to complex reinforcement learning as in (Wu et al., 2019b; Fan et al., 2018; Yeung et al., 2016). We design the loss to achieve both the competitive performance and the resource efficiency required for action recognition. Extensive experiments on multiple benchmarks show that AdaFuse significantly reduces computation without accuracy loss. The main contributions of our work are as follows: • We propose a novel approach that automatically determines which channels to keep, reuse, or skip per layer and per target instance for efficient action recognition.
• Our approach is model-agnostic, which allows it to serve as a plug-in operation for a wide range of 2D-CNN-based action recognition architectures. • The overall policy distribution can be seen as an indicator of dataset characteristics, and the block-level distribution can provide potential guidance for future architecture designs. • We conduct extensive experiments on four benchmark datasets (Something-Something V1 (Goyal et al., 2017), Something-Something V2 (Mahdisoltani et al., 2018), Jester (Materzynska et al., 2019), and Mini-Kinetics (Kay et al., 2017)) to demonstrate the superiority of our proposed approach over state-of-the-art methods. 2 RELATED WORK. Action Recognition. Much progress has been made in developing a variety of ways to recognize complex actions, by applying either 2D-CNNs (Karpathy et al., 2014; Wang et al., 2016; Fan et al., 2019) or 3D-CNNs (Tran et al., 2015; Carreira & Zisserman, 2017; Hara et al., 2018). Most successful architectures are usually based on the two-stream model (Simonyan & Zisserman, 2014), processing RGB frames and optical flow in two separate CNNs with a late fusion in the upper layers (Karpathy et al., 2014) or further combining with other modalities (Asghari-Esfeden et al., 2020; Li et al., 2020a). Another popular approach to CNN-based action recognition is the use of a 2D-CNN to extract frame-level features and then model the temporal causality using different aggregation modules, such as temporal averaging in TSN (Wang et al., 2016), a bag-of-features scheme in TRN (Zhou et al., 2018), channel shifting in TSM (Lin et al., 2019), depthwise convolutions in TAM (Fan et al., 2019), non-local neural networks (Wang et al., 2018a), the temporal enhancement and interaction module in TEINet (Liu et al., 2020), and LSTMs (Donahue et al., 2015). Many variants of 3D-CNNs, such as C3D (Tran et al., 2015; Ji et al., 2013), I3D (Carreira & Zisserman, 2017), and ResNet3D (Hara et al., 2018), which use 3D convolutions to model space and time jointly, have also been introduced for action recognition. SlowFast (Feichtenhofer et al., 2018) employs two pathways to capture temporal information by processing a video at both slow and fast frame rates. Recently, STM (Jiang et al., 2019) proposes new channel-wise convolutional blocks to jointly capture spatio-temporal and motion information in consecutive frames. TEA (Li et al., 2020b) introduces a motion excitation module including multiple temporal aggregation modules to capture both short- and long-range temporal evolution in videos. Gate-Shift networks (Sudhakaran et al., 2020) use spatial gating for the spatial-temporal decomposition of 3D kernels in Inception-based architectures. While extensive studies have been conducted in the last few years, limited effort has been made towards efficient action recognition (Wu et al., 2019b;a; Gao et al., 2020). Specifically, methods for efficient recognition focus on either designing new lightweight architectures that aim to reduce complexity by decomposing the 3D convolution into a 2D spatial convolution and a 1D temporal convolution (e.g., R(2+1)D (Tran et al., 2018), S3D (Xie et al., 2018), channel-separated CNNs (Tran et al., 2019)) or selecting salient frames/clips conditioned on the input (Yeung et al., 2016; Wu et al., 2019b; Korbar et al., 2019; Gao et al., 2020).
Our approach is most related to the latter line of work, which focuses on conditional computation and is agnostic to the network architecture used for recognizing actions. However, instead of focusing on data sampling, our approach dynamically fuses channels from current and history feature maps to reduce computation. Furthermore, as feature maps can be redundant or noisy, we use a skipping operation to make the network more efficient for action recognition. Conditional Computation. Many conditional computation methods have recently been proposed with the goal of improving computational efficiency (Bengio et al., 2015; 2013; Veit & Belongie, 2018; Wang et al., 2018b; Graves, 2016; Meng et al., 2020; Pan et al., 2021). Several works add decision branches to different layers of CNNs to learn whether to exit the network early for faster inference (Figurnov et al., 2017; McGill & Perona, 2017; Wu et al., 2020). BlockDrop (Wu et al., 2018) effectively reduces the inference time by learning to dynamically select which layers to execute per sample during inference. SpotTune (Guo et al., 2019) learns to adaptively route information through fine-tuned or pre-trained layers. Conditionally parameterized convolutions (Yang et al., 2019) and dynamic convolutions (Chen et al., 2019a; Verelst & Tuytelaars, 2019) have also been proposed to learn specialized convolutional kernels for each example to improve efficiency in image recognition. Our method is also related to recent works on dynamic channel pruning (Gao et al., 2018; Lin et al., 2017) that generate decisions to skip the computation of a subset of output channels. While GaterNet (Chen et al., 2019b) proposes a separate gating network to learn channel-wise binary gates for the backbone network, the channel gating network (Hua et al., 2019) identifies regions in the features that contribute less to the classification result and skips the computation on a subset of the input channels for these ineffective regions. In contrast to the prior works that focus only on dropping unimportant channels, our proposed approach also reuses history features when necessary, making the network capable of strong temporal modelling. 3 METHODOLOGY. In this section, we first describe the general approach of using a 2D-CNN for action recognition. Then we present the concept of adaptive temporal fusion and analyze its computational cost. Finally, we describe the end-to-end optimization and the network specifications. Using a 2D-CNN for Action Recognition. One popular solution is to first generate frame-wise predictions and then utilize a consensus operation to get the final prediction (Wang et al., 2016). The network takes $T$ uniformly sampled frames $\{X_1, \ldots, X_T\}$ and predicts the un-normalized class scores:

$$P(X_1, \ldots, X_T; \Theta) = G\left(F(X_1; \Theta), F(X_2; \Theta), \ldots, F(X_T; \Theta)\right), \quad (1)$$

where $F(\cdot; \Theta)$ is the 2D-CNN with learnable parameters $\Theta$. The consensus function $G$ reduces the frame-level predictions to a final prediction. One common practice for $G$ is the averaging operation. The major drawback is that this cannot capture the order of the frames: the network performs poorly on datasets that contain temporally related labels (e.g., "turning left", "moving forward", etc.). An LSTM (Hochreiter & Schmidhuber, 1997) can also be used as $G$ to get the final prediction (Donahue et al., 2015), but it cannot capture low-level features across the frames, as mentioned in Lin et al. (2019).
A few works have recently been proposed to model temporal causality, using a bag-of-features scheme in TRN (Zhou et al., 2018), channel shifting in TSM (Lin et al., 2019), or depthwise convolutions in TAM (Fan et al., 2019). Different from these methods, in this work we hypothesize that an input-dependent fusion of frame-wise features will be beneficial for temporal understanding and efficiency, as the amount of temporal information depends on the dynamics and the type of events in the video. Hence we propose adaptive temporal fusion for action recognition. Adaptive Temporal Fusion. Consider a single 2D convolutional layer: $y_t = \phi(W_x * x_t + b_x)$, where $x_t \in \mathbb{R}^{c \times h \times w}$ denotes the input feature map at time step $t$ with $c$ channels and spatial dimensions $h \times w$, and $y_t \in \mathbb{R}^{c' \times h' \times w'}$ is the output feature map. $W_x \in \mathbb{R}^{c' \times k \times k \times c}$ denotes the convolution filters (with kernel size $k \times k$) and $b_x \in \mathbb{R}^{c'}$ is the bias. We use "$*$" for the convolution operation, and $\phi(\cdot)$ is the combination of batch normalization and a non-linear function (e.g., ReLU (Nair & Hinton, 2010)). We introduce a policy network, consisting of two fully-connected layers and a ReLU function, designed to adaptively select channels for keeping, reusing, or skipping. As shown in Figure 1, at time $t$ we first generate feature vectors $v_{t-1}, v_t \in \mathbb{R}^c$ from the history feature map $x_{t-1}$ and the current feature map $x_t$ via global average pooling. Then the policy network predicts:

$$p_t = g(v_{t-1}, v_t; \Theta_g), \quad (2)$$

where $p_t \in \{0, 1, 2\}^{c'}$ is a channel-wise policy (choosing "keep", "reuse", or "skip") used to generate the output feature map: if $p_t^i = 0$, the $i$-th channel of the output feature map will be computed via the normal convolution; if $p_t^i = 1$, it will reuse the $i$-th channel of the feature map $y_{t-1}$, which has already been computed at time $t-1$; otherwise, the $i$-th channel will simply be padded with zeros. Formally, this output feature map can be written as $\tilde{y}_t = f(y_{t-1}, y_t, p_t)$, where the $i$-th channel is:

$$\tilde{y}_t^i = \mathbb{1}[p_t^i = 0] \cdot y_t^i + \mathbb{1}[p_t^i = 1] \cdot y_{t-1}^i, \quad (3)$$

where $\mathbb{1}[\cdot]$ is the indicator function. In Figure 1, the policy network instructs the convolution layer to compute only the first and fourth channels, reuses the second channel of the history feature, and skips the third channel. Features from different time steps are adaptively fused along the channel dimension. Adaptive temporal fusion enables the 2D convolution to capture temporal information: its temporal receptive field grows linearly with the depth of the layers, as more features from different time steps are fused when going deeper into the network. Our novel design can be seen as a general methodology for many state-of-the-art 2D-CNN approaches: if we discard "skip" and use a predefined fixed policy, it becomes the online temporal fusion in Lin et al. (2019); if the policy only chooses between "skip" and "keep", it becomes a dynamic pruning method (Gao et al., 2018; Hua et al., 2019). Our design is a generalized approach taking both temporal modelling and efficiency into consideration. Complexity Analysis. To illustrate the efficiency of our framework, we compute the floating point operations (FLOPS), a hardware-independent metric widely used in the field of efficient action recognition1 (Wu et al., 2019b; Gao et al., 2020; Meng et al., 2020; Fan et al., 2019).
To account for the savings in the layers before and after the policy network, we add another convolution after $\tilde{y}_t$ with kernel $W_y \in \mathbb{R}^{c'' \times k' \times k' \times c'}$ and bias $b_y \in \mathbb{R}^{c''}$. The total FLOPS for each convolution will be:

$$\begin{cases} m_x = c' \cdot h' \cdot w' \cdot (k \cdot k \cdot c + 1) \\ m_y = c'' \cdot h'' \cdot w'' \cdot (k' \cdot k' \cdot c' + 1) \end{cases} \quad (4)$$

When the policy is applied, only those output channels used at time $t$ or going to be reused at time $t+1$ need to be computed in the first convolution layer, and only the channels not skipped at time $t$ count as input feature maps for the second convolution layer. Hence the overall FLOPS is:

$$M = \sum_{\tau=0}^{T-1}\left[\underbrace{\frac{1}{c'}\sum_{i=0}^{c'-1} \overbrace{\mathbb{1}\left[p_\tau^i \cdot (p_{\tau+1}^i - 1) = 0\right]}^{\text{keep at } \tau \text{ or reuse at } \tau+1} \cdot m_x}_{\text{FLOPS from the first conv at time } \tau} + \underbrace{\left(1 - \frac{1}{c'}\sum_{i=0}^{c'-1} \overbrace{\mathbb{1}\left(p_\tau^i = 2\right)}^{\text{skip at } \tau}\right) \cdot m_y}_{\text{FLOPS from the second conv at time } \tau}\right] \quad (5)$$

Thus, when the policy network skips more channels or reuses channels that were already computed at the previous time step, the FLOPS for those two convolution layers are reduced proportionally. Loss functions. We take the average of the frame-wise predictions as the video prediction and minimize:

$$L = \sum_{(x, y) \sim \mathcal{D}_{\mathrm{train}}}\left[-y \log(P(x)) + \lambda \cdot \sum_{i=0}^{B-1} M_i\right] \quad (6)$$

1Latency is another important measure of efficiency, which can be reduced via CUDA optimization for sparse convolution (Verelst & Tuytelaars, 2019). We leave it for future research. The first term is the cross entropy between the one-hot encoded ground-truth labels $y$ and the predictions $P(x)$. The second term is the FLOPS measure for all $B$ temporal fusion blocks in the network. In this way, our network learns to achieve both accuracy and efficiency at a trade-off controlled by $\lambda$. The discrete policies for "keep", "reuse", or "skip" shown in Eq. 3 and Eq. 5 make $L$ non-differentiable and hence hard to optimize. One common practice is to use a score-function estimator (e.g., REINFORCE (Glynn, 1990; Williams, 1992)) to avoid backpropagating through categorical samplings, but the high variance of such estimators makes training slow to converge (Wu et al., 2019a; Jang et al., 2016). As an alternative, we use the Gumbel-Softmax estimator to enable efficient end-to-end optimization. Training using the Gumbel-Softmax Estimator. Specifically, the policy network first generates a logit vector $q \in \mathbb{R}^3$ for each channel of the output feature map, and then we use a softmax to derive a normalized categorical distribution $\pi = \left\{r_i \,\middle|\, r_i = \frac{\exp(q_i)}{\exp(q_0) + \exp(q_1) + \exp(q_2)}\right\}$. With the Gumbel-Max trick, discrete samples from the distribution $\pi$ can be drawn as (Jang et al., 2016): $\hat{r} = \arg\max_i (\log r_i + G_i)$, where $G_i = -\log(-\log U_i)$ follows a standard Gumbel distribution with i.i.d. $U_i$ sampled from the uniform distribution $\mathrm{Unif}(0, 1)$. Since the argmax operator is not differentiable, the Gumbel-Softmax distribution is used as a continuous approximation. In the forward pass we represent the discrete sample $\hat{r}$ as a one-hot encoded vector, and in back-propagation we relax it to a real-valued vector $R = \{R_0, R_1, R_2\}$ via softmax as follows:

$$R_i = \frac{\exp\left((\log r_i + G_i)/\tau\right)}{\sum_{j=0}^{2} \exp\left((\log r_j + G_j)/\tau\right)}, \quad (7)$$

where $\tau$ is a temperature factor controlling the "smoothness" of the distribution: as $\tau \to \infty$, $R$ converges to a uniform distribution, and as $\tau \to 0$, $R$ becomes a one-hot vector. We set $\tau = 0.67$ during training. Network Architectures and Notations. Our adaptive temporal fusion module can be easily plugged into any existing 2D-CNN model.
Specifically, we focus on BN-Inception (Ioffe & Szegedy, 2015), ResNet (He et al., 2016), and EfficientNet (Tan & Le, 2019). For BN-Inception, we add a policy network between every two consecutive Inception modules. For ResNet/EfficientNet, we insert the policy network between the first and second convolution layers in each "residual block"/"inverted residual block". We denote our model as $\mathrm{AdaFuse}^{\mathrm{Method}}_{\mathrm{Backbone}}$, where "Backbone" is chosen from {"R18" (ResNet18), "R50" (ResNet50), "Inc" (BN-Inception), "Eff" (EfficientNet)} and "Method" can be {"TSN", "TSM", "TSM+Last"}. More details can be found in the following section.
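A minimal sketch of the adaptive temporal fusion of Eqs. (2), (3), and (7) follows; the hidden size of the policy network and the use of PyTorch's built-in straight-through `gumbel_softmax` are assumptions, and a real implementation would also skip computing the convolution for reused or skipped channels rather than masking after the fact:

```python
import torch
import torch.nn.functional as F

class AdaptiveFusion(torch.nn.Module):
    """Policy network (Eq. 2) plus channel-wise fusion (Eq. 3), trained with
    straight-through Gumbel-Softmax samples (Eq. 7). Sizes are illustrative."""
    def __init__(self, c_in, c_out, hidden=64, tau=0.67):
        super().__init__()
        self.tau = tau
        self.policy = torch.nn.Sequential(
            torch.nn.Linear(2 * c_in, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, 3 * c_out))

    def forward(self, x_prev, x_t, y_prev, y_t):
        # v_{t-1}, v_t: global-average-pooled input feature maps
        v = torch.cat([x_prev.mean(dim=(2, 3)), x_t.mean(dim=(2, 3))], dim=1)
        logits = self.policy(v).view(x_t.shape[0], -1, 3)      # (B, c', 3)
        p = F.gumbel_softmax(logits, tau=self.tau, hard=True)  # one-hot keep/reuse/skip
        keep, reuse = p[..., 0, None, None], p[..., 1, None, None]
        return keep * y_t + reuse * y_prev                     # skipped channels become zeros

# Usage: fuse a conv layer's outputs at steps t-1 and t (B=2, c=16, c'=32, 56x56).
fuse = AdaptiveFusion(c_in=16, c_out=32)
x_prev, x_t = torch.randn(2, 16, 56, 56), torch.randn(2, 16, 56, 56)
y_prev, y_t = torch.randn(2, 32, 56, 56), torch.randn(2, 32, 56, 56)
y_fused = fuse(x_prev, x_t, y_prev, y_t)                       # (2, 32, 56, 56)
```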
The paper presented an adaptive inference model for efficient action recognition in videos. The core of the model is the dynamic gating of feature channels that controls the fusion between two frame features, whereby the gating is conditioned on the input video and helps to reduce the computational cost at runtime. The proposed model was evaluated on several video action datasets and compared against a number of existing deep models. The results demonstrated a good efficiency-accuracy trade-off for the proposed model.
On the Geometry of Deep Bayesian Active Learning
1 INTRODUCTION. A lack of training labels restricts the performance of deep neural networks (DNNs), even though the prices of GPU resources keep falling. Recently, leveraging the abundance of unlabeled data has become a potential solution to relieve this bottleneck, whereby expert knowledge is used to annotate unlabeled data. In this setting, the deep learning community introduced active learning (AL) (Gal et al., 2017), which maximizes the model uncertainty (Ashukha et al., 2019; Lakshminarayanan et al., 2017) to acquire a set of highly informative or representative unlabeled data and solicits experts' annotations. During this AL process, the learning model tries to achieve a desired accuracy using minimal data labeling. Recent developments in model uncertainty in many fields, such as Bayesian neural networks (Blundell et al., 2015), Monte-Carlo (MC) dropout (Gal & Ghahramani, 2016), and Bayesian core-set construction (Sener & Savarese, 2018), show that new scenarios arise in deep Bayesian AL (Pinsler et al., 2019; Kirsch et al., 2019). Bayesian AL (Golovin et al., 2010; Jedoui et al., 2019) presents an expressive probabilistic interpretation of model uncertainty (Gal & Ghahramani, 2016). Theoretically, for simple regression models such as linear, logistic, and probit regression, AL admits closed-form updates of a sparse subset that maximally reduces the uncertainty of the posteriors over the regression parameters (Pinsler et al., 2019). However, for a DNN model, optimizing massive numbers of training parameters is not easily tractable. Bayesian approximation therefore provides alternatives, including importance sampling (Doucet et al., 2000) and Frank-Wolfe optimization (Vavasis, 1992). With importance sampling, a typical approach is to express the information gain in terms of the predictive entropy of the model; this is called Bayesian active learning by disagreements (BALD) (Houlsby et al., 2011). BALD has two interpretations: model uncertainty estimation and core-set construction. To estimate the model uncertainty, a greedy strategy is applied to select the data that maximize the parameter disagreements between the current training model and its subsequent updates, as in (Gal et al., 2017). However, naively interacting with BALD using an uninformative prior (Strachan & Van Dijk, 2003; Price & Manson, 2002), which can be created to reflect a balance among outcomes when no information is available, leads to unstable, biased acquisitions (Gao et al., 2020), e.g., with insufficient prior labels. Moreover, the similarity or consistency of these acquisitions to previously acquired samples brings redundant information to the model and decelerates its training. Core-set construction (Campbell & Broderick, 2018) avoids the greedy interaction with the model by capturing characteristics of the data distribution. By modeling the complete data posterior over the distributions of parameters, BALD can be seen as a core-set construction process on a sphere (Kirsch et al., 2019), which seamlessly solicits a compact subset to approximate the input data distribution and efficiently mitigates the sensitivity to an uninformative prior and to redundant information. From the view of geometry, the updates of core-set construction are usually optimized with spherical geodesics, as in (Nie et al., 2013; Wang et al., 2019).
Once the core-set is obtained, deep AL immediately seeks annotations from experts and starts the training. However, data points located at the boundary regions of the distribution, which usually follow a uniform distribution, cannot be highly-representative candidates for the core-set. Therefore, constructing the core-set on a sphere may not be the optimal choice for deep AL. This paper presents a novel AL framework, namely Geometric BALD (GBALD), built on a geometric interpretation of BALD: interpreting BALD as core-set construction on an ellipsoid, it initializes an effective representation to drive a DNN model. The goal is to seek significant accuracy improvements against an uninformative prior and redundant information. Figure 1 describes this two-stage framework. In the first stage, geometric core-set construction on an ellipsoid initializes effective acquisitions to start a DNN model regardless of the uninformative prior. Taking the core-set as the input features, the next stage ranks the batch acquisitions of model uncertainty according to their geometric representativeness, and then solicits some highly-representative examples from the batch. With the representation constraints, the ranked acquisitions reduce the probability of sampling near the previous acquisitions, preventing redundant acquisitions. To guarantee the improvement, our generalization analysis shows that the lower bound on the generalization error of AL with the ellipsoid is provably tighter than that of AL with the sphere. Achieving a nearly zero generalization error by AL with the ellipsoid is also proven to have higher probability. Contributions of this paper can be summarized from geometric, algorithmic, and theoretical perspectives. • Geometrically, our key innovation is to construct the core-set on an ellipsoid, not the typical sphere, preventing its updates towards the boundary regions of the distributions. • In terms of algorithm design, from a Bayesian perspective, we propose a two-stage framework that sequentially introduces the core-set representation and model uncertainty, strengthening their performance "independently". Moreover, differently from the typical BALD optimizations, we present geometric solvers to construct the core-set and estimate model uncertainty, which result in a different view of Bayesian active learning. • Theoretically, to guarantee those improvements, our generalization analysis proves that, compared to the typical Bayesian spherical interpretation, geodesic search with the ellipsoid derives a tighter lower error bound and achieves a higher probability of obtaining a nearly zero error. See Appendix B. The rest of this paper is organized as follows. In Section 2, we first review the related work. We then elaborate BALD and GBALD in Sections 3 and 4, respectively. Experimental results are presented in Section 5. Finally, we conclude this paper in Section 6. 2 RELATED WORK . Model uncertainty. In the deep learning community, AL (Cohn et al., 1994) was introduced to improve the training of a DNN model by annotating unlabeled data, where the data which maximize the model uncertainty (Lakshminarayanan et al., 2017) are the primary acquisitions. For example, in ensemble deep learning (Ashukha et al., 2019),
out-of-domain uncertainty estimation selects those data which do not follow the same distribution as the input training data, while in-domain uncertainty draws the data from the original input distribution, producing reliable probability estimates. Gal & Ghahramani (2016) use MC dropout to estimate predictive uncertainty for approximating a Bayesian convolutional neural network. Lakshminarayanan et al. (2017) estimate predictive uncertainty using a proper scoring rule as the training criterion to train a DNN. Bayesian AL. Taking a Bayesian perspective (Golovin et al., 2010), AL can be deemed as minimizing the Bayesian posterior risk with multiple label acquisitions over the input unlabeled data. A potentially informative approach is to reduce the uncertainty about the parameters using Shannon's entropy (Tang et al., 2002). This can be interpreted as seeking the acquisitions for which the Bayesian parameters under the posterior disagree about the outcome the most, so this acquisition algorithm is referred to as Bayesian active learning by disagreement (BALD) (Houlsby et al., 2011). Deep AL. Recently, deep Bayesian AL has attracted much attention. Gal et al. (2017) proposed to combine BALD with a DNN to improve the training. The unlabeled data which maximize the model uncertainty provide positive feedback. However, the method needs to repeatedly update the model until the acquisition budget is exhausted. To improve the acquisition efficiency, batch sampling with BALD is applied in (Kirsch et al., 2019; Pinsler et al., 2019). In BatchBALD, Kirsch et al. (2019) developed a tractable approximation to the mutual information of one batch of unlabeled data and the current model parameters. However, the uncertainty evaluations of Bayesian AL, whether in single or batch acquisitions, all take greedy strategies, which lead to computationally infeasible or erratic parameter estimations. For deep Bayesian AL, a lack of interaction with the DNN cannot maximally drive model performance, as in (Pinsler et al., 2019; Sener & Savarese, 2018), etc. 3 BALD . BALD has two different interpretations: model uncertainty estimation and core-set construction. We briefly introduce them in this section. 3.1 MODEL UNCERTAINTY ESTIMATION . We consider a discriminative model $p(y \mid x, \theta)$ parameterized by $\theta$ that maps $x \in \mathcal{X}$ into an output distribution over a set of $y \in \mathcal{Y}$. Given an initial labeled (training) set $\mathcal{D}_0 \subseteq \mathcal{X} \times \mathcal{Y}$, Bayesian inference over this parameterized model estimates the posterior $p(\theta \mid \mathcal{D}_0)$, i.e. it estimates $\theta$ by repeatedly updating $\mathcal{D}_0$. AL adopts this setting from a Bayesian view. With AL, the learner can choose unlabeled data from $\mathcal{D}_u = \{x_i\}_{i=1}^{N} \subseteq \mathcal{X}$ to observe the outputs of the current model, maximizing the uncertainty of the model parameters. Houlsby et al. (2011) proposed a greedy strategy termed BALD to update $\mathcal{D}_0$ by estimating a desired data point $x^*$ that maximizes the decrease in expected posterior entropy:
$$x^* = \arg\max_{x \in \mathcal{D}_u} \; \mathrm{H}[\theta \mid \mathcal{D}_0] - \mathbb{E}_{y \sim p(y \mid x, \mathcal{D}_0)}\big[\mathrm{H}[\theta \mid x, y, \mathcal{D}_0]\big], \qquad (1)$$
where the labeled and unlabeled sets are updated by $\mathcal{D}_0 = \mathcal{D}_0 \cup \{x^*, y^*\}$ and $\mathcal{D}_u = \mathcal{D}_u \setminus \{x^*\}$, and $y^*$ denotes the output of $x^*$. In deep AL, $y^*$ can be annotated as a label from experts and $\theta$ yields a DNN model. 3.2 CORE-SET CONSTRUCTION . Let $p(\theta \mid \mathcal{D}_0)$ be updated by its log posterior $\log p(\theta \mid \mathcal{D}_0, x^*)$, $y^* \in \{y_i\}_{i=1}^{N}$, and assume the outputs are conditionally independent given the inputs, i.e.
$p(y^* \mid x^*, \mathcal{D}_0) = \int_{\theta} p(y^* \mid x^*, \theta)\, p(\theta \mid \mathcal{D}_0)\, d\theta$. Then we have the complete data log posterior, following (Pinsler et al., 2019):
$$\mathbb{E}_{y^*}[\log p(\theta \mid \mathcal{D}_0, x^*, y^*)] = \mathbb{E}_{y^*}[\log p(\theta \mid \mathcal{D}_0) + \log p(y^* \mid x^*, \theta) - \log p(y^* \mid x^*, \mathcal{D}_0)] = \log p(\theta \mid \mathcal{D}_0) + \mathbb{E}_{y^*}\big[\log p(y^* \mid x^*, \theta) + \mathrm{H}[y^* \mid x^*, \mathcal{D}_0]\big] = \log p(\theta \mid \mathcal{D}_0) + \sum_{i=1}^{N} \Big( \mathbb{E}_{y_i}\big[\log p(y_i \mid x_i, \theta) + \mathrm{H}[y_i \mid x_i, \mathcal{D}_0]\big] \Big). \qquad (2)$$
The key idea of core-set construction is to approximate the log posterior of Eq. (2) by a subset $\mathcal{D}'_u \subseteq \mathcal{D}_u$ such that $\mathbb{E}_{Y_u}[\log p(\theta \mid \mathcal{D}_0, \mathcal{D}_u, Y_u)] \approx \mathbb{E}_{Y'_u}[\log p(\theta \mid \mathcal{D}_0, \mathcal{D}'_u, Y'_u)]$, where $Y_u$ and $Y'_u$ denote the predictive labels of $\mathcal{D}_u$ and $\mathcal{D}'_u$, respectively, under the Bayesian discriminative model, that is, $p(Y_u \mid \mathcal{D}_u, \mathcal{D}_0) = \int_{\theta} p(Y_u \mid \mathcal{D}_u, \theta)\, p(\theta \mid \mathcal{D}_0)\, d\theta$ and $p(Y'_u \mid \mathcal{D}'_u, \mathcal{D}_0) = \int_{\theta} p(Y'_u \mid \mathcal{D}'_u, \theta)\, p(\theta \mid \mathcal{D}_0)\, d\theta$. Here $\mathcal{D}'_u$ can be indicated by a core-set (Pinsler et al., 2019) that highly represents $\mathcal{D}_u$. Optimization tricks such as Frank-Wolfe optimization (Vavasis, 1992) can then be adopted to solve this problem. Motivations. Eqs. (1) and (2) provide the Bayesian rules of BALD for model uncertainty and core-set construction, respectively, which have attracted the attention of the deep learning community. However, the two interpretations of BALD are limited by 1) redundant information and 2) an uninformative prior, where one major reason for these two issues is a poor initialization of the prior, i.e. $p(\mathcal{D}_0 \mid \theta)$. For example, an unbalanced label initialization of $\mathcal{D}_0$ usually leads to an uninformative prior, which further drives the acquisitions of AL to select unlabeled data from one or a few fixed classes; highly-biased results with redundant information (Gao et al., 2020) are then inevitable. Therefore, these two limitations affect each other. 4 GBALD GBALD consists of two components: initial acquisitions based on core-set construction and model uncertainty estimation with those initial acquisitions.
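As a concrete illustration of the acquisition rule in Eq. (1), the sketch below scores a pool of unlabeled points with the equivalent predictive-entropy (mutual information) form of BALD, approximating the posterior with T stochastic forward passes (e.g. MC dropout, as in Gal et al., 2017). The function name, array shapes, and the greedy selection line are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def bald_scores(mc_probs):
    """Mutual-information form of the BALD score in Eq. (1):
    I(theta; y | x, D0) = H[E_t p_t(y|x)] - E_t H[p_t(y|x)].
    `mc_probs` has shape (T, N, C): T stochastic forward passes
    over N pool points with C classes."""
    eps = 1e-12
    mean_p = mc_probs.mean(axis=0)                                  # approximates p(y | x, D0)
    h_mean = -(mean_p * np.log(mean_p + eps)).sum(axis=-1)          # predictive entropy
    mean_h = -(mc_probs * np.log(mc_probs + eps)).sum(axis=-1).mean(axis=0)  # expected entropy
    return h_mean - mean_h

# greedy acquisition over the unlabeled pool D_u:
# x_star_index = np.argmax(bald_scores(mc_probs))
```

In GBALD, such uncertainty scores would only be computed in the second stage, after the geometric core-set has initialized the model, and would then be re-ranked by geometric representativeness.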
This paper is basically unreadable. The sentence structure / grammar is strange, and if that were the only issue it could be overlooked. The paper also does not describe or explain the motivation and interpretation of anything, but instead just lists equations. For example, eta is the parameter that projects a spherical geodesic onto the ellipsoid one, and an ellipsoid geodesic prevents updates of the core-set towards the boundary regions where the characteristics of the distribution cannot be captured. However, what are these characteristics, and how can they motivate how to choose eta?
SP:8b0cee077c1bcdf9a546698dc041654ca6a222ed
Directional graph networks
1 INTRODUCTION . One of the most important distinctions between convolutional neural networks (CNNs) and graph neural networks (GNNs) is that CNNs allow for any convolutional kernel, while most GNN methods are limited to symmetric kernels (also called isotropic kernels in the literature) (Kipf & Welling, 2016; Xu et al., 2018a; Gilmer et al., 2017). There are some implementations of asymmetric kernels using gated mechanisms (Bresson & Laurent, 2017; Veličković et al., 2017), motif attention (Peng et al., 2019), edge features (Gilmer et al., 2017), or the 3D structure of molecules for message passing (Klicpera et al., 2019). However, to the best of our knowledge, there are currently no methods that allow asymmetric graph kernels that depend on the full graph structure or directional flows; they depend only on local structures or local features. This stands in opposition to images, which exhibit canonical directions: the horizontal and vertical axes. The absence of an analogous concept in graphs makes it difficult to define directional message passing and to produce an analogue of the directional frequency filters (or Gabor filters) widely present in image processing (Olah et al., 2020). We propose a novel idea for GNNs: use vector fields in the graph to define directions for the propagation of information, with an overview of the paper presented in Figure 1. Hence, the aggregation or message passing will be projected onto these directions so that the contribution of each neighbouring node nv will be weighted by its alignment with the vector fields at the receiving node nu. This enables our method to propagate information via directional derivatives or smoothing of the features. We also explore using the gradients of the low-frequency eigenvectors of the Laplacian of the graph φk, since they exhibit interesting properties (Bronstein et al., 2017; Chung et al., 1997). In particular, they can be used to define optimal partitions of the nodes in a graph, to give a natural ordering (Levy, 2006), and to find the dominant directions of the graph diffusion process (Chung & Yau, 2000). Further, we show that they generalize the horizontal and vertical directional flows in a grid (see figure 2), allowing them to guide the aggregation and mimic the asymmetric and directional kernels present in computer vision. In fact, we demonstrate mathematically that our work generalizes CNNs by reproducing all convolutional kernels of radius R in an n-dimensional grid, while also bringing the powerful data augmentation capabilities of reflection, rotation or distortion of the directions. We further show that our directional graph network (DGN) model theoretically and empirically allows for efficient message passing across distant communities, which reduces the well-known problem of over-smoothing, and aligns well with the need for independent aggregation rules (Corso et al., 2020). Alternative methods reduce the impact of over-smoothing by using skip connections (Luan et al., 2019), global pooling (Alon & Yahav, 2020), or randomly dropping edges during training time (Rong et al., 2020), but without solving the underlying problem. In fact, we also prove that DGN is more discriminative than standard GNNs with respect to the Weisfeiler-Lehman 1-WL test, showing that the reduction of over-smoothing is accompanied by an increase in expressiveness.
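Since the direction fields just described are built from the gradients of low-frequency Laplacian eigenvectors, the following is a minimal sketch of how such fields could be computed, using the graph gradient (∇x)(i, j) = x(j) − x(i) formalized later in Section 2.2. The function name and the choice of scipy routines are our assumptions; the paper does not prescribe an implementation.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import laplacian
from scipy.sparse.linalg import eigsh

def eigenvector_fields(adj, k=3):
    """Direction fields as gradients of the k lowest-frequency
    (non-constant) Laplacian eigenvectors phi_m: on each directed
    edge (i, j), the field value is phi_m[j] - phi_m[i]."""
    A = csr_matrix(adj)
    L = laplacian(A).asfptype()
    # k+1 smallest eigenpairs; the first eigenvector is constant and skipped
    _, vecs = eigsh(L, k=k + 1, which='SM')
    rows, cols = A.nonzero()
    fields = [vecs[cols, m] - vecs[rows, m] for m in range(1, k + 1)]
    return rows, cols, fields  # one field value per directed edge
```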
Our method distinguishes itself from other spectral GNNs since the literature usually uses the low frequencies to estimate local Fourier transforms in the graph (Levie et al., 2018; Xu et al., 2019). Instead, we do not try to approximate the Fourier transform, but only to define a directional flow at each node and guide the aggregation. 2 THEORETICAL DEVELOPMENT . 2.1 INTUITIVE OVERVIEW . One of the biggest limitations of current GNN methods compared to CNNs is the inability to do message passing in a specific direction, such as the horizontal one in a grid graph. In fact, it is difficult to define directions or coordinates based solely on the shape of the graph. The lack of directions strongly limits the discriminative abilities of GNNs to understand local structures and simple feature transformations. Most GNNs are invariant to the permutation of the neighbours' features, so the nodes' received signal is not influenced by swapping the features of 2 neighbours. Therefore, several layers in a deep network will be employed to understand these simple changes instead of being used for higher level features, thus over-squashing the message sent between 2 distant nodes (Alon & Yahav, 2020). In this work, one of the main contributions is the realisation that low-frequency eigenvectors of the Laplacian can overcome this limitation by providing a variety of intuitive directional flows. As a first example, taking a grid-shaped graph of size N × M with N/2 < M < N, we find that the eigenvector associated with the smallest non-zero eigenvalue increases in the direction of the width N and the second one increases in the direction of the height M. This property generalizes to n-dimensional grids and motivated the use of gradients of eigenvectors as preferred directions for general graphs. We validated this intuition by looking at the flow of the gradient of the eigenvectors for a variety of graphs, as shown in figure 2. For example, in the Minnesota map, the first 3 non-constant eigenvectors produce logical directions, namely South/North, suburb/city, and West/East. Another important contribution also noted in figure 2 is the ability to define any kind of direction based on prior knowledge of the problem. Hence, instead of relying on eigenvectors to find directions in a map, we can simply use the cardinal directions or the rush-hour traffic flow. 2.2 VECTOR FIELDS IN A GRAPH . Based on a recent review from Bronstein et al. (2017), this section presents the ideas of differential geometry applied to graphs, with the goal of finding proper definitions of scalar products, gradients and directional derivatives. Let G = (V, E) be a graph with V the set of vertices and E ⊂ V × V the set of edges. The graph is undirected, meaning that (i, j) ∈ E iff (j, i) ∈ E. Define the vector spaces $L^2(V)$ and $L^2(E)$ as the sets of maps $V \to \mathbb{R}$ and $E \to \mathbb{R}$, with $x, y \in L^2(V)$ and $F, H \in L^2(E)$, and scalar products
$$\langle x, y \rangle_{L^2(V)} := \sum_{i \in V} x_i y_i, \qquad \langle F, H \rangle_{L^2(E)} := \sum_{(i,j) \in E} F(i,j) H(i,j). \qquad (1)$$
Think of E as the "tangent space" to V and of $L^2(E)$ as the set of "vector fields" on the space V, with each row $F_{i,:}$ representing a vector at the i-th node. Define the pointwise scalar product as the map $L^2(E) \times L^2(E) \to L^2(V)$ taking 2 vector fields and returning their inner product at each point of V; at node i it is given by equation 2.
$$\langle F, H \rangle_i := \sum_{j : (i,j) \in E} F_{i,j} H_{i,j} \qquad (2)$$
In equation 3, we define the gradient ∇ as a mapping $L^2(V) \to L^2(E)$ and the divergence div as a mapping $L^2(E) \to L^2(V)$, thus leading to an analogue of the directional derivative in equation 4.
$$(\nabla x)(i,j) := x(j) - x(i), \qquad (\mathrm{div}\, F)_i := \sum_{j : (i,j) \in E} F(i,j) \qquad (3)$$
Definition 1. The directional derivative of the function x on the graph G in the direction of the vector field F̂, where each vector is of unit norm, is
$$D_{\hat{F}} x(i) := \langle \nabla x, \hat{F} \rangle_i = \sum_{j : (i,j) \in E} (x(j) - x(i)) \hat{F}_{i,j}. \qquad (4)$$
|F| will denote the absolute value of F and $\|F_{i,:}\|_{L^p}$ the $L^p$-norm of the i-th row of F. We also define the forward/backward directions as the positive/negative parts of the field, $F^{\pm}$. 2.3 DIRECTIONAL SMOOTHING AND DERIVATIVES . Next, we show how the vector field F is used to guide the graph aggregation by projecting the incoming messages. Specifically, we define the weighted aggregation matrices $B_{av}$ and $B_{dx}$ that allow us to compute the directional smoothing and directional derivative of the node features. The directional average matrix $B_{av}$ is the weighted aggregation matrix such that all weights are positive and all rows have an $L^1$-norm equal to 1, as shown in equation 5 and theorem 2.1, with a proof in appendix C.1.
$$B_{av}(F)_{i,:} = \frac{|F_{i,:}|}{\|F_{i,:}\|_{L^1} + \epsilon} \qquad (5)$$
The variable $\epsilon$ is an arbitrarily small positive number used to avoid floating-point errors. The $L^1$-norm denominator is a local row-wise normalization. The aggregator works by assigning a large weight to the elements in the forward or backward direction of the field, while assigning a small weight to the other elements, with a total weight of 1. Theorem 2.1 (Directional smoothing). The operation $y = B_{av} x$ is the directional average of x, in the sense that $y_u$ is the mean of $x_v$, weighted by the direction and amplitude of F. The directional derivative matrix $B_{dx}$ is defined in (6) and theorem 2.2, with the proof in appendix C.2. Again, the denominator is a local row-wise normalization but can be replaced by a global normalization. diag(a) is a square, diagonal matrix with diagonal entries given by a. The aggregator works by subtracting the projected backward message from the forward message (similar to a centered derivative), with an additional diagonal term to balance both directions.
$$B_{dx}(F)_{i,:} = \hat{F}_{i,:} - \mathrm{diag}\Big(\sum_j \hat{F}_{:,j}\Big)_{i,:}, \qquad \hat{F}_{i,:} = \frac{F_{i,:}}{\|F_{i,:}\|_{L^1} + \epsilon} \qquad (6)$$
Theorem 2.2 (Directional derivative). Suppose F̂ has rows of unit $L^1$ norm. The operation $y = B_{dx}(\hat{F}) x$ is the centered directional derivative of x in the direction of F, in the sense of equation 4, i.e.
$$y = D_{\hat{F}} x = \Big(\hat{F} - \mathrm{diag}\Big(\sum_j \hat{F}_{:,j}\Big)\Big) x.$$
These aggregators are directional, interpretable and complementary, making them ideal choices for GNNs. We discuss the choice of aggregators in more detail in appendix A, while also providing alternative aggregation matrices such as the center-balanced smoothing, the forward-copy, the phantom zero-padding, and the hardening of the aggregators using softmax/argmax on the field. We further provide a visual interpretation of the $B_{av}$ and $B_{dx}$ aggregators in figure 3. Interestingly, we also note in appendix A.1 that $B_{av}$ and $B_{dx}$ yield respectively the mean and Laplacian aggregations when F is a vector field whose entries are constant, $F_{ij} = \pm C$.
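A minimal sketch of the two aggregators in Eqs. (5) and (6) follows; it builds dense matrices for readability, whereas a practical implementation would use sparse operations. The function name and the edge-list input format are our assumptions.

```python
import numpy as np

def directional_aggregators(rows, cols, field, n, eps=1e-8):
    """Build the directional average B_av (Eq. 5) and the directional
    derivative B_dx (Eq. 6) as dense n x n matrices, from one field
    value per directed edge (rows[e], cols[e])."""
    F = np.zeros((n, n))
    F[rows, cols] = field
    l1 = np.abs(F).sum(axis=1, keepdims=True)        # row-wise L1 norms
    B_av = np.abs(F) / (l1 + eps)                    # positive weights, rows sum to ~1
    F_hat = F / (l1 + eps)                           # signed, unit-L1 rows
    B_dx = F_hat - np.diag(F_hat.sum(axis=1))        # centered directional derivative
    return B_av, B_dx

# y_av = B_av @ x   # directional smoothing of node features x
# y_dx = B_dx @ x   # directional derivative of x along the field
```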
Figure 3: Illustration of how the directional aggregation works at a node nv, with the arrows representing the direction and intensity of the field F. The panels depict the directional smoothing aggregation Bav(F)x and the directional derivative aggregation Bdx(F)x on graph features in the neighbourhood of nv, where v is the node receiving the message, u1, u2, u3 are its neighbouring nodes, xu is the feature at node u, and Fv,u is the directional vector field between nodes v and u.
The authors propose a convolution as message passing of node features over edges, where messages are aggregated with weights given by a "direction" edge field. Furthermore, the authors propose to use the gradients of Laplace eigenfunctions as direction fields. Presumably, the aggregation is done with different direction fields derived from the Laplace eigenfunctions with lowest eigenvalues, which are then linearly combined with learnable parameters. Doing so allows their graph network to behave more like a conventional CNN, in which the kernels have different parameters for signals from different directions. The authors achieve good results on several benchmarks. Furthermore, the authors prove that their method reduces to a conventional CNN on a rectangular grid and have theoretical results that suggest that their method suffers less from the "over-smoothing" and "over-squashing" problems.
SP:09bbd1a342033a65e751a8878c23e3fa6facc636
Signal Coding and Reconstruction using Spike Trains
In many animal sensory pathways, the transformation from external stimuli to spike trains is essentially deterministic. In this context, a new mathematical framework for coding and reconstruction, based on a biologically plausible model of the spiking neuron, is presented. The framework considers encoding of a signal through spike trains generated by an ensemble of neurons via a standard convolve-then-threshold mechanism, albeit with a wide variety of convolution kernels. Neurons are distinguished by their convolution kernels and threshold values. Reconstruction is posited as a convex optimization minimizing energy. Formal conditions under which perfect and approximate reconstruction of the signal from the spike trains is possible are then identified. Coding experiments on a large audio dataset are presented to demonstrate the strength of the framework. 1 INTRODUCTION . In biological systems, sensory stimuli are communicated to the brain primarily via ensembles of discrete events that are spatiotemporally compact electrical disturbances generated by neurons, otherwise known as spikes. Spike train representations of signals, when sparse, are not only intrinsically energy efficient, but can also facilitate downstream computation (6; 10). In their seminal work, Olshausen and Field (13) showed how efficient codes can arise from learning sparse representations of natural stimulus statistics, resulting in striking similarities with observed biological receptive fields. (19) developed a biophysically motivated spiking neural network which for the first time predicted the full diversity of V1 simple cell receptive field shapes when trained on natural images. Although these results signify substantial progress, an effective end-to-end signal processing framework that deterministically represents signals via spike train ensembles is yet to be laid out. Here we present a new framework for coding and reconstruction leveraging a biologically plausible coding mechanism which is a superset of the standard leaky integrate-and-fire neuron model (5). Our proposed framework identifies reconstruction guarantees for a very general class of signals—those with finite rate of innovation (18)—as shown in our perfect and approximate reconstruction theorems. Most other classes, e.g. bandlimited signals, are subsets of this class. The proposed technique first formulates reconstruction as an optimization that minimizes the energy of the reconstructed signal subject to consistency with the spike train, and then solves it in closed form. We then identify a general class of signals for which reconstruction is provably perfect under certain ideal conditions. Subsequently, we present a mathematical bound on the error of an approximate reconstruction when the model deviates from those ideal conditions. Finally, we present simulation experiments coding a large dataset of audio signals that demonstrate the efficacy of the framework. In a separate set of experiments on a smaller subset of audio signals we compare our framework with existing sparse coding algorithms, viz. matching pursuit and orthogonal matching pursuit, establishing the strength of our technique. The remainder of the paper is structured as follows. In Sections 2 and 3 we introduce the coding and decoding frameworks. Section 4 identifies the class of signals for which perfect reconstruction is achievable if certain ideal conditions are met.
In Section 5 we discuss how in practice those ideal conditions can be approached and provide a mathematical bound for approximate reconstruction. Simulation results are presented in Section 6. We conclude in Section 8. 2 CODING . The general class of deterministic mappings (i.e., the set of all nonlinear operators) from continuous time signals to spike trains is difficult to characterize because the space of all spike trains does not lend itself to a natural topology that is universally embraced. The result is that simple characterizations, such as the set of all continuous operators, cannot be posited in a manner that has general consensus. To resolve this issue, we take a cue from biological systems. In most animal sensory pathways, the external stimulus passes through a series of transformations before being turned into spike trains (17). For example, the visual signal in the retina is processed by multiple layers of non-spiking horizontal, amacrine and bipolar cells, before being converted into spike trains by the retinal ganglion cells. Accordingly, we can consider the set of transformations that pass via an intermediate continuous time signal which is then transformed into a spike train through a stereotyped mapping where spikes mark threshold crossings. The complexity of the operator now lies in the mapping from the continuous time input signal to the continuous time intermediate signal. Since any time invariant, continuous, nonlinear operator with fading memory can be approximated by a finite Volterra series operator (2), this general class of nonlinear operators from continuous time signals to spike trains can be modeled as the composition of a finite Volterra series operator and a neuronal thresholding operation to generate a spike train. Here, the simplest subclass of these transformations is considered: the case where the Volterra series operator has a single causal, bounded-time, linear term, the output of which is composed with a thresholding operation of a potentially time varying threshold. The overall operator from the input signal to the spike train remains nonlinear due to the thresholding operation. The code generated by an ensemble of such transformations, corresponding to an ensemble of spike trains, is explored. Formally, we assume the input signal $X(t)$ to be a bounded square integrable function over the compact interval $[0, T]$ for some $T \in \mathbb{R}^+$, i.e., we are interested in the class of input signals $\mathcal{F} = \{X(t) \mid X(t) \in L^2[0, T]\}$. Since the framework involves signal snippets of arbitrary length, this choice of T is without loss of generality. We assume an ensemble of convolution kernels $\mathcal{K} = \{K^j \mid j \in \mathbb{Z}^+, j \le n\}$, consisting of n kernels $K^j$, $j = 1, \ldots, n$. We assume that $K^j(t)$ is a continuous function on a bounded time interval $[0, T]$, i.e. $\forall j \in \{1, \ldots, n\}$, $K^j(t) \in C[0, T]$, $T \in \mathbb{R}^+$. Finally, we assume that $K^j$ has a time varying threshold denoted by $T^j(t)$.
The ensemble of convolution kernels $\mathcal{K}$ encodes a given input signal $X(t)$ into a sequence of spikes $\{(t_i, K^{j_i})\}$, where the $i$-th spike is produced by the $j_i$-th kernel $K^{j_i}$ at time $t_i$ if and only if
$$\int X(\tau) K^{j_i}(t_i - \tau)\, d\tau = T^{j_i}(t_i).$$
In our experiments a specific threshold function is assumed, in which the time varying threshold $T^j(t)$ of the $j$-th kernel remains constant at $C^j$ until that kernel produces a spike, at which time an after-hyperpolarization potential (ahp) is introduced to raise the threshold to a high value $M^j \gg C^j$, which then drops back linearly to its original value within a refractory period $\delta^j$. Stated formally,
$$T^j(t) = \begin{cases} C^j, & t - \delta^j > t^j_l(t) \\ M^j - \dfrac{(t - t^j_l(t))(M^j - C^j)}{\delta^j}, & t - \delta^j \le t^j_l(t) \end{cases} \qquad (1)$$
where $t^j_l(t)$ denotes the time of the last spike generated by $K^j$ prior to time $t$. 3 DECODING . How rich is the coding mechanism just described? We can investigate this question formally by positing a decoding module. The objective of the decoding module is to reconstruct the original signal from the encoded ensemble of spike trains. It is worthwhile to mention that, for signals to be communicated properly in our proposed framework, the decoding module needs to be designed so that it can operate solely on the spike train data handed over by the encoding module, without explicit access to the input signal itself. Considering the prospect of the invertibility of the coding scheme, we seek a signal that satisfies the same set of constraints as the original signal when generating all spikes with respect to the set of kernels in ensemble $\mathcal{K}$. Recognizing that such a signal might not be unique, we choose the reconstructed signal as the one with minimum $L^2$-norm. Formally, the reconstruction (denoted by $X^*(t)$) of the input signal $X(t)$ is formulated as the solution to the optimization problem
$$X^*(t) = \arg\min_{\tilde{X}} \|\tilde{X}(t)\|_2^2 \quad \text{s.t.} \quad \int \tilde{X}(\tau) K^{j_i}(t_i - \tau)\, d\tau = T^{j_i}(t_i), \; 1 \le i \le N, \qquad (2)$$
where $\{(t_i, K^{j_i}) \mid i \in \{1, \ldots, N\}\}$ is the set of all spikes generated by the encoder. The choice of $L^2$ minimization as the objective of the reconstruction problem—which is the linchpin of our framework, as demonstrated in the theorems—can only be weakly justified at the current juncture. The perfect reconstruction theorem that follows provides the strong justification. As it stands, the $L^2$ minimization objective is in congruence with the dictum of energy efficiency in biological systems. The assumption is that, of all signals, the one with the minimum energy that is consistent with the spike trains is desirable. Additionally, an $L^2$ minimization in the objective of (2) reduces the convex optimization problem to a solvable linear system of equations, as shown in Lemmas 1 and 3. Later we shall show that $L^2$-minimization has the surprising benefit of recovering the original signal perfectly under certain conditions. 4 SIGNAL CLASS FOR PERFECT RECONSTRUCTION . To establish the effectiveness of the described coding-decoding model, we have to evaluate the accuracy of reconstruction over a class of input signals. We observe that in general the encoding of square integrable signals into spike trains is not a one-to-one map; the same set of spikes can be generated by different signals so as to result in the same convolved values at the spike times. Naturally, with a finite and fixed ensemble of kernels $\mathcal{K}$, one cannot achieve perfect reconstruction for the general class of signals $\mathcal{F}$ as defined in Section 2.
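To make Section 2's convolve-then-threshold encoder concrete, the sketch below discretizes the spiking rule and the after-hyperpolarization threshold dynamics of Eq. (1). The sampling grid, the use of a crossing test (>=) in place of the exact equality, and all variable names are illustrative assumptions.

```python
import numpy as np

def encode(X, kernels, C, M, delta, dt=1e-3):
    """Convolve-then-threshold encoder sketch: kernel j spikes at time t
    when the running convolution of the signal X with K^j crosses the
    time-varying threshold T^j(t) of Eq. (1).  `kernels` is a list of
    sampled kernel arrays; C, M, delta hold C^j, M^j, delta^j per kernel."""
    spikes = []
    t_last = [-np.inf] * len(kernels)
    for i in range(len(X)):
        t = i * dt
        for j, K in enumerate(kernels):
            # causal convolution value int X(tau) K^j(t - tau) dtau at time t
            w = min(len(K), i + 1)
            conv = np.dot(X[i - w + 1:i + 1][::-1], K[:w]) * dt
            # threshold: constant C^j, raised towards M^j after a spike and
            # relaxing back linearly within the refractory period delta^j
            if t - delta[j] > t_last[j]:
                T = C[j]
            else:
                T = M[j] - (t - t_last[j]) * (M[j] - C[j]) / delta[j]
            if conv >= T:
                spikes.append((t, j))
                t_last[j] = t
    return spikes
```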
We now restrict ourselves to a subset $\mathcal{G}$ of the original class $\mathcal{F}$, defined as
$$\mathcal{G} = \Big\{X(t) \,\Big|\, X(t) \in \mathcal{F},\; X(t) = \sum_{p=1}^{N} \alpha_p K^{j_p}(t_p - t),\; j_p \in \{1, \ldots, n\},\; \alpha_p \in \mathbb{R},\; t_p \in \mathbb{R}^+,\; N \in \mathbb{Z}^+\Big\},$$
and address the question of reconstruction accuracy. Essentially $\mathcal{G}$ consists of all linear combinations of arbitrarily shifted kernel functions. $N$ is bounded above by the total number of spikes that the ensemble $\mathcal{K}$ can generate over $[0, T]$. In the parlance of signal processing, $\mathcal{G}$ constitutes finite rate of innovation signals (18). For the class $\mathcal{G}$ the perfect reconstruction theorem is presented below. The theorem is proved with the help of three lemmas. Perfect Reconstruction Theorem: Let $X(t) \in \mathcal{G}$ be an input signal. Then for appropriately chosen time-varying thresholds of the kernels, the reconstruction $X^*(t)$ resulting from the proposed coding-decoding framework is accurate with respect to the $L^2$ metric, i.e., $\|X^*(t) - X(t)\|_2 = 0$. Lemma 1: The solution $X^*(t)$ to the reconstruction problem given by (2) can be written as
$$X^*(t) = \sum_{i=1}^{N} \alpha_i K^{j_i}(t_i - t) \qquad (3)$$
where the coefficients $\alpha_i \in \mathbb{R}$ can be solved from a system of linear equations. Proof: An approach analogous to the Representer Theorem (15), splitting a putative solution to (2) into a component within the span of the kernels and an orthogonal remnant, results in equation (3). In essence, the reconstructed signal $X^*(t)$ becomes a summation of the kernels, shifted to their respective times of spike generation and scaled by appropriate coefficients. Plugging (3) into the constraints of (2) gives
$$\forall\, 1 \le i \le N: \quad \int \sum_{k=1}^{N} \alpha_k K^{j_k}(t_k - \tau)\, K^{j_i}(t_i - \tau)\, d\tau = T^{j_i}(t_i).$$
Setting $b_i = T^{j_i}(t_i)$ and $P_{ik} = \int K^{j_k}(t_k - \tau) K^{j_i}(t_i - \tau)\, d\tau$ results in
$$\forall\, 1 \le i \le N: \quad \sum_{k=1}^{N} P_{ik} \alpha_k = b_i. \qquad (4)$$
Equation (4) defines a system of $N$ equations in $N$ unknowns of the form
$$P\alpha = T \qquad (5)$$
where $\alpha = \langle \alpha_1, \ldots, \alpha_N \rangle^T$, $T = \langle T^{j_1}(t_1), \ldots, T^{j_N}(t_N) \rangle^T$ and $P$ is an $N \times N$ matrix with elements $P_{ik} = \int K^{j_k}(t_k - \tau) K^{j_i}(t_i - \tau)\, d\tau$. Clearly $P$ is the Gramian matrix of the shifted kernels $\{K^{j_i}(t_i - t) \mid i \in 1, 2, \ldots, N\}$ in the Hilbert space with the standard inner product. It is well known that $P$ is invertible if and only if $\{K^{j_i}(t_i - t) \mid i \in 1, 2, \ldots, N\}$ is a linearly independent set. If $P$ is invertible, $\alpha$ has a unique solution. If, on the other hand, $P$ is not invertible, $\alpha$ has multiple solutions. However, as the next lemma shows, every such solution leads to the same reconstruction $X^*(t)$, and hence any value of $\alpha$ that satisfies (5) can be chosen. We note in passing that in our experiments we have used the least squares solution. □ Import: The goal of the optimization problem is to find the best object in the feasible set. However, the application of the Representer Theorem converts the constraints into a determined system of unknowns and equations, turning the focus onto the feasible set and effectively changing the optimization problem into a solvable system that results in a closed form solution for the $\alpha_i$'s. This implies that instead of solving (2), we can solve for the reconstruction from $X^*(t) = \sum_{i=1}^{N} \alpha_i K^{j_i}(t_i - t)$, where $\alpha_i$ is the $i$-th element of $\alpha = P^{-1}T$. Here, $P^{-1}$ represents either the inverse or the Moore-Penrose inverse, as the case may be. Lemma 2: Let equation (5), resulting from the optimization problem (2), have multiple solutions.
Consider any two different solutions for $\alpha$, namely $\alpha_1$ and $\alpha_2$, with corresponding reconstructions $X_1(t) = \sum_{i=1}^{N} \alpha_{1i} K^{j_i}(t_i - t)$ and $X_2(t) = \sum_{i=1}^{N} \alpha_{2i} K^{j_i}(t_i - t)$, respectively. Then $X_1 = X_2$. Proof: The proof of this lemma follows from the existence of a unique function in the Hilbert space spanned by $\{K^{j_i}(t_i - t) \mid i \in 1, 2, \ldots, N\}$ that satisfies the constraints of equation (2). The details of the proof are furnished in appendix A. Import: Lemma 2 essentially establishes the uniqueness of the solution to the optimization problem formulated in (2) as any solution to equation (5). The proof follows from the fact that the reconstruction is in the span of the shifted kernels $\{K^{j_i}(t_i - t) \mid i \in 1, 2, \ldots, N\}$ and the inner products of the reconstruction with each of the $K^{j_i}(t_i - t)$ are given (by the spike constraints of (2)). Such a reconstruction must be unique in the subspace $S$. Lemma 3: Let $X^*(t)$ be the reconstruction of an input signal $X(t)$ and $\{(t_i, K^{j_i})\}_{i=1}^{N}$ be the set of spikes generated. Then, for any arbitrary signal $\tilde{X}(t)$ within the span of $\{K^{j_i}(t_i - t) \mid i \in \{1, 2, \ldots, N\}\}$, i.e., the set of shifted kernels at their respective spike times, given by $\tilde{X}(t) = \sum_{i=1}^{N} a_i K^{j_i}(t_i - t)$, the following inequality holds:
$$\|X(t) - X^*(t)\| \le \|X(t) - \tilde{X}(t)\|.$$
Proof: Write
$$\|X(t) - \tilde{X}(t)\| = \|\underbrace{X(t) - X^*(t)}_{A} + \underbrace{X^*(t) - \tilde{X}(t)}_{B}\|.$$
First, for all $i \in \{1, 2, \ldots, N\}$,
$$\langle A, K^{j_i}(t_i - t) \rangle = \langle X(t), K^{j_i}(t_i - t) \rangle - \langle X^*(t), K^{j_i}(t_i - t) \rangle = T^{j_i}(t_i) - T^{j_i}(t_i) = 0$$
(using the constraints in (2)). Second, since by Lemma 1 $X^*(t) = \sum_{i=1}^{N} \alpha_i K^{j_i}(t_i - t)$,
$$\langle A, B \rangle = \Big\langle A, \sum_{i=1}^{N} (\alpha_i - a_i) K^{j_i}(t_i - t) \Big\rangle = \sum_{i=1}^{N} (\alpha_i - a_i) \langle A, K^{j_i}(t_i - t) \rangle = 0.$$
Hence
$$\|X(t) - \tilde{X}(t)\|^2 = \|A + B\|^2 = \|A\|^2 + \|B\|^2 \ge \|A\|^2 = \|X(t) - X^*(t)\|^2,$$
which implies $\|X(t) - \tilde{X}(t)\| \ge \|X(t) - X^*(t)\|$. □ Import: The implication of the above lemma is quite remarkable. The objective defined in (2) chooses a signal with minimum energy satisfying the constraints, deemed the reconstructed signal. However, as the lemma demonstrates, this signal also has the minimum error with respect to the input signal within the span of the shifted kernels. This signifies that our choice of the objective in the decoding module not only draws from biologically motivated energy optimization principles, but also performs optimally in terms of reconstructing the original input signal within the span of the appropriately shifted spike-generating kernels. Corollary: An important consequence of Lemma 3 is that additional spikes in the system do not worsen the reconstruction. For a given input signal $X(t)$, if $S_1$ and $S_2$ are two sets of spike trains where $S_1 \subset S_2$, the second a superset of the first, then Lemma 3 implies that the reconstruction due to $S_2$ is at least as good as the reconstruction due to $S_1$, because the reconstruction due to $S_1$ is in the span of the shifted kernel functions of $S_2$ as $S_1 \subset S_2$. This immediately leads to the conclusion that for a given input signal, the more kernels we add to the ensemble, the better the reconstruction. Proof of the Theorem: The proof of the theorem follows directly from Lemma 3.
Since the input signal $X(t) \in \mathcal{G}$, let $X(t)$ be given by $X(t) = \sum_{p=1}^{N} \alpha_p K^{j_p}(t_p - t)$ ($\alpha_p \in \mathbb{R}$, $t_p \in \mathbb{R}^+$, $N \in \mathbb{Z}^+$). Assume that the time varying thresholds of the kernels in our kernel ensemble $\mathcal{K}$ are set in such a manner that the following conditions are satisfied: $\langle X(t), K^{j_p}(t_p - t) \rangle = T^{j_p}(t_p)$, $\forall p \in \{1, \ldots, N\}$, i.e., each of the kernels $K^{j_p}$ at the very least produces a spike at time $t_p$ against $X(t)$ (regardless of other spikes at other times). Clearly then $X(t)$ lies in the span of the appropriately shifted response functions of the spike-generating kernels. Applying Lemma 3 it follows that
$$\|X(t) - X^*(t)\|_2 \le \|X(t) - X(t)\|_2 = 0. \; \square$$
Import: In addition to demonstrating the potency of the coding-decoding scheme, this theorem frames Barlow's efficient coding hypothesis (1)—that the coding strategy of sensory neurons be adapted to the statistics of the stimuli—in mathematically concrete terms. Going by the theorem, the spike based encoding necessitates the signals to be in the span of the encoding kernels for perfect reconstruction. Inverting the argument, kernels must learn to adapt to the basis elements that generate the signal corpora for superior reconstruction.
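The closed-form decoder of Lemma 1 reduces to assembling the Gram matrix P of the spike-shifted kernels and solving P α = T; a sketch under discretized kernels follows, using the least squares solution mentioned in the proof. The function names and the sampling grid are our assumptions.

```python
import numpy as np

def reconstruct(spikes, kernels, thresholds, t_grid, dt=1e-3):
    """Lemma 1 in discrete form: solve P alpha = T for the Gram matrix P
    of the spike-shifted kernels, then form X*(t) = sum_i alpha_i
    K^{j_i}(t_i - t).  `spikes` is the (t_i, j_i) list from the encoder;
    `thresholds[i]` is the threshold value T^{j_i}(t_i) at spike i."""
    def shifted(i):
        t_i, j_i = spikes[i]
        K = kernels[j_i]
        # sample K^{j_i}(t_i - t) on the grid (zero outside the kernel support)
        arg = t_i - t_grid
        out = np.zeros_like(t_grid)
        mask = (arg >= 0) & (arg < len(K) * dt)
        out[mask] = K[(arg[mask] / dt).astype(int)]
        return out

    N = len(spikes)
    Phi = np.stack([shifted(i) for i in range(N)])    # N x len(t_grid)
    P = Phi @ Phi.T * dt                              # Gram matrix P_ik
    alpha, *_ = np.linalg.lstsq(P, np.asarray(thresholds), rcond=None)
    return alpha @ Phi                                # X*(t) sampled on t_grid
```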
The authors describe a method for representing a continuous signal by a pulse code, in a manner inspired by auditory processing in the brain. The resulting framework is somewhat like matching pursuit except that filters are run a single time in a causal manner to find the spike times (which would be faster than MP), and then a N*N least squares problem is solved (which makes it slower). The authors claim that their method will perfectly reconstruct signals of finite innovation rate, however there appear to be mathematical errors in the proof.
SP:540d8c615b5193239aa43717de8cacc749ccc4c6
Improved Contrastive Divergence Training of Energy Based Models
1 INTRODUCTION . Energy-based models (EBMs) have received an influx of interest recently and have been applied to realistic image generation (Han et al., 2019; Du & Mordatch, 2019), 3D shape synthesis (Xie et al., 2018b), out-of-distribution and adversarial robustness (Lee et al., 2018; Du & Mordatch, 2019; Grathwohl et al., 2019), compositional generation (Hinton, 1999; Du et al., 2020a), memory modeling (Bartunov et al., 2019), text generation (Deng et al., 2020), video generation (Xie et al., 2017), reinforcement learning (Haarnoja et al., 2017; Du et al., 2019), protein design and folding (Ingraham et al.; Du et al., 2020b) and biologically-plausible training (Scellier & Bengio, 2017). Contrastive divergence is a popular and elegant procedure for training EBMs, proposed by (Hinton, 2002), which lowers the energy of the training data and raises the energy of the sampled confabulations generated by the model. The model confabulations are generated via an MCMC process (commonly Gibbs sampling or Langevin dynamics), leveraging the extensive body of research on sampling and stochastic optimization. The appeal of contrastive divergence is its simplicity and extensibility. It does not require training additional auxiliary networks (Kim & Bengio, 2016; Dai et al., 2019) (which introduce additional tuning and balancing demands), and can be used to compose models zero-shot. Despite these advantages, training EBMs with contrastive divergence has been challenging due to training instabilities. Ensuring training stability has required combinations of spectral normalization and Langevin dynamics gradient clipping (Du & Mordatch, 2019), parameter tuning (Grathwohl et al., 2019), early stopping of MCMC chains (Nijkamp et al., 2019b), or avoiding the use of modern deep learning components such as self-attention or layer normalization (Du & Mordatch, 2019). These requirements limit modeling power, prevent compatibility with modern deep learning architectures, and prevent the long-running training procedures required for scaling to larger datasets. With this work, we aim to maintain the simplicity and advantages of contrastive divergence training while resolving its stability issues and incorporating complementary deep learning advances. An often overlooked detail of the contrastive divergence formulation is that changes to the energy function change the MCMC samples, which introduces an additional gradient term in the objective function (see Section 2.1 for details). This term was claimed to be empirically negligible in the original formulation and is typically ignored (Hinton, 2002; Liu & Wang, 2017) or estimated via high-variance likelihood ratio approaches (Ruiz & Titsias, 2019). We show that this term can be efficiently estimated for continuous data via a combination of auto-differentiation and nearest-neighbor entropy estimators. We also empirically show that this term contributes significantly to the overall training gradient and has the effect of stabilizing training. It enables the inclusion of self-attention blocks in network architectures, removes the need for capacity-limiting spectral normalization, and allows us to train the networks for longer periods. We do not introduce any new objectives or complexity - our procedure is simply a more complete form of the original formulation.
We further present techniques to improve the mixing and mode exploration of MCMC transitions in contrastive divergence. We propose data augmentation as a useful tool to encourage mixing in MCMC by directly perturbing input images to related images. By incorporating data augmentation as semantically meaningful perturbations, we are able to greatly improve the mixing and diversity of MCMC chains. We further propose to maintain a reservoir sample of past samples, improving the diversity of MCMC chain initialization in contrastive divergence. We also leverage the compositionality of EBMs to evaluate an image sample at multiple image resolutions when computing energies. Such evaluation at coarse and fine scales leads to samples with greater spatial coherence, but leaves the MCMC generation process unchanged. We note that such a hierarchy does not require specialized mechanisms such as progressive refinement (Karras et al., 2017). Our contributions are as follows: firstly, we show that a gradient term neglected in the popular contrastive divergence formulation is both tractable to estimate and important in avoiding training instabilities that previously limited the applicability and scalability of energy-based models. Secondly, we highlight how data augmentation and multi-scale processing can be used to improve model robustness and generation quality. Thirdly, we empirically evaluate the stability of model architectures and show improved performance on a host of benchmarks and use cases, such as image generation, OOD detection, and compositional generation. 2 AN IMPROVED CONTRASTIVE DIVERGENCE FRAMEWORK FOR ENERGY BASED MODELS . Energy based models (EBMs) represent the likelihood of a probability distribution for $x \in \mathbb{R}^D$ as $p_\theta(x) = \frac{\exp(-E_\theta(x))}{Z(\theta)}$, where the function $E_\theta(x) : \mathbb{R}^D \to \mathbb{R}$ is known as the energy function, and $Z(\theta) = \int_x \exp(-E_\theta(x))\, dx$ is known as the partition function. Thus an EBM can be represented by a neural network that takes $x$ as input and outputs a scalar. Training an EBM through maximum likelihood (ML) is not straightforward, as $Z(\theta)$ cannot be reliably computed, since this involves integration over the entire input domain of $x$. However, the gradient of the log-likelihood with respect to a data sample $x$ can be represented as
$$\frac{\partial \log p_\theta(x)}{\partial \theta} = -\Big(\frac{\partial E_\theta(x)}{\partial \theta} - \mathbb{E}_{p_\theta(x')}\Big[\frac{\partial E_\theta(x')}{\partial \theta}\Big]\Big). \qquad (1)$$
Note that Equation 1 is still not tractable, as it requires using Markov Chain Monte Carlo (MCMC) to draw samples from the model distribution $p_\theta(x)$, which often takes exponentially long to mix. As a practical approximation to the above objective, (Hinton, 2002) proposes the contrastive divergence objective
$$\mathrm{KL}(p(x) \,\|\, p_\theta(x)) - \mathrm{KL}(\Pi_\theta^t(p(x)) \,\|\, p_\theta(x)), \qquad (2)$$
where $\Pi_\theta$ represents an MCMC transition kernel for $p_\theta$, and $\Pi_\theta^t(p(x))$ represents $t$ sequential MCMC transitions starting from $p(x)$. The above objective can be seen as an improvement operator, where $\mathrm{KL}(p(x) \,\|\, p_\theta(x)) \ge \mathrm{KL}(\Pi_\theta^t(p(x)) \,\|\, p_\theta(x))$, because $\Pi_\theta$ is converging to the equilibrium distribution $p_\theta(x)$ (Lyu, 2011). Furthermore, the above objective is only zero (at its fixed point) when $\Pi_\theta$ does not change the distribution of $p(x)$, which corresponds to $p_\theta(x) = p(x)$. 2.1 A MISSING TERM IN CONTRASTIVE DIVERGENCE .
When taking the negative gradient of the contrastive divergence objective (Equation 2), we obtain the expression
$$-\Big(\mathbb{E}_{p(x)}\Big[\frac{\partial E_\theta(x)}{\partial \theta}\Big] - \mathbb{E}_{q_\theta(x')}\Big[\frac{\partial E_\theta(x')}{\partial \theta}\Big] + \frac{\partial q(x')}{\partial \theta} \frac{\partial \mathrm{KL}(q_\theta(x) \,\|\, p_\theta(x))}{\partial q_\theta(x)}\Big), \qquad (3)$$
where for brevity we write $\Pi_\theta^t(p(x)) = q_\theta(x)$. The first two terms are identical to those of Equation 1, and the third gradient term (which we refer to as the KL divergence term) corresponds to minimizing the divergence between $q_\theta(x)$ and $p_\theta(x)$. In practice, past contrastive divergence approaches have ignored the third gradient term, which was difficult to estimate and claimed to be empirically negligible (Hinton, 1999). These gradients correspond to a joint loss expression $L_{\text{Full}}$, consisting of the traditional contrastive loss $L_{\text{CD}}$ and a new loss expression $L_{\text{KL}}$. Specifically, we have $L_{\text{Full}} = L_{\text{CD}} + L_{\text{KL}}$, where
$$L_{\text{CD}} = \mathbb{E}_{p(x)}[E_\theta(x)] - \mathbb{E}_{\text{stop gradient}(q_\theta(x'))}[E_\theta(x')], \qquad (4)$$
and the ignored KL divergence term corresponds to the loss
$$L_{\text{KL}} = \mathbb{E}_{q_\theta(x)}[E_{\text{stop gradient}(\theta)}(x)] + \mathbb{E}_{q_\theta(x)}[\log(q_\theta(x))]. \qquad (5)$$
Despite being difficult to estimate, we show that $L_{\text{KL}}$ is a useful tool for both speeding up and stabilizing the training of EBMs. Figure 2 illustrates the overall effects of both losses. Equation 4 encourages the energy function to assign low energy to real samples and high energy to generated samples. However, only optimizing Equation 4 often leads to an adversarial mode where the energy function learns to simply generate an energy landscape that makes sampling difficult. The KL divergence term counteracts this effect and encourages sampling to closely approximate the underlying distribution $p_\theta(x)$, by encouraging samples to be both low energy under the energy function and diverse. Next, we discuss our approach towards estimating this KL divergence and show that it significantly improves stability when training EBMs. 2.2 ESTIMATING THE MISSING GRADIENT TERM . Estimating $L_{\text{KL}}$ can be decomposed into two separate objectives: minimizing the energy of samples from $q_\theta(x)$, which we refer to as $L_{\text{opt}}$ (Equation 6), and maximizing the entropy of samples from $q_\theta(x)$, which we refer to as $L_{\text{ent}}$ (Equation 7). Minimizing Sampler Energy. To minimize the energy of samples from $q_\theta(x)$ we can directly differentiate through both the energy function and MCMC sampling. We follow recent work in EBMs and utilize Langevin dynamics (Du & Mordatch, 2019; Nijkamp et al., 2019b; Grathwohl et al., 2019) for our MCMC transition kernel, and note that each step of Langevin sampling is fully differentiable with respect to the underlying energy function parameters. Precisely, the gradient of $L_{\text{opt}}$ becomes
$$\frac{\partial L_{\text{opt}}}{\partial \theta} = \mathbb{E}_{q_\theta(x'_0, x'_1, \ldots, x'_t)}\Big[\frac{\partial E_{\text{stop gradient}(\theta)}\big(x'_{t-1} - \nabla_{x'_{t-1}} E_\theta(x'_{t-1}) + \omega\big)}{\partial \theta}\Big], \quad \omega \sim \mathcal{N}(0, \lambda) \qquad (6)$$
where $x'_i$ represents the $i$-th step of Langevin sampling. To reduce the memory overhead of this differentiation procedure, we only differentiate through the last step of Langevin sampling (though we show in the appendix that this leads to the same effect as differentiating through all of Langevin sampling). Entropy Estimation. To maximize the entropy of samples from $q_\theta(x)$, we use a non-parametric nearest neighbor entropy estimator (Beirlant et al., 1997), which is shown to be mean square consistent (Kozachenko & Leonenko, 1987) with a root-n convergence rate (Tsybakov & Van der Meulen, 1996).
The entropy $H$ of a distribution $p(x)$ can be estimated through a set $X = \{x_1, x_2, \ldots, x_n\}$ of $n$ different points sampled from $p(x)$ as
$$H(p_\theta(x)) = \frac{1}{n} \sum_{i=1}^{n} \ln\big(n \cdot \mathrm{NN}(x_i, X)\big) + O(1),$$
where the function $\mathrm{NN}(x_i, X)$ denotes the nearest neighbor distance of $x_i$ to any other data point in $X$. Based on the above entropy estimator, we write $L_{\text{ent}}$ as
$$L_{\text{ent}} = \mathbb{E}_{q(x)}[\log(\mathrm{NN}(x, B))] \qquad (7)$$
where we measure the nearest neighbor with respect to a set $B$ of 1000 past samples from MCMC chains (see Section 2.5 for more details). We utilize the $L^2$ distance as the metric for computing nearest neighbors. Alternatively, Stein's identity may also be used to estimate entropy, but this requires considering all samples, as opposed to only the nearest one, becoming computationally intractable. Our entropy estimator serves as a simple, quick-to-compute estimator of entropy that prevents sampling from collapsing. Empirically, we find that the combination of the above terms in $L_{\text{KL}}$ significantly improves both the stability and generation quality of EBMs, improving robustness across different model architectures.
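Putting Eqs. (4)-(7) together, a hedged PyTorch-style sketch of the full loss follows: gradients are kept only through the last Langevin step for L_opt, the energy parameters are stop-gradiented in the outer evaluation per Eq. (6), and L_ent uses nearest-neighbor distances to a buffer of past samples. The step count, step sizes, and the parameter-freezing trick are illustrative assumptions, not the authors' exact implementation.

```python
import torch

def improved_cd_losses(E, x_pos, x_neg, buffer, steps=20, lam=0.01):
    """Full objective L_CD + L_opt + L_ent of Eqs. (4)-(7).
    E is the energy network returning per-sample energies; `buffer`
    holds detached past MCMC samples."""
    # Langevin transitions (Eq. 6); the graph is kept only on the last step
    for k in range(steps):
        x_neg = x_neg.detach().requires_grad_(True)
        g = torch.autograd.grad(E(x_neg).sum(), x_neg,
                                create_graph=(k == steps - 1))[0]
        x_neg = x_neg - g + lam ** 0.5 * torch.randn_like(x_neg)

    l_cd = E(x_pos).mean() - E(x_neg.detach()).mean()            # Eq. (4)

    # Eq. (6): stop-gradient theta in the outer evaluation, so gradients
    # reach the parameters only through the final Langevin step.
    for p in E.parameters():
        p.requires_grad_(False)
    l_opt = E(x_neg).mean()
    for p in E.parameters():
        p.requires_grad_(True)

    # Eq. (7): maximize entropy via nearest-neighbor distances to the buffer
    d = torch.cdist(x_neg.flatten(1), buffer.flatten(1))
    l_ent = -torch.log(d.min(dim=1).values + 1e-8).mean()
    return l_cd + l_opt + l_ent
```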
Review: This paper studies how to improve contrastive divergence (CD) training of energy-based models (EBMs) by revisiting the gradient term neglected in the traditional CD learning. This paper also introduces some useful techniques, such as data augmentation, multi-scale energy design, and reservoir sampling to improve the training of energy-based model. Empirical studies are performed to validate the proposed learning strategy on the task of image generation, OOD detection, and compositional generation.
SP:725d036c0863e59f6bb0b0bb22cc0ad3a0988126
Efficient Architecture Search for Continual Learning
1 INTRODUCTION . Continual learning, or lifelong learning, refers to the ability to continually learn new tasks while also performing well on learned tasks. It has attracted enormous attention in AI as it mimics the human learning process - constantly acquiring and accumulating knowledge throughout a lifetime (Parisi et al., 2019). Continual learning often works with deep neural networks (Javed & White, 2019; Nguyen et al., 2017; Xu & Zhu, 2018) as the flexibility of a network design can effectively allow knowledge transfer and knowledge acquisition. However, continual learning with neural networks usually faces three challenges. The first one is to overcome the so-called catastrophic forgetting problem (Kirkpatrick et al., 2017), which states that the network may forget what has been learned on previous tasks. The second one is to effectively adapt the current network parameters or architecture to fit a new task, and the last one is to control the network size so as not to generate an overly complex network. In continual learning, there are two main categories of strategies that attempt to solve the aforementioned challenges. The first category is to train all tasks within a network with fixed capacity. For example, (Rebuffi et al., 2017; Lopez-Paz & Ranzato, 2017; Aljundi et al., 2018) replay some old samples with the new task samples and then learn a new network from the combined training set. The drawback is that they typically require a memory system that stores past data. (Kirkpatrick et al., 2017; Liu et al., 2018) employ regularization terms to prevent the re-optimized parameters from deviating too much from the previous ones. Approaches using a fixed network architecture, however, cannot avoid a fundamental dilemma - they must either choose to retain good model performance on learned tasks, leaving little room for learning new tasks, or compromise the learned model performance to allow learning new tasks better. To overcome such a dilemma, the second category is to expand the neural network dynamically (Rusu et al., 2016; Yoon et al., 2018; Xu & Zhu, 2018). These methods typically fix the parameters of the old neurons (partially or fully) in order to eliminate the forgetting problem, and also permit adding new neurons to adapt to the learning of a new task. In general, expandable networks can achieve better model performance on all tasks than non-expandable ones. However, a new issue appears: expandable networks can gradually become overly large or complex, which may break the limits of the available computing resources and/or lead to over-fitting. In this paper, we aim to solve these continual learning problems by proposing a new approach that requires only minimal expansion of a network so as to achieve high model performance on both learned tasks and the new task. At the heart of our approach, we leverage Neural Architecture Search (NAS) to find a very concise architecture to fit each new task. Most notably, we design NAS to provide neuron-level control. That is, NAS selects two types of individual neurons to compose a new architecture: (1) a subset of the previous neurons that are most useful for modeling the new task; and (2) a minimal number of new neurons that should be added. Reusing part of the previous neurons allows efficient knowledge transfer, and adding new neurons provides additional room for learning new knowledge; a sketch of how such a neuron-level design can be represented is given below.
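To make the neuron-level action space concrete, the snippet below sketches one hypothetical encoding of a candidate architecture: a binary reuse mask over each layer's old neurons plus a count of new neurons. All names and sizes are illustrative assumptions, not the paper's exact data structures.

```python
import numpy as np

rng = np.random.default_rng(0)
old_sizes = [64, 64, 32]                            # neurons per layer after task t-1
candidate = [
    {"reuse_mask": rng.integers(0, 2, size=n),      # 1 = reuse this old neuron
     "num_new": int(rng.integers(0, 8))}            # new neurons added to this layer
    for n in old_sizes
]
# width of each layer in the candidate architecture for task t
new_sizes = [int(c["reuse_mask"].sum()) + c["num_new"] for c in candidate]
```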
Our approach is named Continual Learning with Efficient Architecture Search, or CLEAS in short. Below are the main features and contributions of CLEAS. • CLEAS dynamically expands the network to adapt to the learning of new tasks and uses NAS to determine the new network architecture; • CLEAS achieves zero forgetting of the learned knowledge by keeping the parameters of the previous architecture unchanged; • The NAS used in CLEAS is able to provide neuron-level control which expands the network minimally. This leads to an effective control of network complexity; • The RNN-based controller behind CLEAS uses an entire network configuration (with all neurons) as a state. This state definition deviates from the current practice in related problems, which would define a state as an observation of a single neuron. Our state definition leads to improvements of 0.31%, 0.29% and 0.75% on three benchmark datasets. • If the network is a convolutional network (CNN), CLEAS can even decide the best filter size that should be used in modeling the new task. The optimized filter size can further improve the model performance. We start the rest of the paper by first reviewing the related work in Section 2. Then we detail our CLEAS design in Section 3. Experimental evaluations and the results are presented in Section 4. 2 RELATED WORK . Continual Learning. Continual learning is often considered an online learning paradigm where new skills or knowledge are constantly acquired and accumulated. Recently, remarkable advances have been made in many applications based on continual learning: sequential task processing (Thrun, 1995), streaming data processing (Aljundi et al., 2019), self-management of resources (Parisi et al., 2019; Diethe et al., 2019), etc. A primary obstacle in continual learning, however, is the catastrophic forgetting problem, and many previous works have attempted to alleviate it. We divide them into two categories depending on whether their networks are expandable. The first category uses a large network with fixed capacity. These methods try to retain the learned knowledge by either replaying old samples (Rebuffi et al., 2017; Rolnick et al., 2019; Robins, 1995) or enforcing the learning with regularization terms (Kirkpatrick et al., 2017; Lopez-Paz & Ranzato, 2017; Liu et al., 2018; Zhang et al., 2020). Sample replaying typically requires a memory system which stores old data. When learning a new task, part of the old samples are selected and added to the training data. As for regularized learning, a representative approach is Elastic Weight Consolidation (EWC) (Kirkpatrick et al., 2017), which uses the Fisher information matrix to regularize the optimization parameters so that the important weights for previous tasks are not altered too much. Other methods like (Lopez-Paz & Ranzato, 2017; Liu et al., 2018; Zhang et al., 2020) also address the optimization direction of weights to prevent the network from forgetting previously learned knowledge. The major limitation of using fixed networks is that they cannot properly balance the learned tasks and new tasks, resulting in either forgetting old knowledge or acquiring limited new knowledge. To address the above issue, another stream of works proposes to dynamically expand the network, providing more room for obtaining new knowledge. For example, Progressive Neural Network (PGN) (Rusu et al., 2016)
(Rusu et al., 2016) allocates a fixed number of neurons and layers to the current model for each new task. As a result, PGN may end up generating an overly complex network with high redundancy, which can easily exceed the resources of a constrained computing system. Another approach, DEN (Dynamically Expandable Network) (Yoon et al., 2018), partially mitigates this issue of PGN by using group sparsity regularization. It strategically selects some old neurons to retrain and adds new neurons only when necessary. However, DEN can suffer from the forgetting problem due to the retraining of old neurons; another drawback is that DEN has very sensitive hyperparameters that need sophisticated tuning. Both of these algorithms only grow the network and do not offer neuron-level control, which is a significant departure from our work. Most recently, RCL (Reinforced Continual Learning) (Xu & Zhu, 2018) also employs NAS to expand the network and can further decrease model complexity. The main difference between RCL and CLEAS is that RCL blindly reuses all the neurons from all previous tasks and only uses NAS to decide how many new neurons should be added. However, reusing all the old neurons has two problems. First, it creates a lot of redundancy in the new network, and some old neurons may even be misleading or adversarial; second, excessively many reused old neurons can dominate the new architecture, which may significantly limit the learning ability of the new network. Therefore, RCL does not truly optimize the network architecture and is thus unable to generate an efficient and effective network for learning a new task. By comparison, CLEAS uses a fine-grained NAS that provides neuron-level control: it optimizes every new architecture by determining whether to reuse each old neuron and how many new neurons should be added to each layer.

Neural Architecture Search. NAS is another promising research topic in the AI community. It employs reinforcement learning techniques to automatically search for a desired network architecture for modeling a specific task. For instance, Cai et al. (2018) propose EAS to discover a superb architecture with a reinforced meta-controller that can grow the depth or width of a network; Zoph & Le (2016) propose an RNN-based controller to generate the description of a network, where the controller is reinforced by the prediction accuracy of the candidate architecture. Pham et al. (2018) propose an extension of NAS, namely ENAS, that speeds up training by forcing all child networks to share weights. Apart from algorithms, NAS also has many valuable applications such as image classification (Real et al., 2019; Radosavovic et al., 2019), video segmentation (Nekrasov et al., 2020), and text representation (Wang et al., 2019). Hence, NAS is a demonstrably powerful tool, and it is especially useful in continual learning scenarios where one needs to determine a good architecture for each new task.

3 METHODOLOGY

There are two components in the CLEAS framework: the task network, which continually learns a sequence of tasks, and the controller network, which dynamically expands the task network.
The two components interact with each other in a reinforcement learning loop: the task network sends the controller a reward signal reflecting the performance of the current architecture design; the controller updates its policy according to the reward and then generates a new architecture for the task network to evaluate. Such interactions repeat until a good architecture is found. Figure 1 illustrates the overall structure of CLEAS. On the left is the task network, depicting an optimized architecture for task t−1 (using gray and pink neurons) and a candidate architecture for task t. They share the same input neurons but use their own output neurons. Red circles are newly added neurons and pink ones are neurons reused from task t−1 (or any previous task). To train the network, only the red weights that connect new-old or new-new neurons are optimized. On the right is the controller network, which implements an RNN. It provides neuron-level control to generate a description of the task network design. Each blue square is an RNN cell that decides whether to use or drop a certain neuron in the task network.
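The interaction loop just described can be summarized in a few lines. The following is a minimal, hypothetical sketch: the `Controller` and `TaskNetwork` objects and their method names are illustrative placeholders, not the authors' actual implementation.

```python
def search_architecture(controller, task_net, new_task_data, n_rounds=50):
    """Sketch of the CLEAS controller/task-network interaction loop."""
    best_arch, best_reward = None, float("-inf")
    for _ in range(n_rounds):
        # The RNN controller emits one decision per neuron: reuse an old
        # neuron or drop it, plus which new neurons to add per layer.
        arch = controller.sample_architecture()
        # Train only the weights touching new neurons; previously learned
        # weights stay frozen, which is what guarantees zero forgetting.
        reward = task_net.train_and_evaluate(arch, new_task_data,
                                             freeze_old_weights=True)
        # Policy-gradient style update of the controller from the reward.
        controller.update_policy(arch, reward)
        if reward > best_reward:
            best_arch, best_reward = arch, reward
    return best_arch
```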
This paper falls into the class of continual learning methods that accommodate new tasks by expanding the network architecture while freezing existing weights. This freezing trivially resolves forgetting. The (hard) problem of determining how to expand the network is tackled with reinforcement learning, largely building upon a previous approach (Reinforced Continual Learning, RCL). Apart from some RL-related implementation choices that differ here, the main difference from RCL is that the present method learns a mask determining which neurons to reuse, whereas RCL only uses RL to determine how many neurons to add. Experiments demonstrate that this allows reducing network size while significantly improving accuracy on Split CIFAR-100. The runtime is, however, increased.
SP:6d6e083899bc17a2733aa16efd259ad4ed2076d6
Play to Grade: Grading Interactive Coding Games as Classifying Markov Decision Process
1 INTRODUCTION

The rise of online coding education platforms accelerates the trend of democratizing high quality computer science education for millions of students each year. Corbett (2001) suggests that providing feedback to students can have an enormous impact on efficiently and effectively helping students learn. Unfortunately, contemporary coding education has a clear limitation: students can get automatic feedback only up until they start writing interactive programs. When a student authors a program that requires user interaction, e.g., where a user interacts with the student's program using a mouse or by clicking on a button, it becomes exceedingly difficult to grade automatically. Even for well-defined challenges, if the user has any creative discretion, or the problem involves any randomness, the task of automatically assessing the work is daunting. Yet creating more open-ended assignments for students can be particularly motivating and engaging, and can also help students practice key skills needed in commercial projects.

Generating feedback on interactive programs from humans is more laborious than it might seem. Though the most common student solution to an assignment may be submitted many thousands of times, even in introductory computer science education the probability distribution of homework submissions follows the very heavy-tailed Zipf distribution, the statistical distribution of natural language. This makes grading exceptionally hard for contemporary AI (Wu et al., 2019) as well as for massive crowd-sourced human efforts (Code.org, 2014). While code as text has proved difficult to grade, actually running student code is a promising path forward (Yan et al., 2019).

We formulate the grading-via-playing task as classifying whether an ungraded student program, a new Markov Decision Process (MDP), belongs to a latent class of correct Markov Decision Processes (representing correct programming solutions to the assignment). Given a discrete set of environments $\mathcal{E} = \{e_n = (S_n, A, R_n, P_n) : n = 1, 2, 3, \dots\}$, we can partition them into $\mathcal{E}^\star$ and $\mathcal{E}'$. $\mathcal{E}^\star$ is the set of latent MDPs; it includes a handful of reference programs that a teacher has implemented or graded. $\mathcal{E}'$ is the set of environments specified by student-submitted programs. We build a classifier that determines whether $e$, a new input decision process, is behaviorally identical to the latent decision process.

Prior work on providing feedback for code has focused on text-based syntactic analysis and automatically constructing solution spaces (Rivers & Koedinger, 2013; Ihantola et al., 2015). Such feedback is oriented around providing hints and is unable to determine an interactive program's correctness. Other intelligent tutoring systems focus on math or other skills that do not require creating interactive programs (Ruan et al., 2019; 2020). Note that in principle one could analyze the raw code and seek to understand whether it produces a dynamics and reward model isomorphic to those generated by a correct program. However, there are many different ways to express the same correct program, and classifying such text might require a large amount of data. As a first approach, we avoid this by instead deploying a policy and observing the resulting program behavior, thereby generating execution traces of the student's implicitly specified MDP that can be used for classification.
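To make the deploy-and-observe idea concrete, here is a hedged sketch of grading by playing: roll out a trained agent in the environment defined by a student program and classify correctness from simple trajectory features. The `env`, `agent`, and `classifier` objects and the specific features are illustrative assumptions, not the exact pipeline of the paper.

```python
import numpy as np

def grade_program(env, agent, classifier, n_episodes=5, horizon=500):
    """Label a student-defined MDP as correct (1) or broken (0)."""
    feats = []
    for _ in range(n_episodes):
        obs, rewards = env.reset(), []
        for _ in range(horizon):
            action = agent.act(obs)             # pixel observations in
            obs, r, done, _ = env.step(action)  # the paper's setting
            rewards.append(r)
            if done:
                break
        # phi(tau, pi): e.g. total reward plus per-step reward statistics.
        feats.append([sum(rewards), np.mean(rewards), np.min(rewards)])
    return classifier.predict([np.mean(feats, axis=0)])[0]
```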
Main contributions in this paper:
• We introduce the reinforcement learning challenge of Play to Grade.
• We propose a baseline algorithm in which an agent learns to play a game and uses features such as total reward and anticipated reward to determine correctness.
• Our classifier obtains 93.1% accuracy on the 8359 most frequent programs, which cover 50% of the overall submissions, and achieves 89.0% accuracy on programs that are submitted fewer than 5 times. We gain 14-19% absolute improvement over grading programs via code text.
• We will release a dataset of over 700k student submissions to support further research.

2 THE PLAY TO GRADE CHALLENGE

We formulate the challenge with constraints that are often found in the real world. Given an interactive coding assignment, the teacher often has a few reference implementations of the assignment; teachers use them to show students what a correct solution should look like. We also assume that the teacher can prepare a few incorrect implementations that represent their "best guesses" of what a wrong program might look like. To formalize this setting, we consider a set of programs, each of which fully specifies an environment and its dynamics: $\mathcal{E} = \{e_n = (S_n, A, R_n, P_n) : n = 1, 2, 3, \dots\}$. A subset of these environments are reference environments that are accessible during training, which we refer to as $\mathcal{E}^\star$, and we also have a set of environments specified by student-submitted programs, $\mathcal{E}'$. We can further specify a training set $D = \{(\tau^i, y^i); y \in \{0, 1\}\}$ where $\tau^i \sim \pi(e^{(i)})$ and $e^{(i)} \sim \mathcal{E}^\star$, and a test set $D_{\text{test}}$ where $e^{(i)} \sim \mathcal{E}'$. The overall objective of this challenge is:

$\min L(\theta) = \min_\theta \min_\pi \mathbb{E}_{e \sim \mathcal{E}} \big[ \mathbb{E}_{\tau' \sim \pi(e)} [ L(p_\theta(\phi(\tau', \pi)), y) ] \big]$ (1)

We want a policy that can generate trajectories $\tau$ that help a classifier easily distinguish between an input environment that is correctly implemented and one that is not. We also allow a feature mapping function $\phi$ that takes the trajectory and estimations from the agent as input and outputs features for the classifier.

We can imagine a naive classifier that labels as correct any environment that is playable (defined by being able to obtain rewards) by our agent. A trivial failure case for this classifier is that if the agent is badly trained and fails to play successfully in a new environment (returning zero reward), we would not know whether the zero reward indicates the wrongness of the program or the failure of our agent.

Generalization challenge. In order to avoid the trivial failure case described above, where the observed game states result from our agent's failure to play the game rather than from the correctness or wrongness of the program, it is crucial that the agent operates successfully across different correct environments. For any correct environment in $\mathcal{E}_+ = \{\mathcal{E}^\star_+, \mathcal{E}'_+\}$, the goal is for our agent to obtain a high expected reward:

$\pi^\star = \arg\max_\pi \mathbb{E}_{e \sim \mathcal{E}_+} \big[ \mathbb{E}_{\tau \sim \pi(e)} [ R(\tau) ] \big]$ (2)

Additionally, we choose the state space to be the pixel-based screenshot of the game. This assumption imposes the least amount of modification on the thousands of games that teaching platforms have created for students over the years. This decision poses a great challenge for our agent: when students create a game, they might choose to express their creativity in a myriad of ways, including but not limited to using exciting background pictures, or changing the shape, color, or moving speed of the game objects.
Some of these creative expressions only affect game aesthetics, but others affect how the game is played (i.e., changing the speed of an object). The agent needs to succeed in these creative settings so that the classifier will not treat creativity as incorrectness.

2.1 BOUNCE GAME SIMULATOR

We pick the coding game Bounce as the main game for this challenge. Bounce is a block-based educational game created to help students understand conditionals¹. We show actual game scenes in Figure 1, and the coding area in Figure 2a.

¹ https://studio.code.org/s/course3/stage/15/puzzle/10

This choice gives us three advantages. First, the popularity of this game on Code.org gives us an abundance of real student submissions over the years, allowing us to compare algorithms on real data. Second, a block-based program can easily be represented in a structured format, eliminating the need to write a domain-specific parser for student programs. Last, in order to measure real progress on this challenge, we need gold labels for each submission; a block-based programming environment allows us to specify a list of legal and illegal commands under each condition, which provides perfect gold labels.

The Bounce exercise does not have a bounded solution space, similar to other exercises developed at Code.org. This means that a student can produce arbitrarily long programs, such as repeating the same command multiple times (Figure 3(b)) or changing themes whenever a condition is triggered (Figure 3(a)). These complications can result in very different game dynamics. We created a simulator that faithfully executes the commands under each condition and returns a positive reward when the "Score point" block is activated and a negative reward when the "Score opponent point" block is activated. In deployment, such a simulator need not be created, because coding platforms have already built simulators to run and render student programs.

2.2 CODE.ORG BOUNCE DATASET

Code.org is an online computer science education platform that teaches beginner programming. It designed a drag-and-drop interface to teach K-12 students basic programming concepts. Our dataset is compiled from 453,211 students. Each time a student runs their code, the submission is saved. In total, there are 711,274 submissions, of which 323,516 are unique programs.

Evaluation metric. In an unbounded solution space, the distribution of student submissions has a heavy tail, as observed by Wu et al. (2019). We show that the distribution of submissions in our dataset conforms to a Zipf distribution. This suggests that we can partition the dataset into two sections, as seen in Figure 2b. Head + Body: the 8359 most frequently submitted programs, which cover 50.5% of the total submissions (359,266). This set contains 4,084 correct programs (48.9%) and 4,275 incorrect programs (51.1%). Tail: programs submitted fewer than 5 times. There are 315,157 unique programs in this set, and 290,953 of them (92.3%) were submitted only once. We sample 250 correct and 250 incorrect programs uniformly from this set for evaluation.

Reference programs. Before looking at the student-submitted programs, we attempted to solve the assignment ourselves. Through this attempt, we formed an understanding of where a student might make a mistake and what different variations of correct programs could look like. Our process can easily be replicated by teachers.
We come up with 8 correct reference programs and 10 incorrect reference programs; these can be regarded as our training data.

Gold annotations. We generate the ground-truth gold annotations by defining legal and illegal commands under each condition. For example, having more than one "launch new ball" under "when run" is incorrect; placing "score opponent point" under "when run" is also incorrect. Following this logic, we put down a list of legal and illegal commands for each condition. We note that we intentionally chose the Bounce program because it was amenable to generating gold annotations through the API that Code.org exposes to students. While our methods apply broadly, this gold annotation system will not scale to other assignments. The full annotation schema is in Appendix A.5.
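As an illustration of how such rule-based gold labels can be computed, below is a small hypothetical sketch; the two rules shown are the examples mentioned above, not the full schema from Appendix A.5.

```python
# Condition -> commands that are always illegal under it (example rules).
ILLEGAL = {"when run": {"score opponent point"}}
# (condition, command) -> maximum allowed occurrences (example rule).
MAX_COUNT = {("when run", "launch new ball"): 1}

def label_program(program):
    """program: dict mapping each condition to its list of command blocks.
    Returns 1 for a correct program and 0 for an incorrect one."""
    for cond, commands in program.items():
        if any(cmd in ILLEGAL.get(cond, set()) for cmd in commands):
            return 0
        for (c, cmd), limit in MAX_COUNT.items():
            if c == cond and commands.count(cmd) > limit:
                return 0
    return 1

# e.g. two "launch new ball" blocks under "when run" is labeled incorrect:
assert label_program({"when run": ["launch new ball", "launch new ball"]}) == 0
```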
The authors contribute an approach to automatically distinguish between good and bad student assignment submissions by modeling the submissions as MDPs. The authors hypothesize that satisfactory assignments modeled as MDPs will be more alike one another than they are to unsatisfactory assignments. Therefore, this could potentially be used as part of a future automated feedback system. The authors demonstrate this approach on an assignment where students recreate a simple Pong-like environment, and they achieve high accuracy over the most common submissions.
SP:047761908963bea6350f5d65a253c09f1a626093
Hybrid and Non-Uniform DNN quantization methods using Retro Synthesis data for efficient inference
1 INTRODUCTION

Quantization is a widely used and necessary approach for converting heavy Deep Neural Network (DNN) models in Floating Point (FP32) format to a lightweight lower precision format compatible with edge-device inference. The introduction of lower precision computing hardware like the Qualcomm Hexagon DSP (Codrescu, 2015) resulted in various quantization methods (Morgan et al., 1991; Rastegari et al., 2016; Wu et al., 2016; Zhou et al., 2017; Li et al., 2019; Dong et al., 2019; Krishnamoorthi, 2018) compatible with edge devices. Quantizing an FP32 DNN to INT8 or lower precision reduces the model size by at least 4X, depending on the precision opted for. Since the computations happen in lower precision, it also results in faster inference and lower power consumption. These benefits of quantization come with the caveat of accuracy loss, due to the noise introduced into the model's weights and activations. To reduce this accuracy loss, quantization-aware fine-tuning methods were introduced (Zhu et al., 2016; Zhang et al., 2018; Choukroun et al., 2019; Jacob et al., 2018; Baskin et al., 2019; Courbariaux et al., 2015), wherein the FP32 model is trained along with quantizers and quantized weights. The major disadvantage of these methods is that they are computationally intensive and time-consuming, since they involve the whole training process. To address this, various post-training quantization methods (Morgan et al., 1991; Wu et al., 2016; Li et al., 2019; Banner et al., 2019) were developed, which result in negligible to severe accuracy loss when evaluated on different DNNs. Also, to determine the quantized model's weight and activation ranges, most of these methods require access to training data, which may not always be available for applications with security and privacy constraints involving card details, health records, or personal images. Contemporary research on post-training quantization (Nagel et al., 2019; Cai et al., 2020) eliminated the need for training data by estimating the quantization parameters from the Batch-Normalization (BN) layer statistics of the FP32 model, but fails to produce good accuracy when BN layers are not present in the model.

To address the above shortcomings, this paper proposes a data-independent post-training quantization method that estimates the quantization ranges by leveraging 'retro-synthesis' data generated from the original FP32 model. This method yields better accuracy than both data-independent and data-dependent state-of-the-art quantization methods for the models ResNet18, ResNet50 (He et al., 2016), MobileNetV2 (Sandler et al., 2018), AlexNet (Krizhevsky et al., 2012), and ISONet (Qi et al., 2020) on the ImageNet dataset (Deng et al., 2009). It also outperforms state-of-the-art methods at lower precisions such as 6 and 4 bits on the ImageNet and CIFAR-10 datasets. The 'retro-synthesis' data generation takes only 10 to 12 seconds to generate the entire dataset, a minimal overhead compared to the benefit of data independence it provides. Additionally, this paper introduces two variants of post-training quantization methods, namely 'Hybrid Quantization' and 'Non-Uniform Quantization'.

2 PRIOR ART

2.1 QUANTIZATION-AWARE TRAINING BASED METHODS
An efficient integer-only arithmetic inference method for commonly available integer-only hardware is proposed in Jacob et al. (2018), wherein a training procedure is employed that preserves the accuracy of the model even after quantization. The work in Zhang et al. (2018) trained a quantized-bit compatible DNN and associated quantizers for both weights and activations, instead of relying on handcrafted quantization schemes, for better accuracy. A 'Trained Ternary Quantization' approach is proposed in Zhu et al. (2016), wherein the model is trained to reduce the weights to 2-bit precision, achieving a 16x model size reduction without much accuracy loss. Inspired by other methods, Baskin et al. (2019) propose a 'Compression Aware Training' scheme that trains a model to learn better compression of feature maps for inference. Similarly, in the binary connect method (Courbariaux et al., 2015) the network is trained with binary weights during the forward and backward passes, which act as a regularizer. Since these methods mostly train the networks with quantized weights and quantizers, their downside is not only that they are time-consuming but also that they demand training data, which is not always accessible.

2.2 POST-TRAINING QUANTIZATION BASED METHODS

Several post-training quantization methods have been proposed to replace time-consuming quantization-aware training. The method in Choukroun et al. (2019) avoids full network training by formalizing linear quantization as a 'Minimum Mean Squared Error' problem and achieves better accuracy without retraining the model. The 'ACIQ' method (Banner et al., 2019) achieves accuracy close to FP32 models by estimating an analytical clipping range of activations in the DNN. However, to compensate for the accuracy loss, this method relies on a run-time per-channel quantization scheme for activations, which is inefficient and not hardware friendly. Along similar lines, the OCS method (Zhao et al., 2019) proposes to eliminate outliers for better accuracy with minimal overhead. Though these methods considerably reduce the time taken for quantization, they are unfortunately tightly coupled with training data, and are hence not suitable for applications where access to training data is restricted. Contemporary research on data-free post-training quantization has succeeded in eliminating the need to access training data. By adopting a per-tensor quantization approach, the DFQ method (Nagel et al., 2019) achieves accuracy similar to a per-channel quantization approach through cross-layer equalization and bias correction; it successfully eliminates large weight-range variations across the channels of a layer by scaling the weights across channels. In contrast, ZeroQ (Cai et al., 2020) proposes a quantization method that eliminates the need for training data by generating distilled data with the help of the Batch-Normalization layer statistics of the FP32 model, uses it to determine the activation ranges for quantization, and achieves state-of-the-art accuracy. However, these methods tend to suffer accuracy degradation when no Batch-Normalization layers are present in the FP32 model.
To address the above shortcomings, the main contributions of this paper are as follows:
• A data-independent post-training quantization method that generates 'retro-synthesis' data for estimating the activation ranges for quantization, without depending on the Batch-Normalization layer statistics of the FP32 model.
• A 'Hybrid Quantization' method, a combination of per-tensor and per-channel schemes, that achieves state-of-the-art accuracy with lower inference time compared to fully per-channel quantization schemes.
• A 'Non-Uniform Quantization' method, wherein the weights in each layer are clustered and each cluster is allocated a varied number of bins, which achieves 1% better accuracy than state-of-the-art methods on the ImageNet dataset.

3 METHODOLOGY

This section discusses the proposed data-independent post-training quantization methods, namely (a) quantization using retro-synthesis data, (b) Hybrid Quantization, and (c) Non-Uniform Quantization.

3.1 QUANTIZATION USING RETRO-SYNTHESIS DATA

In general, post-training quantization schemes consist of two parts: (i) quantizing the weights, which are static in a given trained FP32 model, and (ii) determining the activation ranges for layers like ReLU, Tanh, and Sigmoid, which vary dynamically with the input data. In this paper, asymmetric uniform quantization is used for the weights, whereas the proposed 'retro-synthesis' data is used to determine the activation ranges. It should be noted that we purposefully resort to simple asymmetric uniform quantization for the weights and do not employ advanced techniques such as outlier elimination or weight clipping to reduce quantization loss. This is in the interest of demonstrating the effectiveness of 'retro-synthesis' data in accurately determining the quantization ranges of activation outputs. In the other two proposed methods (b) and (c), however, we propose two newly developed weight quantization methods for efficient inference with improved accuracy.

3.1.1 RETRO-SYNTHESIS DATA GENERATION

Aiming for a data-independent quantization method, it is challenging to estimate activation ranges without access to the training data. An alternative is to use "random data" with a Gaussian distribution of zero mean and unit variance, which results in inaccurate estimation of the activation ranges and thereby poor accuracy; the accuracy degrades rapidly when quantizing to lower precisions such as 6, 4, and 2 bits. Recently, ZeroQ (Cai et al., 2020) proposed a quantization method using distilled data and showed significant improvement, but presented no results on generating distilled data for models without Batch-Normalization layers or the corresponding accuracy. In contrast, inspired by ZeroQ, we put forward a modified version of the data generation approach that relies on the fact that DNNs trained to discriminate between different image classes embed relevant information about the images. Hence, by considering the class loss for a particular image class and traversing backward through the FP32 model, it is possible to generate image data with statistics similar to those of the respective class.
Therefore, the proposed "retro-synthesis" data generation is based on a property of the trained DNN model: image data that maximizes the class score is generated by incorporating the notion of the class features captured by the model. In this way, we generate a set of images corresponding to each class on which the model was trained. Since the data is generated from the original model itself, we name it "retro-synthesis" data. It should be observed that this method has no dependence on the presence of Batch-Normalization layers in the FP32 model, thus overcoming the downside of ZeroQ. We also find that, for models with Batch-Normalization layers, incorporating the proposed "class-loss" into the distilled data generation algorithm of ZeroQ results in improved accuracy. The proposed "retro-synthesis" data generation method is detailed in Algorithm 1. Given a fully trained FP32 model and a class of interest, our aim is to empirically generate an image that is representative of the class in terms of the model's class score. More formally, let P(C) be the softmax value of class C computed by the final layer of the model for an image I. The aim is to generate an image that, when passed to the model, gives the highest softmax value for class C.

Algorithm 1: Retro-synthesis data generation
Input: pre-trained FP32 model M, target class C.
Output: a set of retro-synthesis data corresponding to target class C.
1. Init: I ← random_gaussian(batch_size, input_shape)
2. Init: Target ← rand(no_of_classes) such that argmax(Target) = C
3. Init: µ0 = 0, σ0 = 1
4. Get (µ_i^BN, σ_i^BN) from the batch-norm layers of M (if present), i ∈ {1, ..., n}, where n is the number of batch-norm layers
5. For j = 1, ..., number of epochs:
   (a) Forward propagate I and gather intermediate activation statistics
   (b) Output = M(I)
   (c) Loss_BN = 0
   (d) For k = 1, ..., n:
       i. Get (µ_k, σ_k)
       ii. Loss_BN ← Loss_BN + L((µ_k, σ_k), (µ_k^BN, σ_k^BN))
   (e) Calculate (µ0', σ0') of I
   (f) Loss_G ← L((µ0, σ0), (µ0', σ0'))
   (g) Loss_C ← L(Target, Output)
   (h) Total loss = Loss_BN + Loss_G + Loss_C
   (i) Update I ← backward(Total loss)

The "retro-synthesis" data generation for a target class C starts with random data I drawn from a Gaussian distribution and a forward pass on I to obtain the intermediate activations and output labels. We then calculate the aggregated loss consisting of the loss between the stored batch-norm statistics and the intermediate activation statistics (Loss_BN), the Gaussian loss (Loss_G), and the class loss (Loss_C) between the output of the forward pass and our target output. The L2 loss formulation in Equation 1 is used for Loss_BN and Loss_G, whereas the mean squared error is used to compute Loss_C. The calculated loss is then backpropagated until convergence, generating a batch of retro-synthesis data for class C. The same algorithm extends to generating retro-synthesis data for all classes.

$L((\mu_k, \sigma_k), (\mu_k^{BN}, \sigma_k^{BN})) = \|\mu_k - \mu_k^{BN}\|_2^2 + \|\sigma_k - \sigma_k^{BN}\|_2^2$ (1)

where L is the computed loss, and $(\mu_k, \sigma_k)$ and $(\mu_k^{BN}, \sigma_k^{BN})$ are the mean and standard deviation of the k-th activation layer and the corresponding Batch-Normalization layer, respectively. From the sample visual comparison of the retro-synthesis data against random data depicted in Fig. 1, it is evident that the retro-synthesis data captures relevant features from the respective image classes in a DNN-understandable format. Hence, using the retro-synthesis data for the estimation of activation ranges achieves better accuracy than using random data. It also outperforms the state-of-the-art data-free quantization methods (Nagel et al., 2019; Cai et al., 2020) by a good accuracy margin when validated on models with and without Batch-Normalization layers. Therefore, the same data generation technique is used in the other two proposed quantization methods (b) and (c) as well.
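For concreteness, below is a condensed PyTorch sketch of Algorithm 1. It assumes the model exposes its intermediate activations (e.g., via forward hooks, omitted here) and that `bn_stats` holds the stored (mean, std) pairs of the FP32 model when BN layers exist; these interface details are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def retro_synthesis(model, target_class, input_shape, n_classes,
                    bn_stats=(), steps=500, lr=0.1):
    """Optimize a random input so the model assigns it to target_class."""
    model.eval()
    x = torch.randn(1, *input_shape, requires_grad=True)  # init from N(0, 1)
    target = F.one_hot(torch.tensor([target_class]), n_classes).float()
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        out, acts = model(x)  # assumed to also return activations
        # Batch-norm statistics loss (zero if the model has no BN layers).
        loss_bn = sum((a.mean() - m) ** 2 + (a.std() - s) ** 2
                      for a, (m, s) in zip(acts, bn_stats))
        # Gaussian prior on the input itself (mu_0 = 0, sigma_0 = 1).
        loss_g = x.mean() ** 2 + (x.std() - 1.0) ** 2
        # Class loss pulling the prediction toward the target class.
        loss_c = F.mse_loss(torch.softmax(out, dim=1), target)
        (loss_bn + loss_g + loss_c).backward()
        opt.step()
    return x.detach()
```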
This paper considers the problem of data-free post-training quantization of classification networks. It proposes three extensions of an existing framework, ZeroQ (Cai et al., 2020): (1) in order to generate distilled data for network sensitivity analysis, the "Retro Synthesis" method is proposed to turn a random image into one that represents a desired class label without relying on batch norm statistics as in ZeroQ; (2) a hybrid quantization strategy is proposed to optionally provide finer-grained per-channel quantization instead of the typical per-layer quantization; (3) a non-uniform quantization grid is proposed to better represent quantized weights, instead of the uniform quantization in ZeroQ. Empirical evaluations demonstrate the effectiveness of the proposed approach.
SP:2eed06887f51560197590d617b1a37ec6d22e943
The Deep Bootstrap Framework: Good Online Learners are Good Offline Generalizers
1 INTRODUCTION

The goal of a generalization theory in supervised learning is to understand when and why trained models have small test error. The classical framework of generalization decomposes the test error of a model $f_t$ as:

$\text{TestError}(f_t) = \text{TrainError}(f_t) + \underbrace{[\text{TestError}(f_t) - \text{TrainError}(f_t)]}_{\text{Generalization gap}}$ (1)

and studies each part separately (e.g., Vapnik and Chervonenkis (1971); Blumer et al. (1989); Shalev-Shwartz and Ben-David (2014)). Many works have applied this framework to study generalization of deep networks (e.g., Bartlett (1997); Bartlett et al. (1999); Bartlett and Mendelson (2002); Anthony and Bartlett (2009); Neyshabur et al. (2015b); Dziugaite and Roy (2017); Bartlett et al. (2017); Neyshabur et al. (2017); Harvey et al. (2017); Golowich et al. (2018); Arora et al. (2018; 2019); Allen-Zhu et al. (2019); Long and Sedghi (2019); Wei and Ma (2019)). However, there are at least two obstacles to understanding generalization of modern neural networks via the classical approach.

1. Modern methods can interpolate, reaching TrainError ≈ 0, while still performing well. In these settings, the decomposition of Equation (1) does not actually reduce the test error into two different subproblems: it amounts to writing TestError = 0 + TestError. That is, understanding the generalization gap here is exactly equivalent to understanding the test error itself.

2. Most if not all techniques for understanding the generalization gap (e.g., uniform convergence, VC-dimension, regularization, stability, margins) remain vacuous (Zhang et al., 2017; Belkin et al., 2018a;b; Nagarajan and Kolter, 2019) and not predictive (Nagarajan and Kolter, 2019; Jiang et al., 2019; Dziugaite et al., 2020) for modern networks.

In this work, we propose an alternate approach to understanding generalization to help overcome these obstacles. The key idea is to consider an alternate decomposition:

$\text{TestError}(f_t) = \underbrace{\text{TestError}(f_t^{\text{iid}})}_{A:\ \text{Online Learning}} + \underbrace{[\text{TestError}(f_t) - \text{TestError}(f_t^{\text{iid}})]}_{B:\ \text{Bootstrap error}}$ (2)

where $f_t$ is the neural network after t optimization steps (the "Real World"), and $f_t^{\text{iid}}$ is a network trained identically to $f_t$ but using fresh samples from the distribution in each mini-batch step (the "Ideal World"). That is, $f_t^{\text{iid}}$ is the result of optimizing the population loss for t steps, while $f_t$ is the result of optimizing the empirical loss as usual (we define this more formally later). This leads to a different decoupling of concerns and proposes an alternate research agenda for understanding generalization. To understand generalization in the bootstrap framework, it is sufficient to understand:

(A) Online Learning: How quickly models optimize the population loss in the infinite-data regime (the Ideal World).

(B) Finite-Sample Deviations: How closely models behave in the finite-data vs. infinite-data regimes (the bootstrap error).

Although neither of these points is theoretically understood for deep networks, they are closely related to rich areas in optimization and statistics whose tools have not been brought fully to bear on the problem of generalization. The first part (A) is purely a question in online stochastic optimization: we have access to a stochastic gradient oracle for a population loss function, and we are interested in how quickly an online optimization algorithm (e.g.
SGD, Adam) reaches small population loss. This problem is well studied in the online learning literature for convex functions (Bubeck, 2011; Hazan, 2019; Shalev-Shwartz et al., 2011) and is an active area of research in nonconvex settings (Jin et al., 2017; Lee et al., 2016; Jain and Kar, 2017; Gao et al., 2018; Yang et al., 2018; Maillard and Munos, 2010). In the context of neural networks, optimization is usually studied on the empirical loss landscape (Arora et al., 2019; Allen-Zhu et al., 2019), but we propose studying optimization on the population loss landscape directly. This highlights a key difference in our approach: we never compare test and train quantities; we only consider test quantities.

The second part (B) involves approximating fresh samples with "reused" samples and reasoning about the behavior of certain functions under this approximation. This is closely related to the nonparametric bootstrap in statistics (Efron, 1979; Efron and Tibshirani, 1986), where sampling from the population distribution is approximated by sampling with replacement from an empirical distribution. Bootstrapped estimators are widely used in applied statistics, and their theoretical properties are known in certain cases (e.g., Hastie et al. (2009); James et al. (2013); Efron and Hastie (2016); Van der Vaart (2000)). Although current bootstrap theory does not apply to neural networks, it is conceivable that these tools could eventually be extended to our setting.

Experimental Validation. Beyond the theoretical motivation, our main experimental claim is that the bootstrap decomposition is actually useful: in realistic settings, the bootstrap error is often small, and the performance of real classifiers is largely captured by their performance in the Ideal World. Figure 1 shows one example of this, as a preview of our more extensive experiments in Section 4. We plot the test error of a ResNet (He et al., 2016a), an MLP, and a Vision Transformer (Dosovitskiy et al., 2020) on a CIFAR-10-like task over increasing minibatch SGD iterations. The Real World is trained on 50K samples for 100 epochs. The Ideal World is trained on 5 million samples with a single pass. Notice that the bootstrap error is small for all architectures, although the generalization gap can be large. In particular, the convnet generalizes better than the MLP on finite data, but this is "because" it optimizes faster on the population loss with infinite data. See Appendix D.1 for details.

Our Contributions.
• Framework: We propose the Deep Bootstrap framework for understanding generalization in deep learning, which connects offline generalization to online optimization (Section 2).
• Validation: We give evidence that the bootstrap error is small in realistic settings for supervised image classification by conducting extensive experiments on large-scale tasks (including variants of CIFAR-10 and ImageNet) for many architectures (Section 4). Thus, the generalization of models is largely determined by their optimization speed in online and offline learning.
• Implications: We highlight how our framework can unify and yield insight into important phenomena in deep learning, including implicit bias, model selection, data augmentation, and pretraining (Section 5).
In particular: good models and training procedures are those which (1) optimize quickly in the Ideal World and (2) do not optimize too quickly in the Real World.

Additional Related Work. The bootstrap error is also related to algorithmic stability (e.g., Bousquet and Elisseeff (2001); Hardt et al. (2016)), since both quantities involve replacing samples with fresh samples. However, stability-based generalization bounds cannot tightly bound the bootstrap error, since there are many settings where the generalization gap is high but the bootstrap error is low.

2 THE DEEP BOOTSTRAP

Here we describe the Deep Bootstrap framework and our main claims more formally. Let $\mathcal{F}$ denote a learning algorithm, including architecture and optimizer. We consider optimizers that can be used in online learning, such as stochastic gradient descent and variants. Let $\text{Train}_{\mathcal{F}}(\mathcal{D}, n, t)$ denote training in the "Real World": using the architecture and optimizer specified by $\mathcal{F}$, on a train set of n samples from distribution $\mathcal{D}$, for t optimizer steps. Let $\text{Train}_{\mathcal{F}}(\mathcal{D}, \infty, t)$ denote this same optimizer operating on the population loss (the "Ideal World"). Note that these two procedures use identical architectures, learning-rate schedules, mini-batch sizes, etc.; the only difference is that the Ideal World optimizer sees a fresh minibatch of samples in each optimization step, while the Real World reuses samples in minibatches. Let the Real and Ideal World trained models be:

Real World: $f_t \leftarrow \text{Train}_{\mathcal{F}}(\mathcal{D}, n, t)$
Ideal World: $f_t^{\text{iid}} \leftarrow \text{Train}_{\mathcal{F}}(\mathcal{D}, \infty, t)$

We now claim that for all t until the Real World converges, the two models $f_t$ and $f_t^{\text{iid}}$ have similar test performance. In our main claims, we differ slightly from the presentation in the Introduction in that we consider the "soft-error" of classifiers instead of their hard errors. The soft-accuracy of a classifier is defined as the softmax probability on the correct label, and (soft-error) := 1 − (soft-accuracy); equivalently, this is the expected error of temperature-1 samples from the softmax distribution. Formally, define ε as the bootstrap error, the gap in soft-error between the Real and Ideal Worlds at time t:

$\text{TestSoftError}_{\mathcal{D}}(f_t) = \text{TestSoftError}_{\mathcal{D}}(f_t^{\text{iid}}) + \varepsilon(n, \mathcal{D}, \mathcal{F}, t)$ (3)

Our main experimental claim is that the bootstrap error ε is uniformly small in realistic settings.

Claim 1 (Bootstrap Error Bound, informal). For choices of $(n, \mathcal{D}, \mathcal{F})$ corresponding to realistic settings in deep learning for supervised image classification, the bootstrap error $\varepsilon(n, \mathcal{D}, \mathcal{F}, t)$ is small for all $t \leq T_0$.

The "stopping time" $T_0$ is defined as the time when the Real World reaches small training error (we use 1%), that is, when Real World training has essentially converged. The restriction to $t \leq T_0$ is necessary, since as $t \to \infty$ the Ideal World will continue to improve, but the Real World will at some point essentially stop changing (when the train error ≈ 0). However, we claim that these worlds are close for "as long as we can hope": as long as the Real World optimizer is still moving significantly.

Error vs. Soft-Error. We chose to measure soft-error instead of hard error in our framework for both empirical and theoretically motivated reasons. Empirically, we found that the bootstrap gap is often smaller with respect to soft-errors. Theoretically, we want to define the bootstrap gap such that it converges to 0 as data and model size are scaled to infinity.
Specifically, if we consider an overparameterized scaling limit where the Real World models always interpolate the train data, then Distributional Generalization (Nakkiran and Bansal, 2020) implies that the bootstrap gap for test error will not converge to 0 on distributions with non-zero Bayes risk. Roughly, this is because the Ideal World classifier will converge to the Bayes-optimal one ($\arg\max_y p(y|x)$), while the Real World interpolating classifier will converge to a sampler from $p(y|x)$. Considering soft-errors instead of errors nullifies this issue. We elaborate further on the differences between the worlds in Section 6. See also Appendix C for relations to the nonparametric bootstrap (Efron, 1979).
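As a toy illustration of the two worlds, the sketch below trains the same architecture with the same optimizer twice, differing only in where minibatches come from; the synthetic distribution and all hyperparameters are placeholder choices, not the paper's experimental setup.

```python
import torch
import torch.nn as nn

def train(model, get_batch, t_steps, lr=0.1):
    """Identical optimizer loop for both worlds; only get_batch differs."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(t_steps):
        x, y = get_batch()
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    return model

def population_batch(b=128):            # Ideal World: fresh samples each step
    x = torch.randn(b, 20)
    return x, (x.sum(dim=1) > 0).long()  # a toy distribution D

x_tr, y_tr = population_batch(5000)     # Real World: n = 5000 fixed samples
def empirical_batch(b=128):
    idx = torch.randint(0, len(x_tr), (b,))
    return x_tr[idx], y_tr[idx]         # minibatches reuse the same samples

f_real = train(nn.Linear(20, 2), empirical_batch, t_steps=1000)
f_iid = train(nn.Linear(20, 2), population_batch, t_steps=1000)
# Claim 1 says TestSoftError(f_real) stays close to TestSoftError(f_iid)
# until f_real has essentially fit its train set.
```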
The authors propose a bootstrap framework for understanding generalization in deep learning. In particular, instead of the usual decomposition of test error as training error plus the generalization gap, the bootstrap framework decomposes the empirical test error as the online error plus the bootstrap error (the gap between the population and empirical errors). The authors then demonstrate empirically on variants of CIFAR-10 and a subset of ImageNet that the bootstrap error is small for several common architectures. Hence, the empirical test error is controlled by the online error (i.e., a rapid decrease in the error in the online setting leads to low test error). The authors also provide empirical evidence that the same techniques perform well in both the over- and under-parameterized regimes.
SP:259b64e62b640ccba4bc82c50e59db7662677e6b
Self-Supervised Time Series Representation Learning by Inter-Intra Relational Reasoning
1 INTRODUCTION

Time series data is ubiquitous, and there has been significant progress in time series analysis (Das, 1994) in machine learning, signal processing, and other related areas, with many real-world applications such as healthcare (Stevner et al., 2019), industrial diagnosis (Kang et al., 2015), and financial forecasting (Sen et al., 2019). Deep learning models have emerged as successful models for time series analysis (Hochreiter & Schmidhuber, 1997; Graves et al., 2013; Shukla & Marlin, 2019; Fortuin et al., 2019; Oreshkin et al., 2020). Despite their fair share of success, existing deep supervised models are not suitable for high-dimensional time series data with a limited number of training samples, as these data-driven approaches rely on ground truth for supervision, where data labeling is a labor-intensive and time-consuming process, and sometimes impossible for time series data. One solution is to learn useful representations from unlabeled data, which can substantially reduce the dependence on costly manual annotation.

Self-supervised learning aims to capture the most informative properties of the underlying structure of unlabeled data through self-generated supervisory signals to learn generalized representations. Recently, self-supervised learning has attracted more and more attention in computer vision, with pretext tasks designed on image data, such as solving jigsaw puzzles (Noroozi & Favaro, 2016), inpainting (Pathak et al., 2016), rotation prediction (Gidaris et al., 2018), and contrastive learning of visual representations (Chen et al., 2020), and on video data, such as object tracking (Wang & Gupta, 2015) and pace prediction (Wang et al., 2020). Although some video-based approaches attempt to capture temporal information in the designed pretext task, time series is far different structural data compared with video. More recently, in the time series analysis domain, some metric learning based self-supervised methods such as triplet loss (Franceschi et al., 2019) and contrastive loss (Schneider et al., 2019; Saeed et al., 2020), as well as multi-task learning based self-supervised methods that predict different handcrafted features (Pascual et al., 2019a; Ravanelli et al., 2020) and different signal transformations (Saeed et al., 2019; Sarkar & Etemad, 2020), have emerged. However, few of these works consider the intra-temporal structure of time series. Therefore, how to design an efficient pretext task for self-supervised time series representation learning is still an open problem.

In this work, we present SelfTime: a general self-supervised time series representation learning framework. Inspired by relational discovery in self-supervised human learning, which attempts to discover new knowledge by reasoning about the relations among entities (Goldwater et al., 2018; Patacchiola & Storkey, 2020), we explore inter-sample relation reasoning and intra-temporal relation reasoning on time series to capture the underlying structural patterns of unlabeled time series data. Specifically, as shown in Figure 1, for inter-sample relation reasoning, given an anchor sample, we generate positive and negative samples from its transformation counterpart and from another individual sample, respectively.
For intra-temporal relation reasoning, we first generate an anchor piece; then, several reference pieces are sampled to construct different scales of temporal relations between the anchor piece and each reference piece, where the relation scales are determined by the temporal distance. Note that in Figure 1 we only show an example with 3 scales of temporal relations, including short-term, middle-term, and long-term relations, for illustration, whereas in different scenarios there could be different candidate sets of temporal relation scales. Based on the sampled relations, a shared feature extraction backbone combined with two separate relation reasoning heads is employed to quantify the relationships between sample pairs or time-piece pairs for inter-sample relation reasoning and intra-temporal relation reasoning, respectively. Finally, useful representations of time series are extracted from the backbone under the supervision of the relation reasoning heads on the unlabeled data. Overall, SelfTime is simple yet effective, conducting the designed pretext tasks directly on the original input signals. Our main contributions are three-fold: (1) We present a general self-supervised time series representation learning framework that investigates different levels of relations in time series data, including inter-sample relations and intra-temporal relations. (2) We design a simple and effective intra-temporal relation sampling strategy to capture the underlying temporal patterns of time series. (3) We conduct extensive experiments on different categories of real-world time series data and systematically study the impact of different data augmentation strategies and temporal relation sampling strategies on self-supervised learning of time series. Comparisons with multiple state-of-the-art baselines show that SelfTime sets a new state of the art in self-supervised time series representation learning.

2 RELATED WORK

Time Series Modeling. Over the last decades, time series modeling has received close attention, with numerous efficient methods, including distance-based, feature-based, ensemble-based, and deep learning based methods. Distance-based methods (Berndt & Clifford, 1994; Górecki & Łuczak, 2014) try to measure the similarity between time series using Euclidean distance or Dynamic Time Warping distance and then conduct classification based on 1-NN classifiers. Feature-based methods aim to extract useful features for time series representation; two typical types are bag-of-features based methods (Baydogan et al., 2013; Schäfer, 2015) and shapelet based methods (Ye & Keogh, 2009; Hills et al., 2014). Ensemble-based methods (Lines & Bagnall, 2015; Bagnall et al., 2015) aim at combining multiple classifiers for higher classification performance. More recently, deep learning based methods (Karim et al., 2017; Ma et al., 2019; Cheng et al., 2020) conduct classification by cascading a feature extractor and a classifier based on MLPs, RNNs, or CNNs in an end-to-end manner. Our approach focuses instead on self-supervised representation learning of time series from unlabeled data, exploiting the inter-sample relations and intra-temporal relations of time series to guide the generation of useful features.

Relational Reasoning. Reasoning about the relations between entities and their properties is central to generally intelligent behavior (Kemp & Tenenbaum, 2008).
Over the past decades, there has been extensive research on relational reasoning and its applications, including knowledge bases (Socher et al., 2013), question answering (Johnson et al., 2017; Santoro et al., 2017), video action recognition (Zhou et al., 2018), reinforcement learning (Zambaldi et al., 2019), and graph representation (Battaglia et al., 2018), which perform relational reasoning directly on constructed sets or graphs that explicitly represent the target entities and their relations. Different from those previous works, which learn a relation reasoning head for a special task, inter-sample relation reasoning on unlabeled image data is employed in (Patacchiola & Storkey, 2020) to learn useful visual representations in the underlying backbone. Inspired by this, in our work we focus on time series data, exploring both inter-sample and intra-temporal relations for time series representation in a self-supervised scenario.

Self-supervised Learning. Self-supervised learning has attracted a lot of attention recently in different domains, including computer vision, audio/speech processing, and time series analysis. For image data, pretext tasks such as solving jigsaw puzzles (Noroozi & Favaro, 2016), rotation prediction (Gidaris et al., 2018), and visual contrastive learning (Chen et al., 2020) have been designed for self-supervised visual representation. For video data, pretext tasks such as frame order validation (Misra et al., 2016; Wei et al., 2018) and video pace prediction (Wang et al., 2020) have been designed that additionally consider the temporal signal of video. Different from video signals, which contain plenty of raw features in both the spatial and temporal dimensions, time series is far different structural data with fewer raw features at each time point. For time series data such as audio and ECG, metric learning based methods such as triplet loss (Franceschi et al., 2019) and contrastive loss (Schneider et al., 2019; Saeed et al., 2020), and multi-task learning based methods that predict different handcrafted features such as MFCCs, prosody, and waveform (Pascual et al., 2019a; Ravanelli et al., 2020), or different transformations of the raw signal (Sarkar & Etemad, 2020; Saeed et al., 2019), have emerged recently. However, few of these works consider the intra-temporal structure of time series. Therefore, how to design an efficient self-supervised pretext task to capture the underlying structure of time series is still an open problem.

3 METHOD

Given an unlabeled time series set $\mathcal{T} = \{t_n\}_{n=1}^{N}$, where each time series $t_n = (t_{n,1}, \dots, t_{n,T})^\mathsf{T}$ contains T ordered real values, we aim to learn a useful representation $z_n = f_\theta(t_n)$ from the backbone encoder $f_\theta(\cdot)$, where θ denotes the learnable weights of the neural network. The architecture of the proposed SelfTime is shown in Figure 2; it consists of an inter-sample relational reasoning branch and an intra-temporal relational reasoning branch. First, taking the original time series signals and their sampled time pieces as inputs, a shared backbone encoder $f_\theta(\cdot)$ extracts time series features and time piece features to aggregate the inter-sample relation features and intra-temporal relation features, respectively, and then feeds them to two separate relation reasoning heads $r_\mu(\cdot)$ and $r_\varphi(\cdot)$ to reason about the final relation scores of the inter-sample and intra-temporal relations.
3.1 INTER-SAMPLE RELATION REASONING

Formally, given any two different time series samples $t_m$ and $t_n$ from $\mathcal{T}$, we randomly generate two sets of K augmentations $\mathcal{A}(t_m) = \{t_m^{(i)}\}_{i=1}^{K}$ and $\mathcal{A}(t_n) = \{t_n^{(i)}\}_{i=1}^{K}$, where $t_m^{(i)}$ and $t_n^{(i)}$ are the i-th augmentations of $t_m$ and $t_n$, respectively. Then, we construct two types of relation pairs: positive relation pairs and negative relation pairs. A positive relation pair is $(t_m^{(i)}, t_m^{(j)})$ sampled from the same augmentation set $\mathcal{A}(t_m)$, while a negative relation pair is $(t_m^{(i)}, t_n^{(j)})$ sampled from different augmentation sets $\mathcal{A}(t_m)$ and $\mathcal{A}(t_n)$. Based on the sampled relation pairs, we use the backbone encoder $f_\theta$ to learn the relation representations as follows. First, we extract the sample representations $z_m^{(i)} = f_\theta(t_m^{(i)})$, $z_m^{(j)} = f_\theta(t_m^{(j)})$, and $z_n^{(j)} = f_\theta(t_n^{(j)})$. Then, we construct the positive relation representation $[z_m^{(i)}, z_m^{(j)}]$ and the negative relation representation $[z_m^{(i)}, z_n^{(j)}]$, where $[\cdot, \cdot]$ denotes vector concatenation. Next, the inter-sample relation reasoning head $r_\mu(\cdot)$ takes the generated relation representations as input and reasons the final relation scores $h_{2m-1}^{(i,j)} = r_\mu([z_m^{(i)}, z_m^{(j)}])$ for positive relations and $h_{2m}^{(i,j)} = r_\mu([z_m^{(i)}, z_n^{(j)}])$ for negative relations, respectively. Finally, the inter-sample relation reasoning task is formulated as a binary classification task, and the model is trained with the binary cross-entropy loss $\mathcal{L}_{\text{inter}}$:

$\mathcal{L}_{\text{inter}} = -\sum_{n=1}^{2N} \sum_{i=1}^{K} \sum_{j=1}^{K} \left( y_n^{(i,j)} \cdot \log h_n^{(i,j)} + (1 - y_n^{(i,j)}) \cdot \log(1 - h_n^{(i,j)}) \right)$ (1)

where $y_n^{(i,j)} = 1$ for positive relations and $y_n^{(i,j)} = 0$ for negative relations.
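A compact PyTorch sketch of this loss is given below. The `backbone` and `head` modules stand in for $f_\theta$ and $r_\mu$, and batching over all N samples is simplified to a single (m, n) pair for clarity; these simplifications are ours, not the paper's.

```python
import torch
import torch.nn.functional as F

def inter_sample_loss(backbone, head, aug_m, aug_n):
    """aug_m, aug_n: (K, T) tensors holding K augmentations of two series."""
    z_m, z_n = backbone(aug_m), backbone(aug_n)   # (K, d) embeddings
    K = z_m.shape[0]
    i, j = torch.meshgrid(torch.arange(K), torch.arange(K), indexing="ij")
    # Positive pairs: both views come from the same original sample m.
    pos = torch.cat([z_m[i.flatten()], z_m[j.flatten()]], dim=1)
    # Negative pairs: one view from sample m, one from sample n.
    neg = torch.cat([z_m[i.flatten()], z_n[j.flatten()]], dim=1)
    scores = torch.sigmoid(head(torch.cat([pos, neg], dim=0))).squeeze(1)
    labels = torch.cat([torch.ones(K * K), torch.zeros(K * K)])
    return F.binary_cross_entropy(scores, labels)  # Eq. (1)
```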
This paper presents a general self-supervised time series representation learning framework. It explores inter-sample relation reasoning and intra-temporal relation reasoning of time series to capture the underlying structural patterns of unlabeled time series data. The proposed method achieves new state-of-the-art results, outperforming existing methods by a significant margin on classification tasks over multiple real-world time-series datasets.
Reviving Autoencoder Pretraining
1 INTRODUCTION.

While approaches such as greedy layer-wise autoencoder pretraining (Bengio et al., 2007; Vincent et al., 2010; Erhan et al., 2010) arguably paved the way for many fundamental concepts of today's methodologies in deep learning, the pressing need for pretraining neural networks has diminished in recent years. This was primarily caused by numerous advances in terms of regularization (Srivastava et al., 2014; Hanson & Pratt, 1989; Weigend et al., 1991), network architectures (Ronneberger et al., 2015; He et al., 2016; Vaswani et al., 2017), and improved optimization algorithms (Kingma & Ba, 2014; Loshchilov & Hutter, 2017; Reddi et al., 2019). Despite these advances, training deep neural networks that generalize well to a wide range of previously unseen tasks remains a fundamental challenge (Neyshabur et al., 2017; Kawaguchi et al., 2017; Frankle & Carbin, 2018). Inspired by techniques for orthogonalization (Ozay & Okatani, 2016; Jia et al., 2017; Bansal et al., 2018), we revisit the classic idea of unsupervised autoencoder pretraining in the context of reversible network architectures. We propose a modified variant that relies on a full reverse pass trained in conjunction with a given training task. A key insight is that there is no need for "greediness", i.e., layer-wise decompositions of the network structure, and that it is additionally beneficial to take a specific problem domain into account at the time of pretraining. We establish links between singular value decomposition (SVD) and pretraining, and show how our approach yields an embedding of problem-aware dominant features in the weight matrices. An SVD can then be leveraged to conveniently gain insights about the learned structures. Most importantly, we demonstrate that the proposed pretraining yields improved performance for a variety of learning and transfer tasks. Our formulation incurs only a very moderate computational cost, is very easy to integrate, and is widely applicable. The structure of our networks is influenced by invertible network architectures that have received significant attention in recent years (Gomez et al., 2017; Jacobsen et al., 2018; Zhang et al., 2018a). However, instead of aiming for a bijective mapping that reproduces inputs, we strive to learn a general representation by constraining the network to represent an as-reversible-as-possible process for all intermediate layer activations. Thus, even for cases where a classifier can, e.g., rely on color to infer an object type, the model is encouraged to learn a representation that can recover the input: not only the color of the input should be retrieved, but also, e.g., its shape. In contrast to most structures for invertible networks, our approach does not impose architectural restrictions. We demonstrate the benefits of our pretraining for a variety of architectures, from fully connected layers to convolutional neural networks (CNNs), over networks with and without batch normalization, to GAN architectures. We discuss other existing approaches and relate them to the proposed method in the appendix. Below, we will first give an overview of our formulation and its connection to singular values, before evaluating our model in the context of transfer learning. For a regular, i.e., non-transfer task, the goal usually is to train a network that achieves optimal performance for one specific objective.
During a regular training run, the network naturally exploits any observed correlations between input and output distribution. An inherent difficulty in this setting is that typically no knowledge about the specifics of the new data and task domains is available when training the source model. Hence, it is common practice to target broad and difficult tasks, hoping that this will result in features that are applicable in new domains (Zamir et al., 2018; Gopalakrishnan et al., 2017; Ding et al., 2017). Motivated by autoencoder pretraining, we instead leverage a pretraining approach that takes into account the data distribution of the inputs. We demonstrate the gains in accuracy for original and new tasks below for a wide range of applications, from image classification to data-driven weather forecasting.

2 METHOD.

With state-of-the-art methods, there is no need for breaking down the training process into single layers. Hence, we consider approaches that target whole networks, and especially orthogonalization regularizers, as a starting point (Huang et al., 2018). Orthogonality constraints were shown to yield improved training performance in various settings (Bansal et al., 2018), and can be formulated as:

$$L_{ort} = \sum_{m=1}^{n} \left\| M_m^\top M_m - I \right\|_F^2, \quad (1)$$

i.e., enforcing the transpose of the weight matrix $M_m \in \mathbb{R}^{s_m^{out} \times s_m^{in}}$ for all layers $m$ to yield its inverse when being multiplied with the original matrix. $I$ denotes the identity matrix, with $I = (e_m^1, \dots, e_m^{s_m^{in}})$ and $e_m^j$ denoting the $j$-th column unit vector. Minimizing equation 1, i.e., $M_m^\top M_m - I = 0$, is mathematically equivalent to:

$$M_m^\top M_m e_m^j - e_m^j = 0, \quad j = 1, 2, \dots, s_m^{in}, \quad (2)$$

with $\mathrm{rank}(M_m^\top M_m) = s_m^{in}$, and the $e_m^j$ as eigenvectors of $M_m^\top M_m$ with eigenvalues of 1. This formulation highlights that equation 2 does not depend on the training data and instead only targets the content of $M_m$. Inspired by classical unsupervised pretraining, we re-formulate the orthogonality constraint in a data-driven manner to take into account the set of inputs $D_m$ for the current layer (either activations from a previous layer, or the training data $D_1$), and instead minimize

$$L_{RR} = \sum_{m=1}^{n} \left( M_m^\top M_m d_m^i - d_m^i \right)^2 = \sum_{m=1}^{n} \left( (M_m^\top M_m - I)\, d_m^i \right)^2, \quad (3)$$

where $d_m^i \in D_m \subset \mathbb{R}^{s_m^{in}}$. Due to its reversible nature, we will denote our approach with an RR subscript in the following. In contrast to classical autoencoder pretraining, we minimize this loss jointly for all layers of a network, and while orthogonality only focuses on $M_m$, our formulation allows for minimizing the loss by extracting the dominant features of the input data. Let $q$ denote the number of linearly independent entries in $D_m$, i.e., its dimension, and $t$ the size of the training data, i.e., $|D_m| = t$, usually with $q < t$. For every single datum $d_m^i$, $i = 1, 2, \dots, t$, equation 3 results in

$$M_m^\top M_m d_m^i - d_m^i = 0, \quad (4)$$

and hence the $d_m^i$ are eigenvectors of $M_m^\top M_m$ with corresponding eigenvalues of 1. Thus, instead of the generic constraint $M_m^\top M_m = I$, which is completely agnostic to the data at hand, the proposed formulation of equation 4 is aware of the training data, which improves the generality of the learned representation, as we will demonstrate in detail below. As by construction $\mathrm{rank}(M_m) = r \le \min(s_m^{in}, s_m^{out})$, the SVD of $M_m$ yields:

$$M_m = U_m \Sigma_m V_m^\top, \quad \text{with} \quad U_m = (u_m^1, \dots, u_m^r, u_m^{r+1}, \dots, u_m^{s_m^{out}}) \in \mathbb{R}^{s_m^{out} \times s_m^{out}}, \quad V_m = (v_m^1, \dots, v_m^r, v_m^{r+1}, \dots, v_m^{s_m^{in}}) \in \mathbb{R}^{s_m^{in} \times s_m^{in}}, \quad (5)$$

with left and right singular vectors in $U_m$ and $V_m$, respectively, and $\Sigma_m$ having the square roots of the $r$ eigenvalues of $M_m^\top M_m$ on its diagonal. $u_m^k$ and $v_m^k$ ($k = 1, \dots, r$) are the eigenvectors of $M_m M_m^\top$ and $M_m^\top M_m$, respectively (Wall et al., 2003). Here, especially the right singular vectors in $V_m^\top$ are important, as they determine which structures of the input are processed by the transformation $M_m$. The original orthogonality constraint of equation 2 yields $r$ unit vectors $e_m^j$ as the eigenvectors of $M_m^\top M_m$. Hence, the influence of equation 2 on $V_m$ is completely independent of training data and learning objectives. Next, we show that $L_{RR}$ facilitates learning dominant features from a given data set. For this, we consider an arbitrary basis for spanning the space of inputs $D_m$ for layer $m$. Let $B_m: \langle w_m^1, \dots, w_m^q \rangle$ denote a set of $q$ orthonormal basis vectors obtained via a Gram-Schmidt process, with $t > q > r$, and $\mathbf{D}_m$ denoting the matrix of the vectors in $B_m$. As we show in more detail in the appendix, our constraint from equation 4 requires the eigenvectors of $M_m^\top M_m$ to be the $w_m^i$, with $V_m$ containing $r$ orthogonal vectors $(v_m^1, v_m^2, \dots, v_m^r)$ from $D_m$ and $(s_m^{in} - r)$ vectors from the null space of $M$. We are especially interested in how $M_m$ changes w.r.t. the input in terms of $\mathbf{D}_m$, i.e., we express $L_{RR}$ in terms of $\mathbf{D}_m$. By construction, each input $d_m^i$ can be represented as a linear combination via a vector of coefficients $c_m^i$ that multiplies $\mathbf{D}_m$, so that $d_m^i = \mathbf{D}_m c_m^i$. Since $M_m d_m = U_m \Sigma_m V_m^\top d_m$, the loss $L_{RR}$ of layer $m$ can be rewritten as

$$L_{RR_m} = (M_m^\top M_m d_m - d_m)^2 = (V_m \Sigma_m^\top \Sigma_m V_m^\top d_m - d_m)^2 = (V_m \Sigma_m^\top \Sigma_m V_m^\top \mathbf{D}_m c_m - \mathbf{D}_m c_m)^2, \quad (6)$$

where we can assume that the coefficient vector $c_m$ is accumulated over the training data set of size $t$ via $c_m = \sum_{i=1}^{t} c_m^i$, since eventually every single datum in $D_m$ will contribute to $L_{RR_m}$. The central component of equation 6 is $V_m^\top \mathbf{D}_m$. For a successful minimization, $V_m$ needs to retain those $w_m^i$ with the largest $c_m$ coefficients. As $V_m$ is typically severely limited in its representational capabilities by the number of adjustable weights in a network, it needs to focus on the most important eigenvectors in terms of $c_m$ in order to establish a small distance to $\mathbf{D}_m c_m$. Thus, features that appear multiple times in the input data, with a correspondingly large factor in $c_m$, will contribute more strongly to minimizing $L_{RR_m}$. To summarize, $V_m$ is driven towards containing $r$ orthogonal vectors $w_m^i$ that represent the most frequent features of the input data, i.e., the dominant features. Additionally, due to the column vectors of $V_m$ being mutually orthogonal, $M_m$ is encouraged to extract different features from the input. By virtue of being distinct and representative for the data set, these features have the potential to be useful for new inference tasks. The feature vectors embedded in $M_m$ can be extracted from the network weights in practical settings, as we will demonstrate below.

Realization in Neural Networks. Calculating $M_m^\top M_m$ is usually very expensive due to the dimensionality of $M_m$. Instead of building it explicitly, we constrain intermediate results to realize equation 3 during training. A regular training typically starts with a chosen network structure and trains the model weights for a given task via a suitable loss function.
Our approach fully retains this setup and adds a second pass that reverses the initial structure while reusing all weights and biases. E.g., for a typical fully connected layer in the forward pass with $d_{m+1} = M_m d_m + b_m$, the reverse pass operation is given by $d'_m = M_m^\top (d_{m+1} - b_m)$, where $d'_m$ denotes the reconstructed input.

[Figure residue omitted: mutual information planes, with panels (a) Mutual Information Plane, How to Read, (b) Mutual Information for Task A, (c) After fine-tuning for A, (d) After fine-tuning for B; axes $I(X;\mathcal{D})$ and $I(\mathcal{D};Y)$; layers of RR models exhibit strong mutual information with both in- and output. Accompanying LPIPS comparisons for Std, Ort, and RR models.]

Our goal with the reverse pass is to transpose all operations of the forward pass to obtain identical intermediate activations between the layers with matching dimensionality. We can then constrain the intermediate results of each layer of the forward pass to match the results of the backward pass, as illustrated in figure 2. Unlike greedy layer-wise autoencoder pretraining, which trains each layer separately and only constrains $d_1$ and $d'_1$, we jointly train all layers and constrain all intermediate results. Due to the symmetric structure of the two passes, we can use a simple $L^2$ difference to drive the network towards aligning the results:

$$L_{RR} = \sum_{m=1}^{n} \lambda_m \left\| d_m - d'_m \right\|_F^2. \quad (7)$$

Here $d_m$ denotes the input of layer $m$ in the forward pass and $d'_m$ the output of layer $m$ in the reverse pass. $\lambda_m$ denotes a scaling factor for the loss of layer $m$, which, however, is typically constant across all layers in our tests. Note that with our notation, $d_1$ and $d'_1$ refer to the input data and the reconstructed input, respectively. Next, we show how this setup realizes the regularization from equation 3. For clarity, we use a fully connected layer with bias. In a neural network with $n$ hidden layers, the forward process for a layer $m$ is given by $d_{m+1} = M_m d_m + b_m$, with $d_1$ and $d_{n+1}$ denoting in- and output, respectively.
For our pretraining, we build a reverse pass network with transposed operations, starting from the final output where $d'_{n+1} = d_{n+1}$, and computing the intermediate results

$$d'_m = M_m^\top (d'_{m+1} - b_m), \quad (8)$$

which yields $\| d_m - d'_m \|_F^2 = \| M_m^\top M_m d_m - d_m \|_F^2$. When this difference is minimized via equation 7, we obtain activated intermediate content during the reverse pass that reconstructs the values computed in the forward pass, i.e., $d'_{m+1} = d_{m+1}$ holds. As in equation 8 the reverse pass activation $d'_m$ depends on $d'_{m+1}$, this formulation yields a full reverse pass from output to input, which we use for most training runs below. In this case,

$$d'_m = M_m^\top (d'_{m+1} - b_m) = M_m^\top (d_{m+1} - b_m) = M_m^\top M_m d_m, \quad (9)$$

which is consistent with equation 3 and satisfies the original constraint $M_m^\top M_m d_m - d_m = 0$. This version is preferable if a unique path from output to input exists. For architectures where the path is not unique, e.g., in the presence of additive residual connections, we use a local formulation

$$d'_m = M_m^\top (d_{m+1} - b_m), \quad (10)$$

which employs $d_{m+1}$ for jointly constraining all intermediate activations in the reverse pass. Up to now, the discussion has focused on simplified neural networks without activation functions or extensions such as batch normalization (BN). While we leave incorporating such extensions for future work, our experiments consistently show that the inherent properties of our pretraining remain valid: even with activations and BN, our approach successfully extracts dominant structures and yields improved generalization. In the appendix, we give details on how to ensure that the latent space content for forward and reverse pass is aligned such that differences can be minimized. To summarize, we realize the loss formulation of equation 7 to minimize $\sum_{m=1}^{n} ((M_m^\top M_m - I) d_m)^2$ without explicitly having to construct $M_m^\top M_m$. Following the notation above, we will refer to networks trained with the added reverse structure and the additional loss terms as RR variants. We consider two variants for the reverse pass: a local pretraining via equation 10 using the datum $d_{m+1}$ of a given layer, and a full version via equation 8 which uses $d'_{m+1}$ incoming from the next layer during the reverse pass.

Embedding Singular Values. Below, Std denotes a regular training run (shown in orange in the graphs below), while RR denotes our models (in green). Pre and Ort denote regular autoencoder pretraining and orthogonality, respectively, and a subscript denotes the task variant the model was trained for, e.g., Std$_T$ for task T. While we typically use all layers of a network in the constraints, a reduced variant that we compare to below applies the constraint only to the input data, i.e., $m = 1$. A network trained with this variant, denoted by RR$_{1A}$, is effectively trained only to reconstruct the input; it contains no constraints for the inner activations and layers of the network. For the Ort models, we use the Spectral Restricted Isometry Property algorithm (Bansal et al., 2018). We verify that the column vectors of $V_m$ of models from RR training contain the dominant features of the input with the help of a classification test, employing a single fully connected layer, i.e., $d_2 = M_1 d_1$, with batch normalization and activation. To quantify this similarity, we compute an LPIPS distance (Zhang et al., 2018b) between $v_m^i$ and the training data (lower values being better).
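The PyTorch sketch below illustrates one way to realize the full reverse pass of equations 8-10 for plain fully connected layers, reusing each layer's weights and bias and accumulating the alignment loss of equation 7. Layer sizes, the constant lambda, and all names are placeholder assumptions; activations and batch normalization are omitted for clarity, so this is a sketch under simplifying assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class RRNet(nn.Module):
    """Forward pass plus a weight-sharing reverse pass (full variant, eq. 8)."""

    def __init__(self, sizes=(784, 256, 64, 10)):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Linear(s_in, s_out) for s_in, s_out in zip(sizes[:-1], sizes[1:]))

    def forward(self, x, lam=1.0):
        d = [x]  # d[m]: input of layer m in the forward pass
        for layer in self.layers:
            d.append(layer(d[-1]))
        # Reverse pass from the output, with transposed weights: M^T (d' - b).
        d_rev, loss_rr = d[-1], 0.0
        for m, layer in reversed(list(enumerate(self.layers))):
            d_rev = (d_rev - layer.bias) @ layer.weight
            loss_rr = loss_rr + lam * ((d_rev - d[m]) ** 2).sum(dim=1).mean()
        return d[-1], loss_rr

# The RR term is simply added to the task loss:
# logits, loss_rr = model(x); loss = task_loss(logits, y) + loss_rr
```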
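To inspect the learned dominant features as in the LPIPS test just described, the short sketch below (hypothetical helper, not the authors' code) extracts the leading right singular vectors of a trained layer's weight matrix; under the RR loss these are the vectors expected to align with dominant input features, and for image data they can be reshaped and viewed directly.

```python
import torch

def dominant_features(M, top_k=2):
    """Top-k right singular vectors of a weight matrix M of shape (s_out, s_in).

    Under the RR loss, these vectors are driven towards the most frequent
    (dominant) features of the layer's input data.
    """
    U, S, Vh = torch.linalg.svd(M, full_matrices=False)
    return Vh[:top_k]  # each row lives in the input space of the layer

# e.g., for a single fully connected layer trained on 28x28 inputs:
# feats = dominant_features(model.fc1.weight).reshape(-1, 28, 28)
```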
[Figure/table residue omitted: LPIPS comparisons of extracted singular vectors for the Std, Ort, Pre, and RR models on the linear-model test.]

We employ a training data set constructed from two dominant classes (a peak in the top left and bottom right quadrant, respectively), augmented with noise in the form of random scribbles. Based on the analysis above, we expect the RR training to extract the two dominant peaks during training. The LPIPS measurements confirm our SVD argumentation above, with average scores of 0.217±0.022 for RR, 0.319±0.114 for Pre, 0.495±0.006 for Ort, and 0.500±0.002 for Std; i.e., the RR model fares significantly better than the others. At the same time, the peaks are clearly visible for RR models (an example is shown in figure 3(b)), while the other models fail to extract structures that resemble the input. Thus, by training with the full network and the original training objective, our pretraining yields structures that are interpretable and can be inspected by humans. The results above experimentally confirm our formulation of the RR loss and its ability to extract dominant and generalizing structures from the training data. Next, we will focus on quantified metrics and turn to measurements in terms of mutual information to illustrate the behavior of our pretraining for deeper networks.
This paper proposes to use orthogonal weight constraints for autoencoders. The authors demonstrate that under orthogonal (hence invertible) weights, more features can be extracted. The theory is developed for the linear case, while the authors claim it applies to more complicated scenarios such as higher dimensions and nonlinearity. The experiments demonstrate the performance of the proposed model on classification and generative tasks. Several baselines are compared.
GENERATIVE MODEL-ENHANCED HUMAN MOTION PREDICTION
1 INTRODUCTION.

Human motion is naturally intelligible as a time-varying graph of connected joints constrained by locomotor anatomy and physiology. Its prediction allows the anticipation of actions, with applications across healthcare (Geertsema et al., 2018; Kakar et al., 2005), physical rehabilitation and training (Chang et al., 2012; Webster & Celik, 2014), robotics (Koppula & Saxena, 2013b;a; Gui et al., 2018b), navigation (Paden et al., 2016; Alahi et al., 2016; Bhattacharyya et al., 2018; Wang et al., 2019), manufacture (Švec et al., 2014), entertainment (Shirai et al., 2007; Rofougaran et al., 2018; Lau & Chan, 2008), and security (Kim & Paik, 2010; Ma et al., 2018). The favoured approach to predicting movements over time has been purely inductive, relying on the history of a specific class of movement to predict its future. For example, state space models (Koller & Friedman, 2009) enjoyed early success for simple, common, or cyclic motions (Taylor et al., 2007; Sutskever et al., 2009; Lehrmann et al., 2014). The range, diversity, and complexity of human motion has encouraged a shift to more expressive, deep neural network architectures (Fragkiadaki et al., 2015; Butepage et al., 2017; Martinez et al., 2017; Li et al., 2018; Aksan et al., 2019; Mao et al., 2019; Li et al., 2020b; Cai et al., 2020), but still within a simple inductive framework. This approach would be adequate were actions both sharply distinct and highly stereotyped. But their complex, compositional nature means that within one category of action the kinematics may vary substantially, while between two categories they may barely differ. Moreover, few real-world tasks restrict the plausible repertoire to a small number of classes (distinct or otherwise) that could be explicitly learnt. Rather, any action may be drawn from a great diversity of possibilities, both kinematic and teleological, that shape the characteristics of the underlying movements. This has two crucial implications. First, any modelling approach that lacks awareness of the full space of motion possibilities will be vulnerable to poor generalisation and brittle performance in the face of kinematic anomalies. Second, the very notion of In-Distribution (ID) testing becomes moot, for the relations between different actions and their kinematic signatures are plausibly determinable only across the entire domain of action. A test here arguably needs to be Out-of-Distribution (OoD) if it is to be considered a robust test at all. These considerations are amplified by the nature of real-world applications of kinematic modelling, such as anticipating arbitrary deviations from expected motor behaviour early enough for an automatic intervention to mitigate them. Most urgent in the domain of autonomous driving (Bhattacharyya et al., 2018; Wang et al., 2019), such safety concerns are of the highest importance, and are best addressed within the fundamental modelling framework. Indeed, Amodei et al. (2016) cite the ability to recognize our own ignorance as a safety mechanism that must be a core component of safe AI. Nonetheless, to our knowledge, current predictive models of human kinematics neither quantify OoD performance nor are designed with it in mind.
There is therefore a need for two frameworks, applicable across the domain of action modelling: one for hardening a predictive model to anomalous cases, and another for quantifying OoD performance on established benchmark datasets. General frameworks are here desirable in preference to new models, for the field is evolving so rapidly that greater impact can be achieved by introducing mechanisms applicable to a breadth of candidate architectures, even if they are demonstrated in only a subset. Our approach here is founded on combining a latent variable generative model with a standard predictive model, illustrated with the current state-of-the-art discriminative architecture (Mao et al., 2019; Wei et al., 2020), a strategy that has produced state-of-the-art results in the medical imaging domain (Myronenko, 2018). Our aim is to achieve robust performance within a realistic, low-volume, high-heterogeneity data regime by providing a general mechanism for enhancing a discriminative architecture with a generative model. In short, our contributions to the problem of achieving robustness to distributional shift in human motion prediction are as follows:

1. We provide a framework to benchmark OoD performance on the most widely used open-source motion capture datasets, Human3.6M (Ionescu et al., 2013) and CMU-Mocap (http://mocap.cs.cmu.edu/), and evaluate state-of-the-art models on it.
2. We present a framework for hardening deep feed-forward models to OoD samples. We show that the hardened models are fast to train and exhibit substantially improved OoD performance with minimal impact on ID performance.

We begin section 2 with a brief review of human motion prediction with deep neural networks, and of OoD generalisation using generative models. In section 3, we define a framework for benchmarking OoD performance using open-source multi-action datasets. We introduce in section 4 the discriminative models that we harden using a generative branch to achieve a state-of-the-art (SOTA) OoD benchmark. We then turn in section 5 to the architecture of the generative model and the overall objective function. Section 6 presents our experiments and results. We conclude in section 7 with a summary of our results, current limitations and caveats, and future directions for developing robust and reliable OoD performance and a quantifiable awareness of unfamiliar behaviour.

2 RELATED WORK.

Deep-network based human motion prediction. Historically, sequence-to-sequence prediction using Recurrent Neural Networks (RNNs) has been the de facto standard for human motion prediction (Fragkiadaki et al., 2015; Jain et al., 2016; Martinez et al., 2017; Pavllo et al., 2018; Gui et al., 2018a; Guo & Choi, 2019; Gopalakrishnan et al., 2019; Li et al., 2020b). Currently, the SOTA is dominated by feed-forward models (Butepage et al., 2017; Li et al., 2018; Mao et al., 2019; Wei et al., 2020), which are inherently faster and easier to train than RNNs. The jury is still out, however, on the optimal way to handle temporality for human motion prediction. Meanwhile, recent trends have overwhelmingly shown that graph-based approaches are an effective means to encode the spatial dependencies between joints (Mao et al., 2019; Wei et al., 2020) or sets of joints (Li et al., 2020b). In this study, we consider the SOTA models that combine graph-based approaches with a feed-forward mechanism, as presented by Mao et al. (2019), and the subsequent extension which leverages motion attention (Wei et al., 2020). We show that these may be augmented to improve robustness to OoD samples.

Generative models for Out-of-Distribution prediction and detection. Despite the power of deep neural networks for prediction in complex domains (LeCun et al., 2015), they face several challenges that limit their suitability for safety-critical applications. Amodei et al. (2016) list robustness to distributional shift as one of the five major challenges to AI safety. Deep generative models have been used extensively for the detection of OoD inputs and have been shown to generalise well in such scenarios (Hendrycks & Gimpel, 2016; Liang et al., 2017; Hendrycks et al., 2018). While recent work has shown some failures in simple OoD detection using density estimates from deep generative models (Nalisnick et al., 2018; Daxberger & Hernández-Lobato, 2019), they remain a prime candidate for anomaly detection (Kendall & Gal, 2017; Grathwohl et al., 2019; Daxberger & Hernández-Lobato, 2019). Myronenko (2018) uses a Variational Autoencoder (VAE) (Kingma & Welling, 2013) to regularise an encoder-decoder architecture with the specific aim of better generalisation. By simultaneously using the encoder as the recognition model of the VAE, the model is encouraged to base its segmentations on a complete picture of the data, rather than on a reductive representation that is more likely to be fitted to the training data. Furthermore, the original loss and the VAE's loss are combined as a weighted sum such that the discriminator's objective still dominates. Further work may also reveal useful interpretability of behaviour (via visualisation of the latent space, as in Bourached & Nachev (2019)), generation of novel motion (Motegi et al., 2018), or reconstruction of missing joints, as in Chen et al. (2015).

3 QUANTIFYING OUT-OF-DISTRIBUTION PERFORMANCE OF HUMAN MOTION PREDICTORS.

Even a very compact representation of the human body, such as OpenPose's 17-joint parameterisation (Cao et al., 2018), explodes to unmanageable complexity when a temporal dimension is introduced at the scale and granularity necessary to distinguish between different kinds of action: typically many seconds, sampled at hundredths of a second. Moreover, though there are anatomical and physiological constraints on the space of licit joint configurations and their trajectories, the repertoire of possibility remains vast, and the kinematic demarcations of teleologically different actions remain indistinct. Thus, no practically obtainable dataset may realistically represent the possible distance between instances. To simulate OoD data, we first need ID data that can be varied in its quantity and heterogeneity, closely replicating cases where a particular kinematic morphology may be rare, and therefore undersampled, and cases where kinematic morphologies are both highly variable within a defined class and similar across classes. Such replication needs to accentuate the challenging aspects of each scenario. We therefore propose to evaluate OoD performance where only a single action, drawn from a single action distribution, is available for training and hyperparameter search, and testing is carried out on the remaining classes.
In appendix A, to show that the action categories we have chosen can be distinguished at the time scales on which our trajectories are encoded, we train a simple classifier and show that it separates the selected ID action from the others with high accuracy (100% precision and recall for the CMU dataset). Performance over the remaining set of actions may thus be considered OoD.

4 BACKGROUND.

Here we describe the current SOTA model proposed by Mao et al. (2019) (GCN). We then describe the extension by Wei et al. (2020) (attention-GCN), which precedes the GCN prediction model with motion attention.

4.1 PROBLEM FORMULATION.

We are given a motion sequence $X_{1:N} = (x_1, x_2, x_3, \cdots, x_N)$ consisting of $N$ consecutive human poses, where $x_i \in \mathbb{R}^K$, with $K$ the number of parameters describing each pose. The goal is to predict the poses $X_{N+1:N+T}$ for the subsequent $T$ time steps.
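As a minimal illustration of this formulation (hypothetical names, NumPy for brevity), a training pair is just a split of a pose sequence into the $N$ observed frames and the $T$ frames to be predicted:

```python
import numpy as np

def make_prediction_pair(sequence, N=10, T=25):
    """Split a pose sequence of shape (frames, K) into seed X_{1:N}
    and prediction target X_{N+1:N+T}."""
    assert sequence.shape[0] >= N + T, "sequence too short"
    return sequence[:N], sequence[N:N + T]

# e.g., with a random stand-in for a K=66-parameter pose sequence:
# seed, target = make_prediction_pair(np.random.randn(100, 66))
```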
This paper raises and studies concerns about the generalization of 3D human motion prediction approaches across unseen motion categories. The authors address this problem by augmenting existing architectures with a VAE framework. More precisely, an encoder network that is responsible for summarizing the seed sequence is shared by two decoders for the reconstruction of the seed motion and prediction of the future motion. Hence, the encoder is trained by using both the ELBO of a VAE and the objective of the original motion prediction task.
Learning Latent Topology for Graph Matching
1 INTRODUCTION.

Being a long-standing NP-hard problem (Loiola et al., 2007), graph matching (GM) has received persistent attention from the machine learning and optimization communities for many years. Concretely, for two graphs with $n$ nodes each, graph matching seeks to solve

$$\max_z \; z^\top M z \quad \text{s.t.} \quad Z \in \{0,1\}^{n \times n}, \; Hz = \mathbf{1}, \quad (1)$$

where the affinity matrix $M \in \mathbb{R}_+^{n^2 \times n^2}$ encodes node (diagonal elements) and edge (off-diagonal) affinities/similarities, and $z$ is the column-wise vectorization of the permutation matrix $Z$. $H$ is a selection matrix ensuring that each row and column of $Z$ sums to 1, and $\mathbf{1}$ is a column vector filled with 1. (Without loss of generality, we discuss graph matching under the setting of an equal number of nodes without outliers; the unequal case can be readily handled by introducing extra constraints or dummy nodes. Bipartite matching and graph isomorphism are subsets of this quadratic formulation (Loiola et al., 2007).) Eq. (1) is the so-called quadratic assignment problem (QAP) (Cho et al., 2010). Maximizing Eq. (1) amounts to maximizing the sum of the similarities induced by the matching vector $z$. While Eq. (1) does not encode the topology of graphs, Zhou & Torre (2016) further propose to factorize $M$ to explicitly incorporate a topology matrix, where a connectivity matrix $A \in \{0,1\}^{n \times n}$ indicates the topology of a single graph ($A_{ij} = 1$ if there exists an edge between nodes $i$ and $j$; $A_{ij} = 0$ otherwise). To ease the computation, Eq. (1) is typically relaxed by letting $z \in [0,1]^{n^2}$ and keeping the other parts of Eq. (1) intact. Traditional solvers for this relaxed problem generally fall into the categories of iterative update (Cho et al., 2010; Jiang et al., 2017) or numerical continuation (Zhou & Torre, 2016; Yu et al., 2018), where the solvers are developed under two key assumptions: 1) the affinity $M$ is pre-computed with some non-negative metric, e.g., a Gaussian kernel, L2 distance, or Manhattan distance; 2) the graph topology is pre-defined as input, either in dense (Schellewald & Schnörr, 2005) or sparse (Zhou & Torre, 2016) fashion. There have been several successful attempts at relaxing the first assumption by leveraging the power of deep networks to learn more effective graph representations for GM (Wang et al., 2019a; Yu et al., 2020; Fey et al., 2020). However, to our best knowledge, there is little previous work questioning and addressing the second assumption in the context of learning-based graph matching. (There are some loosely related works (Du et al., 2019; 2020) on network alignment and link prediction without learning, which will be discussed in detail in the related works.) For example, the existing standard pipeline for keypoint matching in computer vision constructs the initial topology by Delaunay triangulation or k-nearest neighbors. This topology is then frozen throughout the subsequent learning and matching procedures. In this sense, the construction of graph topology is peeled from the matching task as a pre-processing stage. More examples can be found beyond the vision community, such as in social network alignment (Zhang & Tong, 2016; Heimann et al., 2018; Xiong & Yan, 2020), which assumes a fixed network structure for individual node matching across two networks. We argue that freezing the graph topology for matching can hinder the capacity of graph matching solvers.
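For concreteness, a tiny NumPy sketch of evaluating the objective of Eq. (1) for a candidate permutation is given below; the names and the brute-force check are illustrative only, since real solvers never enumerate permutations.

```python
import numpy as np
from itertools import permutations

def qap_score(M, Z):
    """Graph-matching objective z^T M z of Eq. (1).

    M: (n*n, n*n) non-negative affinity matrix.
    Z: (n, n) permutation matrix; z is its column-wise vectorization.
    """
    z = Z.flatten(order="F")
    return z @ M @ z

def brute_force_match(M, n):
    """Exhaustive maximization, feasible only for toy n."""
    perms = [np.eye(n)[list(p)] for p in permutations(range(n))]
    return max(perms, key=lambda Z: qap_score(M, Z))
```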
For a pre-defined graph topology, the linked nodes sometimes result in less meaningful interactions, especially under the message-passing mechanism of graph neural networks (Kipf & Welling, 2017). We give a schematic demonstration in Fig. 1. Though some earlier attempts (Cho & Lee, 2012; Cho et al., 2013) seek to adjust the graph topology under a traditional, non-deep-learning setting, such procedures can not be readily integrated into end-to-end deep learning frameworks due to their non-differentiable nature. Building upon the hypothesis that there exists some latent topology better suited for GM than the heuristically created one, our aim is to learn it (or its distribution). Indeed, jointly solving matching and graph topology learning can be intimidating due to the combinatorial nature of the problem, which calls for more advanced approaches. In this paper, we propose an end-to-end framework to jointly learn the latent graph topology and perform GM, termed deep latent graph matching (DLGM). We leverage the power of graph generative models to automatically produce graph topology from given features and their geometric relations, under a specific locality prior. Different from generative learning on singleton graphs (Kipf & Welling, 2016; Bojchevski et al., 2018), our graph generative learning is performed in a pairwise fashion, leading to a novel matching-guided generative paradigm. The source code will be made publicly available.

Contributions: 1) We explore a new direction for more flexible GM by actively learning latent topology, in contrast to previous works using fixed topology as input; 2) under this setting, we propose a deterministic optimization approach to learn graph topology for matching; 3) we further present a generative way to produce latent topology under a probabilistic interpretation via Expectation-Maximization; this framework can also adapt to other problems where graph topology is the latent structure to infer; 4) our method achieves state-of-the-art performance on public benchmarks.

2 RELATED WORKS.

In this section, we first discuss existing works on joint graph topology updating and matching, whose motivation is somewhat similar to ours while the technique is largely different. Then we discuss relevant works on learning graph matching and generative graph models from the technical perspective.

Topology updating and matching. There are a few works on joint graph topology updating and matching in the context of network alignment. Specifically, given two initial networks for matching, Du et al. (2019) show how to alternately perform link prediction within each network and node matching across networks, based on the observation that these two tasks can benefit each other. In their extension (Du et al., 2020), a skip-gram embedding framework is further established under the same problem setting. In fact, these works involve a random-walk based node embedding update and a classification-based link prediction module, and the whole algorithm runs in a one-shot optimization fashion. There is neither an explicit training dataset nor a trained matching model (except for the link classifier), which bears less flavor of machine learning. In contrast, our method involves training an explicit model for topology recovery and matching solving.
Specifically, our deterministic technique (see Sec. 3.4.1) solves graph topology and matching in one shot, while the proposed generative method alternately estimates the topology and the matching (see Sec. 3.4.2). Our approach allows us to fully leverage multiple training samples, in many applications like computer vision, to boost performance on the test set. Moreover, the combinatorial nature of the matching problem is not addressed in (Du et al., 2019; 2020), which adopt a greedy selection strategy instead, while we develop a principled combinatorial learning approach to this challenge. Also, their methods rely on a considerable amount of seed matchings, whereas this paper directly learns the latent topology from scratch, which is more challenging and seldom studied.

Learning of graph matching. Early non-deep-learning methods seek to learn an effective metric (e.g., a weighted Euclidean distance) for node and edge features, or an affinity kernel (e.g., a Gaussian kernel), in a parametric fashion (Caetano et al., 2009; Cho et al., 2013). Recent deep graph matching methods have shown how to extract more dedicated feature representations. The work of Zanfir & Sminchisescu (2018) adopts VGG16 (Simonyan & Zisserman, 2014) as the backbone for feature extraction on images. Other efforts have been made to develop more advanced pipelines, involving graph embedding (Wang et al., 2019a; Yu et al., 2020; Fey et al., 2020) and geometric learning (Zhang & Lee, 2019; Fey et al., 2020). Rolínek et al. (2020) study the incorporation of traditional non-differentiable combinatorial solvers by introducing a differentiable blackbox GM solver (Pogancic et al., 2020). Recent works tackling combinatorial problems with deep learning (Huang et al., 2019; Kool & Welling, 2018) have also inspired combinatorial deep solvers for GM problems formulated both as Koopmans-Beckmann's QAP (Nowak et al., 2018; Wang et al., 2019a) and as Lawler's QAP (Wang et al., 2019b). Specifically, Wang et al. (2019a) devise a permutation loss for supervised learning, which is improved in Yu et al. (2020) via Hungarian attention. Wang et al. (2019b) solve the most general Lawler's QAP with a graph embedding technique.

Generative graph model. Early generative models for graphs date back to (Erdos & Renyi, 1959), in which edges are generated with fixed probability. Recently, Kipf & Welling (2016) presented a graph generative model by re-parameterizing the edge probability from Gaussian noise. Johnson (2017) proposes to generate a graph in an incremental fashion, producing a portion of the graph in each iteration. Gómez-Bombarelli et al. (2018) utilize a recurrent neural network to generate a graph from a sequence of molecule representations. Adversarial graph generation is considered in (Pan et al., 2018; Wang et al., 2018; Bojchevski et al., 2018); specifically, Wang et al. (2018) and Bojchevski et al. (2018) seek to unify graph generative models and generative adversarial networks. In parallel, reinforcement learning has been adopted to generate discrete graphs (De Cao & Kipf, 2018).

3 LEARNING LATENT TOPOLOGY FOR GM.

In this section, we describe the details of the proposed framework, with two specific algorithms derived from deterministic and generative perspectives, respectively. Both algorithms are motivated by the hypothesis that there exists some latent topology more suitable for matching than a fixed one.
Note that the proposed deterministic algorithm performs a standard forward-backward pass to jointly learn the topology and the matching, while our generative algorithm consists of an alternating optimization procedure between estimating the latent topology and learning the matching, under an Expectation-Maximization (EM) interpretation. In general, the generative algorithm assumes that a latent topology is sampled from a latent distribution under which the expected matching accuracy is maximized. We therefore aim to learn a topology generator realizing such a distribution. We reformulate GM in a Bayesian fashion for consistent discussion in Sec. 3.1, detail the deterministic/generative latent module in Sec. 3.2, and discuss the loss functions from a probabilistic perspective in Sec. 3.3. We finally elaborate on the holistic framework and the optimization procedure for both algorithms (deterministic and generative) in Sec. 3.4.
The authors address the problem of discrete keypoint matching. For an input pair of images, the task is to match the unannotated (but given as part of the input) keypoints. The main contribution is identifying the bottleneck of the current SOTA algorithm: a fixed connectivity construction given by Delaunay triangulation. By replacing this with an end-to-end learnable algorithm, they outperform SOTA by a decent margin.
Intervention Generative Adversarial Nets
1 INTRODUCTION.

As one of the most important advances in generative modeling in recent years, Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) have been attracting great attention in the machine learning community. GANs aim to train a generator network that transforms simple vectors of noise into "realistic" samples from the data distribution. In the basic training process of GANs, a discriminator and a target generator are trained in an adversarial manner: the discriminator tries to distinguish the generated fake samples from the real ones, and the generator tries to fool the discriminator into believing the generated samples to be real. Though successful, there are two major challenges in training GANs: the instability of the training process and the mode collapse problem. To deal with these problems, one class of approaches focuses on designing more informative objective functions (Salimans et al., 2016; Mao et al., 2016; Kodali et al., 2018; Arjovsky & Bottou; Arjovsky et al., 2017; Gulrajani et al., 2017; Zhou et al., 2019). For example, Mao et al. (2016) proposed Least Squares GAN (LSGAN), which uses the least squares loss to penalize outlier points more harshly. Arjovsky & Bottou discussed the role of the Jensen-Shannon divergence in training GANs and proposed WGAN (Arjovsky et al., 2017) and WGAN-GP (Gulrajani et al., 2017), which use the more informative Wasserstein distance instead. Other approaches enforce proper constraints on latent space representations to better capture the data distribution (Makhzani et al., 2015; Larsen et al., 2015; Che et al., 2016; Tran et al., 2018). A representative work is the Adversarial Autoencoders (AAE) (Makhzani et al., 2015), which uses the discriminator to distinguish latent representations generated by the encoder from Gaussian noise. Larsen et al. (2015) employed the image representation in the discriminator as the reconstruction basis of a VAE. Their method turns a pixel-wise loss into a feature-wise one, which can capture the real distribution more simply when some form of invariance is induced. Different from VAE-GAN, Che et al. (2016) regarded the encoder as an auxiliary network, which can encourage GANs to pay more attention to missing modes, and derived an objective function similar to VAE-GAN's. A more detailed discussion of related works can be found in Appendix C.

In this paper, we propose a novel technique for GANs that improves both training stability and the quality of generated images. The core of our approach is a regularization term based on the latent representations of real images provided by an encoder network. More specifically, we apply auxiliary intervention operations that preserve the standard Gaussian (e.g., the noise distribution) to these latent representations. The perturbed latent representations are then fed into the generator to produce intervened samples. We then introduce a classifier network to identify the right intervention operations that would have led to these intervened samples. The resulting negative cross-entropy loss is added as a regularizer to the objective when training the generator. We call this regularization term the intervention loss and our approach InterVention Generative Adversarial Nets (IVGAN). We show that the intervention loss is equivalent to the JS-divergence among multiple intervened distributions.
Furthermore, these intervened distributions interpolate between the original generative distribution of the GAN and the data distribution, providing useful information for the generator that is previously unavailable in common GAN models (see a thorough analysis on a toy example in Example 1). We show empirically that our model can be trained efficiently by utilizing a parameter sharing strategy between the discriminator and the classifier. Models trained on the MNIST, CIFAR-10, LSUN, and STL-10 datasets successfully generate diverse, visually appealing objects, outperforming state-of-the-art baseline methods such as WGAN-GP and MRGAN in terms of the Fréchet Inception Distance (FID) (proposed in Heusel et al. (2017)). We also perform a series of experiments on the stacked MNIST dataset, and the results show that our proposed method can effectively alleviate the mode collapse problem. Moreover, an ablation study is conducted, which validates the effectiveness of the proposed intervention loss. In summary, our work offers three major contributions: (i) we propose a novel method that improves GAN training as well as generation performance; (ii) we theoretically analyze our proposed model and give insights into how it makes the gradient of the generator more informative and thus stabilizes GAN training; (iii) we evaluate the performance of our method on both standard real-world datasets and the stacked MNIST dataset through carefully designed experiments, showing that our approach is able to stabilize GAN training and improve the quality and diversity of generated samples as well.

2 PRELIMINARIES.

Generative adversarial nets. The basic idea of GANs is to utilize a discriminator to continuously push a generator to map Gaussian noise to samples drawn according to an implicit data distribution. The objective function of the vanilla GAN takes the following form:

$$\min_G \max_D \; \left\{ V(D, G) \triangleq \mathbb{E}_{x \sim p_{data}} \log(D(x)) + \mathbb{E}_{z \sim p_z} \log(1 - D(G(z))) \right\}, \quad (1)$$

where $p_z$ is a prior distribution (e.g., the standard Gaussian). It can easily be seen that when the discriminator reaches its optimum, that is, $D^*(x) = \frac{p_{data}(x)}{p_{data}(x) + p_G(x)}$, the objective is equivalent to the Jensen-Shannon (JS) divergence between the generated distribution $p_G$ and the data distribution $p_{data}$:

$$JS(p_G \,\|\, p_{data}) \triangleq \frac{1}{2}\left\{ KL\!\left(p_G \,\Big\|\, \frac{p_G + p_{data}}{2}\right) + KL\!\left(p_{data} \,\Big\|\, \frac{p_G + p_{data}}{2}\right) \right\}.$$

Minimizing this JS divergence guarantees that the generated distribution converges to the data distribution, given adequate model capacity.

Multi-distribution JS divergence. The JS divergence between two distributions $p_1$ and $p_2$ can be rewritten as

$$JS(p_1 \,\|\, p_2) = H\!\left(\frac{p_1 + p_2}{2}\right) - \frac{1}{2} H(p_1) - \frac{1}{2} H(p_2),$$

where $H(p)$ denotes the entropy of distribution $p$. We observe that the JS divergence can be interpreted as the entropy of the mean of the two distributions minus the mean of the two distributions' entropies. It is therefore immediate to generalize the JS divergence to the setting of multiple distributions. In particular, we define the JS divergence of $p_1, p_2, \dots, p_n$ with respect to weights $\pi_1, \pi_2, \dots, \pi_n$ ($\sum_i \pi_i = 1$ and $\pi_i \ge 0$) as

$$JS_{\pi_1, \dots, \pi_n}(p_1, p_2, \dots, p_n) \triangleq H\!\left(\sum_{i=1}^{n} \pi_i p_i\right) - \sum_{i=1}^{n} \pi_i H(p_i). \quad (2)$$

The two-distribution case described above is actually a special case of this 'multi-JS divergence', with $\pi_1 = \pi_2 = \frac{1}{2}$. When $\pi_i > 0$ for all $i$, it follows immediately from Jensen's inequality that $JS_{\pi_1, \dots, \pi_n}(p_1, p_2, \dots, p_n) = 0$ if and only if $p_1 = p_2 = \cdots = p_n$.
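For discrete distributions, equation 2 can be computed directly; the short NumPy sketch below (illustrative helper names) does so, and reduces to the usual two-distribution JS divergence when n = 2 with uniform weights.

```python
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    return -np.sum(np.where(p > 0, p * np.log(p), 0.0))

def multi_js(ps, weights=None):
    """Multi-distribution JS divergence of eq. (2):
    H(sum_i pi_i p_i) - sum_i pi_i H(p_i)."""
    ps = np.asarray(ps, dtype=float)  # (n, d): rows are distributions
    w = (np.full(len(ps), 1.0 / len(ps)) if weights is None
         else np.asarray(weights, dtype=float))
    return entropy(w @ ps) - w @ np.array([entropy(p) for p in ps])

# multi_js of n identical distributions is 0; it is maximal (log n)
# when the supports of the n distributions are disjoint.
```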
3 METHODOLOGY.

Training GANs has been challenging, especially when the generated distribution and the data distribution are far away from each other. In such cases, the discriminator often struggles to provide useful information for the generator, leading to instability and mode collapse problems. The key idea behind our approach is to construct auxiliary intermediate distributions that interpolate between the generated distribution and the data distribution. To do so, we first introduce an encoder network and combine it with the generator to learn the latent representations of real images within the framework of a standard autoencoder. We then perturb these latent representations with carefully designed intervention operations before feeding them into the generator to create the auxiliary interpolating distributions. A classifier is used to distinguish the intervened samples, which leads to an intervention loss that penalizes the dissimilarity of these intervened distributions. The reconstruction loss and the intervention loss are added as regularization terms to the standard GAN loss for training. We start with some notation and definitions.

Definition 1 (Intervention). Let $O$ be a transformation on the space of $d$-dimensional random vectors and $\mathbb{P}$ a probability distribution whose support is in $\mathbb{R}^d$. We call $O$ a $\mathbb{P}$-intervention if for any $d$-dimensional random vector $X$, $X \sim \mathbb{P} \Rightarrow O(X) \sim \mathbb{P}$.

Since the noise distribution in GAN models is usually taken to be standard Gaussian, we use the standard Gaussian distribution as the default choice of $\mathbb{P}$ and abbreviate a $\mathbb{P}$-intervention as an intervention, unless otherwise stated. One of the simplest groups of interventions is block substitution. Let $Z \in \mathbb{R}^d$ be a random vector, $k \in \mathbb{N}$, and $k \mid d$. We slice $Z$ into $k$ blocks so that every block is in $\mathbb{R}^{d/k}$. A block substitution intervention $O_i$ replaces the $i$-th block of $Z$ with Gaussian noise, $i = 1, \dots, k$. We will use block substitution interventions in the rest of the paper unless otherwise specified. Note that our theoretical analysis as well as the algorithmic framework do not depend on the specific choice of the intervention group.

Notation. We use $E$, $G$, $D$, $f$ to denote the encoder, generator, discriminator, and classifier, respectively. Here and later, $p_{real}$ is the distribution of the real data $X$, and $p_z$ is the prior distribution of the noise $z$ defined on the latent space (usually taken to be Gaussian). Let $O_i$, $i = 1, \dots, k$, denote $k$ different interventions and $p_i$ the distribution of the intervened sample $X_i$ created by $O_i$ (namely $X_i = G(O_i(E(X)))$).

Intervention loss. The intervention loss is the core of our approach. More specifically, given a latent representation $z$ generated by an encoder network $E$, we sample an intervention $O_i$ from a complete group $S = \{O_1, \dots, O_k\}$ and obtain the corresponding intervened latent variable $O_i(z)$ with label $e_i$. These perturbed latent representations are then fed into the generator to produce intervened samples. We then introduce an auxiliary classifier network to identify which intervention operations may have led to these intervened samples. The intervention loss $L_{IV}(G, E)$ is simply the resulting negative cross-entropy loss, and we add it as a regularizer to the objective function when training the generator.
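The block substitution interventions defined above are easy to realize; the following PyTorch sketch (hypothetical helper) replaces one of $k$ equal-sized blocks of a batch of latent codes with fresh Gaussian noise, which leaves the standard Gaussian distribution invariant:

```python
import torch

def block_substitution(z, i, k):
    """Intervention O_i: replace the i-th of k equal blocks of z with noise.

    z: (batch, d) latent codes, with k dividing d. If z ~ N(0, I), the
    output is again N(0, I), so O_i is a valid intervention.
    """
    batch, d = z.shape
    assert d % k == 0, "k must divide the latent dimension"
    b = d // k
    z = z.clone()
    z[:, i * b:(i + 1) * b] = torch.randn(batch, b, device=z.device)
    return z
```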
As we can see, the intervention loss penalizes the dissimilarity of the distributions of images generated from different intervention operations. Moreover, the classifier and the combination of the generator and the encoder play a two-player adversarial game, and we train them in an adversarial manner. In particular, we define $L_{IV}(G, E) = -\min_f V_{class}$, where

$$V_{class} = \mathbb{E}_{i \sim U([k])} \, \mathbb{E}_{x' \sim p_i} \left[ -e_i^\top \log f(x') \right]. \quad (3)$$

Theorem 1 (Optimal Classifier). The optimal solution of the classifier is the conditional probability of the label $y$ given $X'$, where $X'$ is the intervened sample generated by the intervention operation sampled from $S$. And the minimum of the cross-entropy loss is equivalent to the negative of the Jensen-Shannon divergence among $\{p_1, p_2, \dots, p_k\}$. That is,

$$f_i^*(x) = \frac{p_i(x)}{\sum_{j=1}^{k} p_j(x)} \quad \text{and} \quad L_{IV}(G, E) = JS(p_1, p_2, \dots, p_k) + \text{Const}. \quad (4)$$

The proof can be found in Appendix A.1. Clearly, the intervention loss is an approximation of the multi-JS divergence among the intervened distributions $\{p_i : i \in [k]\}$. If the intervention loss reaches its global minimum, we have $p_1 = p_2 = \cdots = p_k$; it reaches its maximum $\log k$ if and only if the supports of these $k$ distributions do not intersect each other. This way, the probability that the 'multi' JS divergence takes a constant value is much smaller, which means the phenomenon of vanishing gradients should be rare in IVGAN. Moreover, as shown in the following example, thanks to these auxiliary intervened distributions, the intervention loss is able to provide more informative gradients for the generator that are not available in other GAN variants.

Example 1 (Square fitting). Let $X_0$ be a random vector with distribution $U(\alpha)$, where $\alpha = [-\frac{1}{2}, \frac{1}{2}] \times [-\frac{1}{2}, \frac{1}{2}]$, and let $X_1 \sim U(\beta)$, where $\beta = [a - \frac{1}{2}, a + \frac{1}{2}] \times [\frac{1}{2}, \frac{3}{2}]$ and $0 \le a \le 1$. Assuming we have a perfect discriminator (or classifier), we compute the vanilla GAN loss (i.e., the JS divergence) and the intervention loss between these two distributions, respectively.

- $JS(X_0 \,\|\, X_1) = \log 2$.
- In order to compute the intervention loss, we need the two intervened distributions evolved from $U(\alpha)$ and $U(\beta)$: $Y_1 \sim U(\gamma_1)$ with $\gamma_1 = [-\frac{1}{2}, \frac{1}{2}] \times [\frac{1}{2}, \frac{3}{2}]$, and $Y_2 \sim U(\gamma_2)$ with $\gamma_2 = [a - \frac{1}{2}, a + \frac{1}{2}] \times [-\frac{1}{2}, \frac{1}{2}]$. The intervention loss is then the multi-JS divergence among these four distributions:

$$L_{IV} = JS(X_0; X_1; Y_1; Y_2) = -\int_{A^c} \frac{1}{4} \log \frac{1}{4} \, d\mu - \int_{A} \frac{1}{2} \log \frac{1}{2} \, d\mu - H(X_0) = \frac{\log 2}{2} \left[ \mu(A^c) + \mu(A) \right] - H(X_0) = \frac{\log 2}{2} \times 2(2 - a) - H(X_0) = -(\log 2)\, a + \text{Const}.$$

Here $A$ is the shaded part in Figure 2 and $A^c = \{\alpha \cup \beta \cup \gamma_1 \cup \gamma_2\} \setminus A$. The most important observation is that the intervention loss is a function of the parameter $a$, while the traditional GAN loss is constant. If we replace the JS with any other $f$-divergence, the metric between $U(\alpha)$ and $U(\beta)$ still remains constant. Hence, in this situation we can not get any information from the standard JS loss for training the generator, but the intervention loss works well.
, n , xj ∼ preal 4 : for number of inner iteration do 5 : wj ← E ( xj ) , j = 1 , ... , n 6 : Sample Gaussian noise 7 : Sample ij ∈ [ k ] , j = 1 , ... , n 8 : x′j ← G ( Oij ( wj ) ) 9 : Update the parameters of D by : 10 : θD ← θD − α2n∇θDLadv ( θD ) 11 : Update the parameters of f by : 12 : θf ← θf + αn∇θf n∑ j=1 log fij ( x ′ j ) 13 : Calculate LAdv and LIV 14 : Update the parameter of G by : 15 : θG ← θG + αn∇θG { L̂Adv + λL̂recon + µL̂IV } 16 : Update the parameter of E by : 17 : θE ← θE + αn∇θE { λL̂recon + µL̂IV } Reconstruction loss In some sense we expect our encoder to be a reverse function of the generator . So it is necessary for the objective function to have a term to push the map composed of the Encoder and the Generator to have the ability to reconstruct the real samples . Not only that , we also hope that the representation can be reconstructed from samples in the pixel space . Formally , the reconstruction loss can be defined by the ` p-norm ( p ≥ 1 ) between the two samples , or in the from of the Wasserstein distance between samples if images are regarded as a histogram . Here we choose to use the ` 1-norm as the reconstruction loss : Lrecon = EX∼preal‖G ( E ( X ) ) −X‖1 +Ei∼U ( [ k ] ) Ex , z∼preal , pz‖E ( G ( Oi ( z ) ) ) −Oi ( z ) ‖1 . ( 5 ) Theorem 2 ( Inverse Distribution ) . Suppose the cumulative distribution function of Oi ( z ) is qi . For any given positive real number , there exist a δ > 0 such that if Lrecon +LIV ≤ δ , then ∀i , j ∈ [ k ] , sup r ‖qi ( r ) − qj ( r ) ‖ ≤ . The proof is in A.2 . Adversarial loss The intervention loss and reconstruction loss can be added as regularization terms to the adversarial loss in many GAN models , e.g. , the binary cross entropy loss in vanilla GAN and the least square loss in LSGAN . In the experiments , we use LSGAN ( Mao et al. , 2016 ) and DCGAN ( Radford et al. , 2015 ) as our base models , and name the resulting IVGAN models IVLSGAN and IVDCGAN respectively . Now that we have introduced the essential components in the objective of IVGAN , we can write the loss function of the entire model : Lmodel = LAdv + λLrecon + µLIV , ( 6 ) where λ and µ are the regularization coefficients for the reconstruction loss and the intervention loss respectively . We summarize the training procedure in Algorithm 1 . A diagram of the full workflow of our framework can be found in Figire 3 .
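To make Eq. (3) concrete, here is a minimal PyTorch-style sketch of how the intervention loss can be computed for a batch (names, shapes, and the per-example loop are our own assumptions, not the authors' code): the classifier f outputs k-way logits over intervention labels and minimizes the cross entropy, while the generator and encoder receive its negative.

```python
import torch
import torch.nn.functional as F

def intervention_losses(E, G, f, x_real, k):
    """Sketch of Eq. (3). Each example gets one random intervention O_i:
    its i-th latent block is replaced with fresh Gaussian noise. The
    classifier f minimizes the cross entropy; G and E receive its negative."""
    n = x_real.size(0)
    w = E(x_real)                                   # (n, d) latent codes
    d = w.size(1)
    block = d // k
    i = torch.randint(0, k, (n,), device=w.device)  # intervention labels e_i
    w_iv = w.clone()
    for j in range(n):                              # apply O_{i_j} per example
        s = int(i[j]) * block
        w_iv[j, s:s + block] = torch.randn(block, device=w.device)
    logits = f(G(w_iv))                             # classify intervened samples
    loss_f = F.cross_entropy(logits, i)             # classifier objective
    return loss_f, -loss_f                          # L_IV term for G and E
```

In an actual training loop, `loss_f` would be minimized in the classifier update (line 12 of Algorithm 1) and `-loss_f` added, weighted by µ, to the generator and encoder objectives (lines 15 and 17).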
The paper proposes a method for stabilizing the training of GAN as well as overcoming the problem of mode collapse by optimizing several auxiliary models. The first step is to learn a latent space using an autoencoder. Then, this latent space is "intervened" by a predefined set of $K$ transformations to generate a set of distributions $p_k$. A classifier is then taught to distinguish between $p_k$. Eventually, the weights of the classifier are shared with those of the discriminator network to produce the desired stabilization/diversification effect. In other words, the authors propose to stabilize GANs by intervening with the discriminator. This is done by sharing its weights with a classifier that trains on a perturbed latent distribution that is somehow related to the original problem via the prior assumption imposed.
SP:879ce870f09e422aced7d008abc42fe5a8db29bc
Uniform Manifold Approximation with Two-phase Optimization
1 INTRODUCTION.

We present a novel dimensionality reduction method, Uniform Manifold Approximation with Two-phase Optimization (UMATO), to obtain a less biased and more robust embedding across diverse initialization methods. One effective way of understanding high-dimensional data in various domains is to reduce its dimensionality and investigate the projection in a lower-dimensional space. The limitation of previous approaches such as t-distributed Stochastic Neighbor Embedding (t-SNE, Maaten & Hinton (2008)) and Uniform Manifold Approximation and Projection (UMAP, McInnes et al. (2018)) is that they are susceptible to different initialization methods, generating considerably different embedding results (Section 5.5). t-SNE adopts the Kullback-Leibler (KL) divergence as its loss function. The fundamental limitation of the KL divergence is that the penalty for points that are distant in the original space being close in the projected space is too small (Appendix B). This results in only the local manifolds being captured, while clusters that are far apart change their relative locations from run to run. Meanwhile, UMAP leverages the cross-entropy loss function, which is known to penalize both points that are distant in the original space being close in the projection space and points that are close in the original space being distant in the projection space (Appendix B). UMAP considers all points in the optimization at once, with diverse sampling techniques (i.e., negative sampling and edge sampling). Although the approximation technique in UMAP's optimization makes the computation much faster, it raises another problem: the clusters in the embedding become dispersed as the number of epochs increases (Appendix K), which can lead to misinterpretation. UMAP tries to alleviate this by using a fixed number of epochs (e.g., 200), which is ad hoc, and by applying a learning-rate decay. However, the optimal number of epochs and the decay schedule for each initialization method have to be found in practice. To solve the aforementioned problems, we avoid using approximation during the optimization process, which would normally result in greatly increased computational cost. Instead, we first run the optimization only with a small number of points that represent the data (i.e., hub points). Finding the optimal projection for a small number of points using a cross-entropy function is relatively easy and robust, making the additional techniques employed in UMAP unnecessary. Furthermore, it is less sensitive to the initialization method used (Section 5.5). After capturing the overall skeleton of the high-dimensional structure, we gradually append the remaining points in subsequent phases. Although the same approximation technique as in UMAP is used for these points, since we have already embedded the hub points and use them as anchors, the projections become more robust and unbiased. The gradual addition of points can in fact be done in a single phase; we found that additional phases do not result in meaningful performance improvements but only in increased computation time (Section 4.5). Therefore, we use only two phases in UMATO: global optimization to capture the global structures (i.e., the pairwise distances in a high-dimensional space) and local optimization to retain the local structures (i.e., the relationships between neighboring points in a high-dimensional space) of the data.
We compared UMATO with popular dimensionality reduction techniques including PCA, Isomap (Tenenbaum et al. (2000)), t-SNE, UMAP, topological autoencoders (Moor et al. (2020)), and At-SNE (Fu et al. (2019)). We used one synthetic (101-dimensional Spheres) and three real-world (MNIST, Fashion-MNIST, and Kuzushiji-MNIST) datasets and analyzed the projection results with several quality metrics. In conclusion, UMATO demonstrated better performance than the baseline techniques on all datasets in terms of KL_σ with different σ values, meaning that it reasonably preserves the density of data over diverse length scales. Finally, we present the 2D projections of each dataset, including the replication of an experiment using the synthetic Spheres dataset introduced by Moor et al. (2020), where data points locally constitute multiple small balls globally contained in a larger sphere. Here, we demonstrate that UMATO can better preserve both structures compared to the baseline algorithms (Figure 3).

2 RELATED WORK.

Dimensionality reduction. Most previous dimensionality reduction algorithms focus on preserving the data's local structures. For example, Maaten & Hinton (2008) proposed t-SNE, addressing the crowding problem with which previous attempts (Hinton & Roweis (2002); Cook et al. (2007)) had struggled, to visualize high-dimensional data through a projection produced by performing stochastic gradient descent on the KL divergence between two density functions in the original and projection spaces. Van Der Maaten (2014) accelerated t-SNE by developing a variant of the Barnes-Hut algorithm (Barnes & Hut (1986)), reducing the computational complexity from O(N^2) to O(N log N). After that, grounded in Riemannian geometry and algebraic topology, McInnes et al. (2018) introduced UMAP as an alternative to t-SNE. Leveraging the cross-entropy function as its loss, UMAP reduces the computation time by employing negative sampling from Word2Vec (Mikolov et al. (2013)) and edge sampling from LargeVis (Tang et al. (2015; 2016)) (Table 1). Moreover, they showed that UMAP generates more stable projection results than t-SNE over repetitions. On the other hand, there also exist algorithms that aim to capture the global structures of data. Isomap (Tenenbaum et al. (2000)) was proposed to approximate the geodesic distances of high-dimensional data and embed them in a lower dimension. Global t-SNE (Zhou & Sharpee (2018)) converted the joint probability distribution P in the high-dimensional space from a Gaussian to a Student's t-distribution, and proposed a variant of the KL divergence. By adding it to the original loss function of t-SNE, Global t-SNE assigns a relatively large penalty to a pair of points that are distant in the high-dimensional space but close in the projection space. Another example is topological autoencoders (Moor et al. (2020)), a deep-learning approach that uses a generative model to make the latent space resemble the high-dimensional space by appending a topological loss to the original reconstruction loss of autoencoders. However, they require a huge amount of time for hyperparameter exploration and training on a dataset, and they only focus on the global aspect of the data.
Unlike other techniques that propose a variation of loss functions within a single pipeline, UMATO is novel in that it preserves both structures by dividing the optimization into two phases; this makes it outperform the baselines with respect to the quality metrics in our experiments.

Hubs, landmarks, and anchors. Many dimensionality reduction techniques have tried to select sample points that better model the original space; these points are usually called hubs, landmarks, or anchors. Silva & Tenenbaum (2003) proposed Landmark Isomap, a landmark version of classical multidimensional scaling (MDS), to alleviate its computational cost. Building on Landmark Isomap, Yan et al. (2018) tried to retain the topological structures (i.e., homology) of high-dimensional data by approximating the geodesic distances of all data points. However, both techniques share the limitation that landmarks are chosen randomly, without considering their importance. UMATO uses a k-nearest-neighbor graph to extract significant hubs that can represent the overall skeleton of high-dimensional data. The work most similar to ours is At-SNE (Fu et al. (2019)), which optimizes anchor points and all other points with two different loss functions. However, since the anchors wander during the optimization and the KL divergence does not take distant points into account, it hardly captures the global structure. UMATO separates the optimization process into two phases, so that the hubs barely move but guide the other points, letting the subareas manifest the shape of the high-dimensional manifold in the projection. Applying different cross-entropy functions in each phase also helps preserve both structures.

3 UMAP.

Since UMATO shares the overall pipeline of UMAP (McInnes et al. (2018)), we briefly introduce UMAP in this section. Although UMAP is grounded in a sophisticated mathematical foundation, its computation can simply be divided into two steps, graph construction and layout optimization, a configuration similar to t-SNE. In this section, we succinctly explain the computation in an abstract manner. For more details about UMAP, please consult the original paper (McInnes et al. (2018)).

Graph Construction. UMAP starts by generating a weighted k-nearest-neighbor graph that represents the distances between data points in the high-dimensional space. Given an input dataset X = {x_1, ..., x_n}, the number of neighbors to consider k, and a distance metric d : X × X → [0, ∞), UMAP first computes N_i, the k-nearest neighbors of x_i with respect to d. Then, UMAP computes two parameters, ρ_i and σ_i, for each data point x_i to identify its local metric space. ρ_i is the nonzero distance from x_i to its nearest neighbor:

ρ_i = min_{j∈N_i} { d(x_i, x_j) | d(x_i, x_j) > 0 }. (1)

Using binary search, UMAP finds σ_i that satisfies:

Σ_{j∈N_i} exp(−max(0, d(x_i, x_j) − ρ_i)/σ_i) = log_2(k). (2)

Next, UMAP computes:

v_{j|i} = exp(−max(0, d(x_i, x_j) − ρ_i)/σ_i), (3)

the weight of the edge from a point x_i to another point x_j. To make it symmetric, UMAP computes v_ij = v_{j|i} + v_{i|j} − v_{j|i} · v_{i|j}, a single edge with combined weight using v_{j|i} and v_{i|j}. Note that v_ij indicates the similarity between points x_i and x_j in the original space. Let y_i be the projection of x_i in a low-dimensional projection space. The similarity between two projected points y_i and y_j is w_ij = (1 + a ||y_i − y_j||_2^{2b})^{−1}, where a and b are positive constants defined by the user.
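For illustration, here is a minimal NumPy sketch of the per-point calibration in Eqs. (1)-(2) above (function and variable names are our own): given the distances from one point to its k nearest neighbors, it computes ρ_i and then binary-searches for σ_i.

```python
import numpy as np

def calibrate_point(knn_dists, n_iter=64, tol=1e-5):
    """Given distances to the k nearest neighbors of one point, return
    (rho, sigma) with sum_j exp(-max(0, d_ij - rho)/sigma) = log2(k)."""
    rho = knn_dists[knn_dists > 0].min()          # Eq. (1)
    target = np.log2(len(knn_dists))              # right-hand side of Eq. (2)
    lo, hi = 0.0, np.inf
    sigma = 1.0
    for _ in range(n_iter):                       # binary search on sigma
        s = np.exp(-np.maximum(0.0, knn_dists - rho) / sigma).sum()
        if abs(s - target) < tol:
            break
        if s > target:                            # sigma too large -> shrink
            hi = sigma
            sigma = (lo + hi) / 2.0
        else:                                     # sigma too small -> grow
            lo = sigma
            sigma = sigma * 2.0 if np.isinf(hi) else (lo + hi) / 2.0
    return rho, sigma
```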
Setting both a and b to 1 is identical to using the Student's t-distribution to measure the similarity between two points in the projection space, as in t-SNE (Maaten & Hinton (2008)).

Layout Optimization. The goal of layout optimization is to find the y_i that minimize the difference (or loss) between v_ij and w_ij. Unlike t-SNE, UMAP employs the cross entropy

C_UMAP = Σ_{i≠j} [ v_ij · log(v_ij / w_ij) + (1 − v_ij) · log((1 − v_ij)/(1 − w_ij)) ] (4)

between v_ij and w_ij as the loss function. UMAP initializes the y_i through spectral embedding (Belkin & Niyogi (2002)) and iteratively optimizes their positions to minimize C_UMAP. Writing the output weight as w_ij = 1/(1 + a d_ij^{2b}), the attractive gradient is

∂C_UMAP/∂y_i|_+ = (−2ab d_ij^{2(b−1)}) / (1 + a d_ij^{2b}) · v_ij (y_i − y_j), (5)

and the repulsive gradient is

∂C_UMAP/∂y_i|_− = 2b / ((ε + d_ij^2)(1 + a d_ij^{2b})) · (1 − v_ij)(y_i − y_j), (6)

where ε is a small value added to prevent division by zero and d_ij is the Euclidean distance between y_i and y_j. For efficient optimization, UMAP leverages the negative sampling technique from Word2Vec (Mikolov et al. (2013)). After choosing a target point and its negative samples, the position of the target is updated with the attractive gradient, while the positions of the negative samples are updated with the repulsive gradient. Moreover, UMAP utilizes edge sampling (Tang et al. (2015; 2016)) to accelerate and simplify the optimization process (Table 1). In other words, UMAP randomly samples edges with probability proportional to their weights, and subsequently treats the selected ones as binary edges. Taking these sampling techniques into account, the modified objective function is

O = Σ_{(i,j)∈E} v_ij ( log(w_ij) + Σ_{k=1}^M E_{j_k∼P_n(j)} γ log(1 − w_{i j_k}) ). (7)

Here, v_ij and w_ij are the similarities in the high- and low-dimensional spaces respectively, M is the number of negative samples, and E_{j_k∼P_n(j)} indicates that j_k is sampled according to a noise distribution, P_n(j), from Word2Vec (Mikolov et al. (2013)).
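The per-edge update behind Eqs. (5)-(7) can be sketched as follows (a simplified NumPy version with our own naming, omitting the learning-rate schedule; following the convention above, the stated gradients are applied directly as movements, with the sampled edge treated as binary, i.e. v_ij = 1 for the attraction):

```python
import numpy as np

def update_edge(y, i, j, neg_idx, a=1.0, b=1.0, lr=1.0, eps=1e-3):
    """One sampled-edge update: attract y[i] toward y[j] (Eq. 5), then
    repel y[i] from each negative sample (Eq. 6, with v treated as 0)."""
    d = y[i] - y[j]
    d2 = (d * d).sum()
    grad = (-2.0 * a * b * d2 ** (b - 1.0)) / (1.0 + a * d2 ** b) * d
    y[i] += lr * grad                    # attractive move toward y[j]
    for n in neg_idx:                    # negative samples
        d = y[i] - y[n]
        d2 = (d * d).sum()
        grad = 2.0 * b / ((eps + d2) * (1.0 + a * d2 ** b)) * d
        y[i] += lr * grad                # repulsive move away from y[n]
    return y
```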
This work proposed a dimensionality reduction algorithm called Uniform Manifold Approximation with Two-phase Optimization (UMATO), which is an improved version of UMAP (Ref. [3] see below). UMATO has a two-phase optimization approach: global optimization to obtain the overall skeleton of data & local optimization to identify the local structures. 
SP:a9c70bdca13ee3800c633589a6ee028701e5bf51
A Reduction Approach to Constrained Reinforcement Learning
1 INTRODUCTION.

Contemporary approaches in reinforcement learning (RL) largely focus on optimizing the behavior of an agent against a single reward function. RL algorithms like value-function methods (Zou et al., 2019; Zheng et al., 2018) or policy optimization methods (Chen et al., 2019; Zhao et al., 2017) are widely used in real-world tasks. This can be sufficient for simple tasks. However, for complicated applications, designing a reward function that implicitly defines the desired behavior can be challenging. For instance, applications concerning risk (Geibel & Wysotzki, 2005; Chow & Ghavamzadeh, 2014; Chow et al., 2017), safety (Chow et al., 2018), or budget (Boutilier & Lu, 2016; Xiao et al., 2019) are naturally modelled by augmenting the RL problem with orthant constraints. Exploration suggestions, such as visiting all states as evenly as possible, can be modelled by using a vector to measure the behavior of the agent and finding a policy whose measurement vector lies in a convex set (Miryoosefi et al., 2019). To solve RL problems under constraints, existing methods either ensure convergence only for a specific family of RL algorithms, or treat the underlying RL algorithm as a black-box oracle to find individual policies and look for a mixed policy that randomizes among these individual policies. Though the second group of methods has the advantage of working with arbitrary RL algorithms that best suit the underlying problem, existing methods have practically infeasible memory requirements. To get an ε-approximate solution, they require storing O(1/ε) individual policies, and an exact solution requires storing infinitely many policies. This limits the prevalence of such methods, especially when the individual policies use deep neural networks. In this paper, we propose a novel reduction approach for the general convex constrained RL (C2RL) problem. Our approach retains the advantage of the second group of methods, yet requires storing at most constantly many policies. For a vector-valued Markov Decision Process (MDP) and any given target convex set, our method finds a mixed policy whose measurement vector lies in the target convex set, using any off-the-shelf RL algorithm that optimizes a scalar reward as an RL oracle. To do so, the C2RL problem is reduced to a distance minimization problem between a polytope and a convex set, and a novel variant of the Frank-Wolfe algorithm is proposed to solve this distance minimization problem. To find an ε-approximate solution in an m-dimensional vector-valued MDP, our method stores at most m + 1 policies, which improves the storage requirement from O(1/ε) (Le et al., 2019; Miryoosefi et al., 2019) to a constant. We also show that this m + 1 constant is worst-case optimal to ensure convergence for RL algorithms using deterministic policies. Moreover, our method introduces no extra hyper-parameters, which is favorable for practical usage. A preliminary experimental comparison demonstrates the performance of the proposed method and the sparsity of the policies found.

2 RELATED WORK.

For high-dimensional constrained RL, one line of approaches incorporates the constraint as a penalty signal into the reward function and makes updates in a multiple-time-scale scheme (Tessler et al., 2018; Chow & Ghavamzadeh, 2014).
When used with policy gradient or actor-critic algorithms (Sutton & Barto, 2018), this penalty signal guides the policy to converge to a constraint-satisfying one (Paternain et al., 2019; Chow et al., 2017). However, the convergence guarantee requires that the RL algorithm can find a single policy satisfying the constraint, hence ruling out methods that search for deterministic policies, such as Deep Q-Networks (DQN) (Mnih et al., 2013), Deep Deterministic Policy Gradient (DDPG) (Lillicrap et al., 2015), and their variants (Van Hasselt et al., 2015; Wang et al., 2016; Fujimoto et al., 2018; Barth-Maron et al., 2018). Another line of approaches uses a game-theoretic framework and is not tied to a specific family of RL algorithms. The constrained problem is relaxed to a zero-sum game, whose equilibrium is found by online learning (Agarwal et al., 2018). The game is played repeatedly; in each round, any RL algorithm can be used to find a best-response policy to play against a no-regret online learner. The mixed policy that is uniformly distributed over all played policies can be shown to converge to an optimal policy of the constrained problem (Freund & Schapire, 1999; Abernethy et al., 2011). Taking this approach, Le et al. (2019) use Lagrangian relaxation to solve the orthant-constraint case, and Miryoosefi et al. (2019) use conic duality to solve the convex-constraint case. However, since the convergence is established via the no-regret property, the policy found by these methods requires randomization among all policies found during the learning process, which limits their prevalence. Different from the game-theoretic approaches, we reduce C2RL to a distance minimization problem and propose a novel variant of the Frank-Wolfe (FW) algorithm to solve it. Our result builds on the recent finding that the standard FW algorithm emerges as computing the equilibrium of a special convex-concave zero-sum game (Abernethy & Wang, 2017). This connects our approach with previous approaches from the game-theoretic framework (Agarwal et al., 2018; Le et al., 2019; Miryoosefi et al., 2019). The main advantage of our reduction approach is that the convergence of the FW algorithm does not rely on the no-regret property of an online learner. Hence there is no need to introduce extra hyper-parameters, such as the learning rate of the online learner, and, intuitively, we can eliminate unnecessary policies to achieve better sparsity. To do so, we extend Wolfe's method for the minimum-norm-point problem (Wolfe, 1976) to solve our distance minimization problem. Throughout the learning process, we maintain an active policy set and constantly eliminate policies whose measurement vectors are affinely dependent on others. Unlike the norm function in Wolfe's method, our objective function is not strongly convex. Hence we cannot achieve the linear convergence of Wolfe's method as shown in Lacoste-Julien & Jaggi (2015). Instead, we analyze the complexity of our method based on techniques from Chakrabarty et al. (2014). A theoretical comparison between our method and various approaches in constrained RL is provided in Table 1.

3 PRELIMINARIES.

A vector-valued Markov decision process can be identified by a tuple {S, A, β, P, c}, where S is the set of states, A is the set of actions, and β is the initial state distribution. At the start of each episode, an initial state s_0 is drawn from the distribution β.
Then, at each step t = 0, 1, ..., the agent observes a state s_t ∈ S and takes an action a_t. After a_t is chosen, the state evolves to s_{t+1} ∈ S with probability P(s_{t+1} | s_t, a_t). However, instead of a scalar reward, in our setting the agent receives an m-dimensional vector c_t ∈ R^m that may implicitly contain measurements of reward, risk, or violation of other constraints. The episode ends after a certain number of steps, called the horizon, or when a terminal state is reached. Actions are typically selected according to a policy π, where π(s) is a distribution over actions for any s ∈ S. Policies that take a single action for any state are deterministic policies, and can be identified with a mapping π : S → A. The set of all deterministic policies is denoted by Π. For a discount factor γ ∈ [0, 1), the discounted long-term measurement vector of a policy π ∈ Π is defined as

c(π) := E( Σ_{t=0}^T γ^t c_t(s_t, π(s_t)) ), (1)

where the expectation is over trajectories generated by the described random process. Unlike the unconstrained setting, for a constrained RL problem it is possible that all feasible policies are non-deterministic (see Appendix D for an example). This limits the usage of RL algorithms that search for deterministic policies in the constrained RL setting. One workaround is to use mixed policies. For a set of policies U, a mixed policy is a distribution over U, and the set of all mixed policies over U is denoted by ∆(U). To execute a mixed policy µ ∈ ∆(U), we first select a policy π ∈ U according to π ∼ µ(π), and then execute π for the entire episode. Altman (1999) shows that any achievable c(·) can be achieved by some mixed deterministic policy µ ∈ ∆(Π). Therefore, though an off-the-shelf RL algorithm may not converge to any constraint-satisfying policy, it can be used as a subroutine to find individual policies (possibly deterministic), and a randomization among these policies can converge to a feasible policy. The discounted long-term measurement vector of a mixed policy µ ∈ ∆(Π) is defined similarly:

c(µ) := E_{π∼µ}( c(π) ) = Σ_{π∈Π} µ(π) c(π). (2)

For a mixed policy µ ∈ ∆(U), its active set is defined to be the set of policies with non-zero weights, A := {π ∈ U | µ(π) > 0}. The memory requirement of storing µ is then proportional to the size of its active set. Since a mixed policy can be interpreted as a convex combination of the policies in its active set, in the following, the term sparsity of a mixed policy refers to the sparsity of this combination. Our learning problem, convex constrained reinforcement learning (C2RL), is to find a policy whose expected long-term measurement vector lies in a given convex target set Ω ⊂ R^m; i.e., our target is to find µ* such that

c(µ*) ∈ Ω. (C2RL) (3)

Any policy µ* that satisfies c(µ*) ∈ Ω is called a feasible policy, and a C2RL problem is feasible if there exist feasible policies. In the following, we assume the C2RL problem is feasible.

4 APPROACH, ALGORITHM AND ANALYSIS.

We now show how C2RL (3) can be reduced to a distance minimization problem (7) between a polytope and a convex set. A novel Frank-Wolfe-type algorithm is then proposed to solve the distance minimization problem, followed by a theoretical analysis of the convergence and sparsity of the proposed method.
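To give a feel for this reduction, here is a highly simplified Python sketch of a Frank-Wolfe-style loop for C2RL. This is entirely our own illustration of the idea, not the authors' algorithm: we assume an oracle `rl_oracle(w)` that returns a policy (approximately) maximizing the scalarized reward ⟨w, c_t⟩ together with its measurement vector, and a projection `project_onto_target` onto Ω.

```python
import numpy as np

def c2rl_frank_wolfe(rl_oracle, project_onto_target, m, n_iters=100):
    """Sketch: minimize dist(c(mu), Omega)^2 over mixed policies. The FW
    linear-minimization step is answered by a scalar-reward RL oracle."""
    policies, weights = [], []
    c_mix = np.zeros(m)                             # c(mu) of current mixture
    for t in range(n_iters):
        grad = c_mix - project_onto_target(c_mix)   # gradient of squared distance
        pi, c_pi = rl_oracle(-grad)                 # best policy for reward <-grad, c>
        policies.append(pi)
        gamma = 2.0 / (t + 2.0)                     # standard FW step size
        weights = [w * (1.0 - gamma) for w in weights] + [gamma]
        c_mix = (1.0 - gamma) * c_mix + gamma * c_pi
    return policies, weights                        # a mixed policy
```

Note that this naive loop keeps every policy it finds; the paper's contribution is a Wolfe-style variant that maintains an active set of at most m + 1 affinely independent policies, eliminating the rest.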
This paper presents a reduction approach to tackle the optimization problem of constrained RL. They propose a Frank-Wolfe-type algorithm for the task, which avoids many shortcomings of previous methods, such as their memory complexity. They prove that their algorithm can find an $\epsilon$-approximate solution with $O(1/\epsilon)$ invocations. They also show the power of their algorithm with experiments in a grid-world navigation task, though the task looks relatively simple.
SP:fd70696898c5c725ad789565265274a37a6c2ca0
Learnable Uncertainty under Laplace Approximations
Laplace approximations are classic, computationally lightweight means for constructing Bayesian neural networks (BNNs). As in other approximate BNNs, one cannot necessarily expect the induced predictive uncertainty to be calibrated. Here we develop a formalism to explicitly "train" the uncertainty in a way that is decoupled from the prediction itself. To this end we introduce uncertainty units for Laplace-approximated networks: hidden units with zero weights that can be added to any pre-trained, point-estimated network. Since these units are inactive, they do not affect the predictions. But their presence changes the geometry (in particular the Hessian) of the loss landscape around the point estimate, thereby affecting the network's uncertainty estimates under a Laplace approximation. We show that such units can be trained via an uncertainty-aware objective, making the Laplace approximation competitive with more expensive alternative uncertainty-quantification frameworks.

1 INTRODUCTION.

The point estimates of neural networks (NNs), constructed as maximum a posteriori (MAP) estimates via (regularized) empirical risk minimization, empirically achieve high predictive performance. However, they tend to underestimate the uncertainty of their predictions, leading to an overconfidence problem (Hein et al., 2019), which could be disastrous in safety-critical applications such as autonomous driving. Bayesian inference offers a principled path to overcome this issue. The goal is to turn a "vanilla" NN into a Bayesian neural network (BNN), where the posterior distribution over the network's weights is inferred via Bayes' rule and subsequently taken into account when making predictions. Since the cost of exact posterior inference in a BNN is often prohibitive, approximate Bayesian methods are employed instead. Laplace approximations (LAs) are classic methods for this purpose (MacKay, 1992b). The key idea is to obtain an approximate posterior by "surrounding" a MAP estimate of a network with a Gaussian, based on the loss landscape's geometry around it. A standard practice in LAs is to tune a single hyperparameter, the prior precision, which is inflexible (Ritter et al., 2018b; Kristiadi et al., 2020). Here, we aim at improving the flexibility of uncertainty tuning in LAs. To this end, we introduce Learnable Uncertainty under Laplace Approximations (LULA) units, which are hidden units associated with zeroed weights. They can be added to the hidden layers of any MAP-trained network. Because they are inactive, such units do not affect the prediction of the underlying network. However, they can still contribute to the Hessian of the loss with respect to the parameters, and hence induce additional structure in the posterior covariance under a LA. LULA units can be trained via an uncertainty-aware objective (Hendrycks et al., 2019; Hein et al., 2019, etc.), such that they improve the predictive uncertainty-quantification (UQ) performance of the Laplace-approximated BNN. Figure 1 demonstrates trained LULA units in action: they improve the UQ performance of a standard LA, while keeping the MAP predictions intact in both regression and classification tasks.
In summary, we (i) introduce LULA units: inactive hidden units for uncertainty tuning of a LA, (ii) bring a robust training technique from the non-Bayesian literature to the training of these units, and (iii) show empirically that LULA-augmented Laplace-approximated BNNs can yield better UQ performance than both previous tuning techniques and contemporary, more expensive baselines.

2 BACKGROUND.

2.1 BAYESIAN NEURAL NETWORKS.

Let f : R^n × R^d → R^k, defined by (x, θ) ↦ f(x; θ), be an L-layer neural network. Here, θ is the concatenation of all the parameters of f. Suppose that the sizes of the layers of f are given by the sequence (n_l ∈ Z_{>0})_{l=1}^L. Then, for each l = 1, ..., L, the l-th layer of f is defined by

a^(l) := W^(l) h^(l−1) + b^(l), with h^(l) := ϕ(a^(l)) if l < L, and h^(l) := a^(l) if l = L, (1)

where W^(l) ∈ R^{n_l × n_{l−1}} and b^(l) ∈ R^{n_l} are the weight matrix and bias vector of the layer, and ϕ is a component-wise activation function. We call the vector h^(l) ∈ R^{n_l} the l-th hidden units of f. Note that, by convention, we set n_0 := n and n_L := k, while h^(0) := x and h^(L) := f(x; θ).

From the Bayesian perspective, the ubiquitous training formalism of neural networks amounts to MAP estimation: the empirical risk and the regularizer are interpretable as the negative log-likelihood under an i.i.d. dataset D := {x_i, y_i}_{i=1}^m and the negative log-prior, respectively. That is, the loss function is interpreted as

L(θ) := − Σ_{i=1}^m log p(y_i | f(x_i; θ)) − log p(θ) = − log p(θ | D). (2)

In this view, the de facto weight-decay regularizer amounts to a zero-mean isotropic Gaussian prior p(θ) = N(θ | 0, λ^{−1} I) with a scalar precision hyperparameter λ. Meanwhile, the usual softmax and quadratic output losses correspond to Categorical and Gaussian distributions over y_i in the cases of classification and regression, respectively. MAP-trained neural networks have been shown to be overconfident (Hein et al., 2019), and BNNs can mitigate this issue (Kristiadi et al., 2020). They quantify epistemic uncertainty by inferring the full posterior distribution of the parameters θ (instead of just a single point estimate as in MAP training). Given the posterior p(θ | D), the prediction for any test point x ∈ R^n is obtained via marginalization:

p(y | x, D) = ∫ p(y | f(x; θ)) p(θ | D) dθ, (3)

which captures the uncertainty encoded in the posterior.

2.2 LAPLACE APPROXIMATIONS.

In deep learning, since the exact Bayesian posterior is intractable, approximate Bayesian inference methods are used instead. An important family of such methods is formed by LAs. Let θ_MAP be the minimizer of (2), which corresponds to a mode of the posterior distribution. A LA locally approximates the posterior using a Gaussian

p(θ | D) ≈ N(θ | θ_MAP, Σ) := N(θ | θ_MAP, (∇²L|_{θ_MAP})^{−1}).

Thus, LAs construct an approximate Gaussian posterior around θ_MAP, whose precision equals the Hessian of the loss at θ_MAP, i.e. the "curvature" of the loss landscape at θ_MAP. While the covariance of a LA is tied to the weight decay of the loss, a common practice in LAs is to tune the prior precision under some objective in a post-hoc manner. In other words, the MAP estimation and the covariance inference are treated as separate, independent processes. For example, given a fixed MAP estimate, one can maximize the log-likelihood of a LA w.r.t. the prior precision to obtain the covariance.
This hyperparameter tuning can thus be thought of as uncertainty tuning. A recent example of LAs is the Kronecker-factored Laplace (KFL, Ritter et al., 2018b). The key idea is to approximate the Hessian matrix with the layer-wise Kronecker factorization scheme proposed by Heskes (2000); Martens & Grosse (2015). That is, for each layer l = 1, ..., L, KFL assumes that the Hessian corresponding to the l-th weight matrix W^(l) ∈ R^{n_l × n_{l−1}} can be written as the Kronecker product G^(l) ⊗ A^(l) for some G^(l) ∈ R^{n_l × n_l} and A^(l) ∈ R^{n_{l−1} × n_{l−1}}. This assumption brings the inversion cost of the Hessian down to Θ(n_l^3 + n_{l−1}^3), instead of the usual Θ(n_l^3 n_{l−1}^3). The approximate Hessian can easily be computed via tools such as BackPACK (Dangel et al., 2020). Even with a closed-form Laplace-approximated posterior, the predictive distribution (3) in general does not have an analytic solution, since f is nonlinear. Instead, one can employ Monte-Carlo (MC) integration by sampling from the Gaussian:

p(y | x, D) ≈ (1/S) Σ_{s=1}^S p(y | f(x; θ_s)); θ_s ∼ N(θ | θ_MAP, Σ),

for S samples. In the case of binary classification with f : R^n × R^d → R, one can use the following well-known approximation, due to MacKay (1992a):

p(y = 1 | x, D) ≈ σ( f(x; θ_MAP) / √(1 + (π/8) v(x)) ), (4)

where σ is the logistic-sigmoid function and v(x) is the marginal variance of the network output f(x), which is often approximated via a linearization of the network around the MAP estimate:

v(x) ≈ (∇_θ f(x; θ)|_{θ_MAP})^T Σ (∇_θ f(x; θ)|_{θ_MAP}). (5)

(This approximation has also been generalized to multi-class classification by Gibbs (1997).) In particular, as v(x) increases, the predictive probability of y = 1 goes to 0.5 and therefore the uncertainty increases. This relationship has also been shown empirically in multi-class classification with MC integration (Kristiadi et al., 2020).
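As an illustration of Eqs. (4)-(5), here is a minimal PyTorch sketch of the linearized predictive for a real-valued network head (our own code, not the authors'; it assumes `net(x, theta)` applies the flat parameter vector functionally, that `theta_map` has `requires_grad=True`, and that `x` is a single input so the logit is a scalar):

```python
import torch

def laplace_binary_predictive(net, x, theta_map, Sigma):
    """MacKay's probit approximation, Eqs. (4)-(5): linearize f around
    theta_MAP, compute v(x) = g^T Sigma g with g the parameter gradient,
    then squash the MAP logit by sqrt(1 + (pi/8) v(x))."""
    f = net(x, theta_map)                     # scalar logit f(x; theta_MAP)
    g = torch.autograd.grad(f, theta_map)[0].flatten()
    v = g @ Sigma @ g                         # Eq. (5), marginal variance
    kappa = 1.0 / torch.sqrt(1.0 + (torch.pi / 8.0) * v)
    return torch.sigmoid(kappa * f)           # Eq. (4)
```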
3 LULA UNITS.

The problem with the standard uncertainty tuning in LAs is that the only degree of freedom available for the optimization is the scalar prior precision, which is inflexible. [Footnote 1: While one can also use a non-scalar prior precision, it appears to be uncommon in deep learning. In any case, such an element-wise weight-cost would interact with the training procedure.] We shall address this by introducing "uncertainty units", which can be added on top of the hidden units of any MAP-trained network (Section 3.1) and can be trained via an uncertainty-aware loss (Section 3.2).

3.1 CONSTRUCTION.

[Figure 2: The additional units are represented by the additional block at the bottom of each layer. Dashed lines correspond to the free parameters Ŵ^(1), ..., Ŵ^(L−1), while dotted lines correspond to the zero weights.]

Let f : R^n × R^d → R^k be a MAP-trained L-layer network with parameters θ_MAP = {W^(l)_MAP, b^(l)_MAP}_{l=1}^L. The premise of our method is simple: at each hidden layer l = 1, ..., L−1, suppose we add m_l ∈ Z_{≥0} additional hidden units, under the original activation function, to h^(l). As a consequence, we need to augment the weight matrices to accommodate them. Consider the following construction: for each layer l = 1, ..., L−1 of the network, we expand W^(l) and b^(l) to obtain the block matrix and vector

W̃^(l) := ( W^(l)_MAP  0 ; Ŵ^(l)_1  Ŵ^(l)_2 ) ∈ R^{(n_l+m_l) × (n_{l−1}+m_{l−1})};  b̃^(l) := ( b^(l)_MAP ; b̂^(l) ) ∈ R^{n_l+m_l}, (6)

respectively, with m_0 = 0 since we do not add additional units to the input. For l = L, we define

W̃^(L) := ( W^(L)_MAP , 0 ) ∈ R^{k × (n_{L−1}+m_{L−1})};  b̃^(L) := b^(L)_MAP ∈ R^k,

so that the output dimensionality is unchanged. For brevity, we denote Ŵ^(l) := (Ŵ^(l)_1, Ŵ^(l)_2). Refer to Figure 2 for an illustration and Algorithm 2 in Appendix B for a step-by-step summary. Taken together, we denote the resulting augmented network as f̃ and the resulting parameter vector as θ̃_MAP ∈ R^{d̃}, where d̃ is the resulting number of parameters. Note that we can easily extend this construction to convolutional nets by expanding the "channel" dimension of a hidden layer. [Footnote 2: E.g., if the hidden units form a 3D array of (channel × height × width), then we expand the first dimension.]

Let us inspect the implications of this construction. For each l = 1, ..., L−1, since they are zero, the entries in the upper-right quadrant of W̃^(l) deactivate the m_{l−1} additional hidden units in the previous layer, so those units do not contribute to the original hidden units in the l-th layer. Meanwhile, the sub-matrix Ŵ^(l) and the sub-vector b̂^(l) contain the parameters of the additional m_l hidden units in the l-th layer. We are free to choose the values of these parameters, since the following proposition guarantees that they will not change the output of the network (the proof is in Appendix A).

Proposition 1. Let f : R^n × R^d → R^k be a MAP-trained L-layer network parametrized by θ_MAP. Suppose f̃ : R^n × R^{d̃} → R^k and θ̃_MAP ∈ R^{d̃} are obtained via the previous construction. For any input x ∈ R^n, we have f̃(x; θ̃_MAP) = f(x; θ_MAP).

So far, it looks like all our changes to the network are inconsequential. However, they do affect the curvature of the landscape of L, [Footnote 3: More formally: the principal curvatures of the graph of L, seen as a d-dimensional submanifold of R^{d+1}.] and thus the uncertainty arising in a LA. Let θ̃ be a random variable in R^{d̃} and θ̃_MAP be an instance of it. Suppose we have a Laplace-approximated posterior p(θ̃ | D) ≈ N(θ̃ | θ̃_MAP, Σ̃) over θ̃, where the covariance Σ̃ is the inverse Hessian of the negative log-posterior w.r.t. the augmented parameters at θ̃_MAP. Then, Σ̃ contains additional dimensions (and thus, in general, additional structured, non-zero uncertainty) absent in the original network, which depend on the values of the free parameters {Ŵ^(l), b̂^(l)}_{l=1}^{L−1}.

The implication of the previous finding can be seen clearly in real-valued networks with diagonal LA posteriors. The following proposition shows that, under such a network and posterior, the construction above affects the output uncertainty of the original network f (the proof is in Appendix A).

Proposition 2. Suppose f : R^n × R^d → R is a real-valued network and f̃ is constructed as above. Suppose further that diagonal Laplace-approximated posteriors N(θ | θ_MAP, diag(σ)) and N(θ̃ | θ̃_MAP, diag(σ̃)) are employed. Using the linearization (5), for any input x ∈ R^n, the variance of the output f̃(x; θ̃) is at least that of f(x; θ).

In summary, the construction along with Propositions 1 and 2 implies that the additional hidden units we have added to the original network are uncertainty units under LAs, i.e.
hidden units that contribute only to the Laplace-approximated uncertainty and not to the predictions. This property gives rise to the name Learnable Uncertainty under Laplace Approximations (LULA) units.
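To make the construction in Eq. (6) concrete, here is a minimal NumPy sketch (our own illustration) of augmenting one hidden layer's weights with LULA units; by Proposition 1, the forward pass is unchanged because the new units are cut off by the zero block in the next layer.

```python
import numpy as np

def add_lula_units(W_map, b_map, m_prev, m_cur, scale=0.1):
    """Eq. (6): expand an (n_l x n_{l-1}) weight matrix and bias to
    ((n_l + m_cur) x (n_{l-1} + m_prev)). The upper-right zero block keeps
    the original units independent of the new ones; W_hat and b_hat are
    free parameters that only shape the Laplace covariance."""
    n_cur, n_prev = W_map.shape
    W_hat = scale * np.random.randn(m_cur, n_prev + m_prev)  # free parameters
    W_aug = np.block([
        [W_map, np.zeros((n_cur, m_prev))],  # zero block: predictions unchanged
        [W_hat[:, :n_prev], W_hat[:, n_prev:]],
    ])
    b_aug = np.concatenate([b_map, scale * np.random.randn(m_cur)])
    return W_aug, b_aug
```

The free blocks (here randomly initialized with an assumed `scale`) are exactly the parameters that the uncertainty-aware objective of Section 3.2 would then train.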
The paper proposes a post-hoc uncertainty tuning pipeline for Bayesian neural networks. After getting the point estimate, it adds extra dimensions to the weight matrices and hidden layers, which has no effect on the network output, with the hope that it would influence the variance of the original network weights under the Laplacian approximation. More specifically, it tunes the extra weights by optimizing another objective borrowed from the non-Bayesian robust learning literature, which encourages low uncertainty over real (extra, validation) data, and high uncertainty over manually constructed, out-of-distribution data.
SP:df5fec4899d97f7d5df259a013f467e038895669
Selfish Sparse RNN Training
1 INTRODUCTION.

Recurrent neural networks (RNNs) (Elman, 1990), with a variant of long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997), have been highly successful in various fields, including language modeling (Mikolov et al., 2010), machine translation (Kalchbrenner & Blunsom, 2013), question answering (Hirschman et al., 1999; Wang & Jiang, 2017), etc. As a standard task for evaluating models' ability to capture long-range context, language modeling has witnessed great progress with RNNs. Mikolov et al. (2010) demonstrated that RNNs perform much better than backoff models for language modeling. After that, various novel RNN architectures such as Recurrent Highway Networks (RHNs) (Zilly et al., 2017), Pointer Sentinel Mixture Models (Merity et al., 2017), the Neural Cache Model (Grave et al., 2017), Mixture of Softmaxes (AWD-LSTM-MoS) (Yang et al., 2018), and ordered-neurons LSTM (ON-LSTM) (Shen et al., 2019), as well as effective regularization like variational dropout (Gal & Ghahramani, 2016), weight tying (Inan et al., 2017), and DropConnect (Merity et al., 2018), have been proposed to significantly improve the performance of RNNs. At the same time, as the performance of deep neural networks (DNNs) improves, the resources required to train and deploy deep models are becoming prohibitively large. To tackle this problem, various dense-to-sparse methods have been developed, including but not limited to pruning (LeCun et al., 1990; Han et al., 2015), Bayesian methods (Louizos et al., 2017a; Molchanov et al., 2017), distillation (Hinton et al., 2015), L1 regularization (Wen et al., 2018), and low-rank decomposition (Jaderberg et al., 2014). Given a pre-trained model, these methods work effectively to accelerate inference. Recently, some dynamic sparse training (DST) approaches (Mocanu et al., 2018; Mostafa & Wang, 2019; Dettmers & Zettlemoyer, 2019; Evci et al., 2020) have been proposed to bring efficiency to both the training phase and the inference phase by dynamically changing the sparse connectivity during training. However, previous approaches are designed mainly for CNNs. For RNNs, the long-term dependencies and repetitive usage of recurrent cells make them more difficult to sparsify (Kalchbrenner et al., 2018; Evci et al., 2020). More importantly, the state-of-the-art performance achieved by RNNs on language modeling is mainly associated with the optimizer, averaged stochastic gradient descent (ASGD) (Polyak & Juditsky, 1992), which is not compatible with existing DST approaches. The above-mentioned problems heavily limit the performance of off-the-shelf sparse training methods in the RNN field. For instance, while "The Rigged Lottery" (RigL) achieves state-of-the-art sparse training results with various CNNs, it fails to match the performance of the iterative pruning method in the RNN setting (Evci et al., 2020). In this paper, we introduce an algorithm to train sparse RNNs with a fixed computational cost throughout training. We abbreviate our sparse RNN training method as Selfish-RNN because our method encourages cell weights to obtain their parameters selfishly. The main contributions of this work are five-fold:

• We propose an algorithm to train sparse RNNs from scratch with a fixed number of parameters.
This advantage constrains the training costs to a fraction of the costs needed for training a dense model, allowing us to choose suitable sparsity levels for different types of training platforms.

• We introduce SNT-ASGD, a sparse variant of the non-monotonically triggered averaged stochastic gradient descent optimizer, which overcomes the over-sparsification problem of the original NT-ASGD (Merity et al., 2018) caused by dynamic sparse training.

• We demonstrate state-of-the-art sparse training performance with various RNN models, including stacked LSTMs (Zaremba et al., 2014), RHNs, and ordered-neurons LSTM (ON-LSTM) on the Penn TreeBank (PTB) dataset (Marcus et al., 1993), and AWD-LSTM-MoS on the WikiText-2 dataset (Melis et al., 2018).

• We present an approach to analyze the evolutionary trajectory of the sparse connectivity optimized by dynamic sparse training from a graph perspective. With this approach, we show that there exist many good structural local optima (sparse sub-networks with equally good performance) in RNNs, which can be found in an efficient and robust manner.

• Our analysis shows two surprising phenomena in the setting of RNNs, contrary to CNNs: (1) random-based weight growth performs better than gradient-based weight growth, and (2) a uniform sparse distribution performs better than Erdős-Rényi (ER) sparse initialization. These results highlight the need to choose different sparse training methods for different architectures.

2 RELATED WORK.

Dense-to-Sparse. A large number of works operate on a dense network to yield a sparse network. We divide them into three categories based on the training cost in terms of memory and computation. (1) Iterative Pruning and Retraining. To the best of our knowledge, pruning was first proposed by Janowsky (1989) and Mozer & Smolensky (1989) to yield a sparse network from a pre-trained network. Recently, Han et al. (2015) brought it back to people's attention with the idea of iterative pruning and retraining on modern architectures. Some recent works were proposed to further reduce the amount of iterative retraining, e.g., Narang et al. (2017); Zhu & Gupta (2017). Frankle & Carbin (2019) proposed the Lottery Ticket Hypothesis, showing that sub-networks ("winning tickets") obtained via iterative pruning, combined with their "lucky" initialization, can outperform the dense networks. Zhou et al. (2019) discovered that the sign of the initialization is the crucial factor that makes the "winning tickets" work. Our work shows that there exists a much more efficient and robust way to find those "winning tickets" without any special initialization. The aforementioned methods require at least the same training cost as training a dense model, sometimes even more, as a pre-trained dense model is involved. We compare our method with the state-of-the-art pruning method proposed by Zhu & Gupta (2017) in Appendix I. With fewer training costs, our method is able to discover sparse networks that achieve lower test perplexity than iterative pruning. (2) Learning Sparsity During Training. There are also some works attempting to learn sparse networks during training. Louizos et al. (2017b) and Wen et al. (2018) are examples that gradually enforce the network weights to zero via L0 and L1 regularization, respectively. Dai et al. (2018) proposed a singular value decomposition (SVD) based method to accelerate the training process for LSTMs.
Liu et al. (2020a) proposed Dynamic Sparse Training to discover sparse structure by learning binary masks associated with network weights. However, these methods start with a fully dense network, and hence are not memory efficient. (3) One-Shot Pruning. Some works aim to find sparse neural networks by pruning once prior to the main training phase, based on some saliency criterion, such as connection sensitivity (Lee et al., 2019), signal propagation (Lee et al., 2020), and gradient signal preservation (Wang et al., 2020). These techniques can find sparse networks before the standard training, but at least one iteration of the dense model needs to be trained to identify the sparse sub-networks, and therefore the pruning process is not applicable to memory-limited scenarios. Additionally, one-shot pruning generally cannot match the performance of dynamic sparse training, especially at extreme sparsity levels (Wang et al., 2020).

Sparse-to-Sparse. Recently, many works have emerged that train intrinsically sparse neural networks from scratch to obtain efficiency both for training and for inference. (1) Static Sparse Training. Mocanu et al. (2016) introduced intrinsically sparse networks by exploring the scale-free and small-world topological properties in Restricted Boltzmann Machines. Later, some works extended static sparse training to CNNs based on expander graphs and showed comparable performance (Prabhu et al., 2018; Kepner & Robinett, 2019). (2) Dynamic Sparse Training. Mocanu et al. (2018) introduced Sparse Evolutionary Training (SET), which initializes a sparse network and dynamically changes the sparse connectivity with a simple remove-and-regrow strategy. At the same time, DeepR (Bellec et al., 2018) trained very sparse networks by sampling the sparse connectivity based on a Bayesian posterior; the iterative configuration updates were proved to converge to a stationary distribution. Mostafa & Wang (2019) introduced Dynamic Sparse Reparameterization (DSR) to train sparse neural networks while dynamically adjusting the sparsity levels of different layers. Sparse Networks from Scratch (SNFS) (Dettmers & Zettlemoyer, 2019) improved sparse training performance by growing free weights according to their momentum; it requires extra computation and memory to update the dense momentum tensor at each iteration. Further, Evci et al. (2020) introduced RigL, which activates weights with the highest-magnitude gradients. This approach grows weights expected to receive gradients with high magnitudes, while amortizing the large memory requirements and computational cost caused by momentum. Due to the inherent limitations of deep learning software and hardware libraries, all of the above works simulate sparsity using a binary mask over the weights. More recently, Liu et al. (2020b) demonstrated the potential of DST by developing, for the first time, an independent software framework to train very large, truly sparse MLPs with SET. However, all these works mainly focus on CNNs and MLPs, and they are not designed to match state-of-the-art performance for RNNs. We summarize the properties of all approaches compared in this paper in Table 1. As with SET, our method guarantees Backward Sparse, which does not require any extra information from the removed weights. Additionally, we discuss the differences among SET, pruning techniques, and our method in Appendix H.

3 SPARSE RNN TRAINING.
Our sparse RNN training method is illustrated in Figure 1 with LSTM as a specific case of RNNs. Note that our method can easily be applied to any other RNN variant; the only difference is the number of cell weights. Before training, we randomly initialize each layer at the same sparsity (the fraction of zero-valued weights), so that the training costs are proportional to those of the dense model from the beginning. To explore more sparse structures while maintaining a fixed sparsity level, we need to optimize the sparse connectivity together with the corresponding weights (a combinatorial optimization problem). We apply dynamic sparse connectivity and SNT-ASGD to handle this combinatorial optimization problem. The pseudocode of the full training procedure of our algorithm is shown in Algorithm 1.
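The core remove-and-regrow step of SET-style dynamic sparse training, which Selfish-RNN builds on, can be sketched as follows (our own simplified NumPy illustration; the actual method additionally redistributes parameters non-uniformly across cell weights and trains with SNT-ASGD):

```python
import numpy as np

def prune_and_regrow(W, mask, prune_frac=0.3):
    """One dynamic-sparse-connectivity step: drop the smallest-magnitude
    active weights, then grow the same number of new connections at random
    inactive positions, keeping the overall sparsity level fixed."""
    active = np.flatnonzero(mask)
    n_prune = int(prune_frac * active.size)
    # Remove: zero out the n_prune active weights with smallest |w|.
    drop = active[np.argsort(np.abs(W.flat[active]))[:n_prune]]
    mask.flat[drop] = 0
    W.flat[drop] = 0.0
    # Regrow: activate n_prune random zero positions (random growth, which
    # the paper reports works better than gradient-based growth for RNNs).
    inactive = np.flatnonzero(mask == 0)
    grow = np.random.choice(inactive, size=n_prune, replace=False)
    mask.flat[grow] = 1
    W.flat[grow] = np.random.randn(n_prune) * 0.01  # small re-initialization
    return W, mask
```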
In this paper, the authors studied the possibility of sparsity exploration in Recurrent Neural Networks (RNNs) training. The main contributions include two parts: (1) Selfish-RNN training algorithm in Section 3.1 (2) SNT-ASGD optimizer in Section 3.2. The key idea of the Selfish-RNN training algorithm is a non-uniform redistribution across cell weights for better regularization. The authors mentioned previous sparse training techniques mainly focus on Multilayer Perceptron Networks (MLPs) and Convolutional Neural Networks (CNNs) rather than RNNs. This claim seems to be doubtful because one-time SVD + fine-tuning usually works very well for most RNN training applications in the industry.
SP:2a2368b5bc6b59f66af75ea37f4cbc19c8fcf50f
Adaptive Spatial-Temporal Inception Graph Convolutional Networks for Multi-step Spatial-Temporal Network Data Forecasting
1 INTRODUCTION.

Spatial-temporal data forecasting has attracted attention from researchers due to its wide range of applications and the specific characteristics of spatial-temporal data. Typical applications include mobile traffic forecasting (He et al., 2019), road traffic condition forecasting (Song et al., 2020; Yu et al., 2017; Guo et al., 2019; Zheng et al., 2020; Li et al., 2017), passenger demand forecasting for on-demand vehicle sharing services (Bai et al., 2019), and geo-sensory time series prediction (Liang et al., 2018). Accurate forecasting is the foundation of many real-world applications, such as intelligent telecom network operation and Intelligent Transportation Systems (ITS). Specifically, accurate traffic forecasting can help transportation agencies better control traffic scheduling and reduce traffic congestion; predicting the traffic volumes of a wireless telecommunication network plays an important role in network operation and optimization, for example by helping to infer accurate sleep periods (low-traffic periods) of base stations to save energy without sacrificing customer experience. However, as is well known, accurate spatial-temporal data forecasting faces multiple challenges. First, the data inherently exhibit complex spatial-temporal correlations. In the spatial-temporal graph, different neighbors may have different impacts on the central location at the same time step, as shown by the bold lines in Figure 1; these are called spatial correlations. Different historical observations of the same location influence its future moments differently due to temporal correlations. The observations of different neighbors at historical moments can directly affect the central node at future time steps due to spatial-temporal joint correlations. As shown in Figure 1, information in the spatial-temporal network can propagate along the spatial and temporal dimensions simultaneously, and the transmission process can be discontinuous due to complex external factors, which results in spatial-temporal joint correlations in the data over short periods. Second, spatial-temporal data are heterogeneous in both the spatial and temporal dimensions (Song et al., 2020). Nodes in different regions of the graph have different properties and local spatial structures, so the corresponding data distributions can differ. For example, the traffic flow distributions of urban and suburban areas are quite different: traffic in urban areas is denser, while traffic in suburban areas is relatively sparse. Besides, the traffic flow in the same region also exhibits heterogeneity across different time periods. For example, mobile traffic in a business district decreases at night compared to the daytime, while the opposite holds in a residential district. In addition, multi-step time series forecasting is often accompanied by error accumulation. Typical methods like RNNs often suffer from error accumulation due to iterative forecasting, leading to rapid deterioration of long-term prediction accuracy (Yu et al., 2017; Zheng et al., 2020). Most previous work aims to address the above challenges. To model the spatial-temporal dependency, STGCN (Yu et al., 2017) and DCRNN (Li et al., 2017) extract spatial and temporal correlations separately. ASTGCN (Guo et al., 2019) introduced spatial and temporal attention to model the dynamic spatial and temporal correlations. STG2Seq (Bai et al.
STG2Seq (Bai et al., 2019) aimed to use GCN to capture spatial and temporal correlations simultaneously. However, none of these methods consider spatial-temporal joint correlations and heterogeneity. Different from the above methods, STSGCN (Song et al., 2020) used multiple local spatial-temporal graphs to model the spatial-temporal synchronous correlations and spatial-temporal heterogeneity of locally adjacent time steps. However, STSGCN can only model the spatial-temporal synchronous correlations of its predefined local spatial-temporal graphs, and its structure is complex.

In this paper, we propose a novel model called ASTI-GCN (Adaptive Spatial-Temporal Inception Graph Convolutional Networks) to address the above issues in multi-step spatial-temporal data forecasting. We propose spatial-temporal joint convolution to directly model spatial-temporal joint correlations without introducing elaborately constructed mechanisms, and we introduce the inception mechanism to build multi-scale spatial-temporal features that suit graph nodes with different properties. Then, to model heterogeneity, we construct the Spatial-Temporal Inception Graph Convolution Module, which combines the spatial-temporal inception mechanism with graph attention so that nodes with different properties are handled adaptively. After multiple spatial-temporal inception GCMs, two decoder modules, a sequence decoder and a short-term decoder, directly establish the relationships between historical and future time steps to alleviate error accumulation. Overall, our main contributions are summarized as follows:

• We propose a novel spatial-temporal joint graph convolution network to directly capture spatial-temporal correlations. Moreover, we introduce inception with graph attention to adaptively model graph heterogeneity.

• We propose to combine a sequence decoder and a short-term decoder for multi-step forecasting, modeling direct relationships between historical and future time steps to alleviate error propagation.

• We evaluate our model on three real-world datasets from two fields; the experimental results show that our model achieves the best performance among all eight baselines, with good generalization ability.

2 RELATED WORK. Spatial-temporal information in Euclidean space can be extracted with deep learning methods such as ConvLSTM (Xingjian et al., 2015) and PredRNN (Gowrishankar & Satyanarayana, 2009). However, most spatial-temporal data in real scenes is graph-structured, with complex and changeable relationships, and common time series prediction models such as HA and ARIMA (Williams & Hoel, 2003) cannot simply be migrated to such scenarios. Graph-based methods like DCRNN (Li et al., 2017) model traffic flow as a diffusion process on a directed graph, capturing spatial dependencies with bidirectional random walks and temporal dependencies with a DCGRU-based encoder-decoder sequence-to-sequence framework. STGCN (Yu et al., 2017) constructs an undirected graph of the traffic network and combines GCN and CNN to model spatial and temporal correlations, respectively. ASTGCN (Guo et al., 2019) introduced attention mechanisms to capture dynamic spatial and temporal dependencies. Similarly, GMAN (Zheng et al., 2020) uses temporal and spatial attention with spatial-temporal encoding to extract dynamic spatial-temporal correlations.
The above models extract spatial-temporal correlations with two separate modules, which cannot simultaneously learn the influence of a neighbor node at the same time step and the influence of the central node at historical moments. To address this problem, Bai et al. (2019) proposed STG2Seq to learn the influence of the spatial and temporal dimensions at the same time, relying purely on a graph convolution structure. However, all the above methods fail to take the heterogeneity of spatial-temporal data into account, that is, the scope over which each node influences its neighbors at future time steps differs. To solve this problem, Song et al. (2020) proposed STSGCN, with multiple modules for different time periods, to extract the heterogeneity in local spatial-temporal graphs. However, this method emphasizes local information, lacks global information extraction, and runs slowly due to its large number of parameters. Therefore, we propose Adaptive Spatial-Temporal Inception Graph Convolutional Networks: temporal and spatial correlations are extracted simultaneously by spatial-temporal convolution, node heterogeneity is modeled by the inception mechanism, and, considering the different influences of each node and time step, an attention mechanism adaptively adjusts the influence weights.

3 METHODOLOGY.

3.1 PRELIMINARIES. In this paper, we define G = (V, E, A) as a static undirected spatial graph network. V is the set of vertices, with |V| = N the number of vertices; E is the set of edges representing the connectivity between vertices; and A ∈ R^{N×N} is the adjacency matrix of the graph G, where A_{v_i,v_j} represents the connection between nodes v_i and v_j. The graph signal matrix X_t ∈ R^{N×C}, where t denotes the time step and C the number of features per vertex, represents the observations of graph G at time step t.

Problem Studied. Given the graph signal matrices of T historical time steps χ = (X_{t_1}, X_{t_2}, ..., X_{t_T}) ∈ R^{T×N×C}, our goal is to predict the graph signal matrices of the next M time steps Ŷ = (X̂_{t_{T+1}}, X̂_{t_{T+2}}, ..., X̂_{t_{T+M}}) ∈ R^{M×N×C}. In other words, we need to learn a mapping function F_θ from the historical time steps to the future time steps:

$$(\hat{X}_{t_{T+1}}, \hat{X}_{t_{T+2}}, \ldots, \hat{X}_{t_{T+M}}) = F_\theta(X_{t_1}, X_{t_2}, \ldots, X_{t_T}) \qquad (1)$$

where θ represents the learnable parameters of our model. A shape-level sketch of this interface is given below.
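To make the tensor shapes in Eq. (1) concrete, the following sketch (our own illustration, not the authors' code) fixes the interface of the mapping F_θ; the dimension values and the naive persistence baseline inside are placeholders for a trained model such as ASTI-GCN.

```python
import numpy as np

# Dimensions follow the problem statement: T historical steps of an N-node
# graph with C features per node go in, M future steps come out.
N, C, T, M = 207, 3, 12, 12

def forecast(x_hist: np.ndarray) -> np.ndarray:
    """Stand-in for the learned mapping F_theta of Eq. (1).

    Maps historical observations (T, N, C) to predictions (M, N, C);
    here we simply repeat the last observed frame (persistence baseline).
    """
    assert x_hist.shape == (T, N, C)
    return np.repeat(x_hist[-1:], M, axis=0)

y_hat = forecast(np.random.rand(T, N, C))
print(y_hat.shape)  # (12, 207, 3)
```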
3.2 ARCHITECTURE. The architecture of the proposed ASTI-GCN is shown in Figure 2(a). Its main ideas can be summarized as follows: (1) we propose spatial-temporal joint graph convolution to directly extract spatial-temporal correlations; (2) we build the Spatial-Temporal Inception Graph Convolutional Module (STI-GCM) to adaptively model graph heterogeneity; (3) we combine a short-term decoder with a sequence decoder to achieve accurate multi-step forecasts.

3.3 SPATIAL-TEMPORAL INCEPTION-GCM.

Spatial-temporal joint graph convolution. To extract spatial and temporal correlations simultaneously, we propose spatial-temporal joint graph convolution, which we construct from graph convolution in the spectral domain. Spectral graph convolution uses the graph Fourier basis, obtained from the eigenvalue decomposition of the Laplacian matrix L, to transform graph signals from the spatial to the spectral domain. Since the eigenvalue decomposition of L is expensive for large graphs, the Chebyshev polynomial T_k(x) is used as an approximation, which also reduces the number of parameters and the computational complexity. The spectral graph convolution can then be written as (Yu et al., 2017; Guo et al., 2019; Kipf & Welling, 2016):

$$\Theta *_G x = \Theta(L)\,x \approx \sum_{k=0}^{K-1} \theta_k\, T_k(\tilde{L})\, x \qquad (2)$$

where $*_G$ is the graph convolution operator, Θ is the graph convolution kernel, x ∈ R^N is the graph signal, and T_k(L̃) ∈ R^{N×N} is the Chebyshev polynomial of order k evaluated at the scaled Laplacian L̃ = (2/λ_max) L − I_N, with L the graph Laplacian matrix, λ_max its largest eigenvalue, and I_N the identity matrix (Yu et al., 2017); θ_k is the coefficient of the k-th order polynomial.

Based on spectral graph convolution, we propose spatial-temporal joint graph convolution. First, the K-hop terms T_k(L̃) are concatenated to form the furthest receptive field in the spatial dimension. Then we construct the spatial-temporal joint graph convolution kernel Θ_{s,t} ∈ R^{K_t×K_s×C×F}, where K_t is the kernel size in the temporal dimension, K_s the kernel size in the spatial dimension, C the number of input feature dimensions, and F the number of filters. The kernel Θ_{s,t} thus has a local spatial-temporal receptive field of K_t × K_s, and K_s should be smaller than K (K_s < K) because the largest graph convolution receptive field is K hops. The spatial-temporal joint graph convolution is formulated as:

$$T_K(\tilde{L}) = \mathrm{Concat}\big(T_0(\tilde{L}), T_1(\tilde{L}), \ldots, T_{K-1}(\tilde{L})\big) \qquad (3)$$

$$X_{out} = \Theta_{s,t} * X = \Theta_{s,t}\, T_K(\tilde{L})\, X \qquad (4)$$

where T_K(L̃) ∈ R^{K×N×N} is the concatenation of all Chebyshev polynomials up to hop K−1, ∗ is the convolution operation between Θ_{s,t} and X, and X ∈ R^{N×T×C} is the spatial-temporal signal of the input graph with T input time steps. After the spatial-temporal joint graph convolution, the output is X_out ∈ R^{N×(T−K_t+1)×(K−K_s+1)×F}. Besides, since neighbors influence the central node differently, we apply a learnable spatial mask matrix W_mask ∈ R^{N×N} (Song et al., 2020) to adjust the graph adjacency relationships and assign weights to different neighbors. A minimal sketch of Eqs. (2)-(4) follows.
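As a concrete reference for Eqs. (2)-(4), the following NumPy sketch builds the stacked Chebyshev basis and applies a joint (time, hop) kernel with 'valid' padding. It is a minimal illustration under our own simplifying assumptions (random kernel, no learnable mask W_mask, no training loop), not the authors' implementation.

```python
import numpy as np

def cheb_basis(A: np.ndarray, K: int) -> np.ndarray:
    """Stacked Chebyshev polynomials T_0..T_{K-1} of the scaled Laplacian (Eq. 3)."""
    N = A.shape[0]
    L = np.diag(A.sum(axis=1)) - A            # combinatorial graph Laplacian
    lam_max = np.linalg.eigvalsh(L).max()
    L_t = (2.0 / lam_max) * L - np.eye(N)     # scaled Laplacian
    T = [np.eye(N), L_t]
    for _ in range(2, K):
        T.append(2.0 * L_t @ T[-1] - T[-2])   # Chebyshev recurrence
    return np.stack(T[:K])                    # (K, N, N)

def st_joint_gconv(X, A, theta, K):
    """Spatial-temporal joint graph convolution of Eq. (4) with 'valid' padding.

    X: (N, T, C) input signal; theta: (Kt, Ks, C, F) joint kernel.
    Returns (N, T-Kt+1, K-Ks+1, F), the output shape stated in the text.
    """
    Kt, Ks, C, F = theta.shape
    TK = cheb_basis(A, K)                     # (K, N, N)
    XK = np.einsum('knm,mtc->nktc', TK, X)    # hop-expanded signal (N, K, T, C)
    N, _, T, _ = XK.shape
    out = np.zeros((N, T - Kt + 1, K - Ks + 1, F))
    for t in range(T - Kt + 1):               # slide over time ...
        for k in range(K - Ks + 1):           # ... and over hop scales
            patch = XK[:, k:k + Ks, t:t + Kt, :]              # (N, Ks, Kt, C)
            out[:, t, k, :] = np.einsum('nsqc,qscf->nf', patch, theta)
    return out

# toy usage: ring graph with N = 6 nodes
N, T, C, F, K = 6, 8, 2, 4, 3
A = np.roll(np.eye(N), 1, axis=1) + np.roll(np.eye(N), -1, axis=1)
out = st_joint_gconv(np.random.randn(N, T, C), A, np.random.randn(2, 2, C, F), K)
print(out.shape)  # (6, 7, 2, 4)
```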
Spatial-temporal inception-attention. Unlike pixels in an image, each node of the spatial-temporal graph usually represents a road, an eNodeB, etc. Affected by external factors such as geographic location and the surrounding environment, the data properties of different nodes vary, which is the heterogeneity. An intuitive solution is to learn a separate model for each node, but this would require many parameters and generalize poorly, so we take another approach. We view heterogeneity as manifesting in the differences between the local spatial-temporal receptive fields of the graph nodes, which result from the nodes' various properties and local spatial structures. Inspired by (Song et al., 2020; Zheng et al., 2020; Zhou et al., 2018; Vaswani et al., 2017), we apply a learnable graph node embedding S_e ∈ R^{N×E} to represent the properties of each node. Meanwhile, we introduce inception (Szegedy et al., 2015) to extract multi-scale spatial-temporal correlations through spatial-temporal joint graph convolution. We then combine graph attention with inception to achieve node-level attention for modeling the heterogeneity.

First, we implement inception, as shown in Figure 2(b). For example, the 3×2 block involves a kernel θ_{s,t} ∈ R^{3×2×C×F}, so it extracts the spatial-temporal correlations of a node and its one-hop neighbors over three adjacent time steps in a single layer; achieving the same receptive field requires two STSGCM layers in STSGCN (Song et al., 2020). We use padding when implementing inception, so after B branches the output of the inception module is C_out ∈ R^{N×T×K×(F×B)}, where the number of output filters of each branch is set to be the same.

Then we combine this with graph node attention, which has been widely used (Vaswani et al., 2017). We use Q = S_e W_q, with W_q ∈ R^{E×F}, to obtain the queries of the graph nodes. For each branch of the inception module, following SKNet (Li et al., 2019), we apply global pooling $C_g = \sum_{i=1}^{T}\sum_{j=1}^{K} C_{out}$, with C_g ∈ R^{N×(F×B)}, to obtain the corresponding keys of the branches (one could also apply a transform W_k ∈ R^{F×F} as in (Vaswani et al., 2017); we omit it for simplicity). We then compute the attention by S = QK^T (Vaswani et al., 2017). Taking a graph node v_i as an example, $S_{v_i,b} = \frac{q_{v_i} \cdot c_{g,v_i,b}}{\sqrt{F}}$, with q_{v_i} ∈ R^{1×F} and c_{g,v_i,b} ∈ R^{1×F}, denotes the attention between v_i and branch b, whose pooled output C_{g,v_i,b} represents the corresponding spatial-temporal receptive field. We then concatenate the inception branch results, rescaled by the attention scores, to obtain the output. The calculation is formulated as follows:

$$\alpha_{v_i,b} = \frac{\exp(S_{v_i,b})}{\sum_{b_c=1}^{B}\exp(S_{v_i,b_c})} \qquad (5)$$

$$Att_{v_i} = \Vert_{b=1}^{B}\,\{\alpha_{v_i,b} \cdot C_{out,v_i,b}\} \qquad (6)$$

where α_{v_i,b} ∈ R, C_{out,v_i,b} ∈ R^{T×K×F}, and Att_{v_i} ∈ R^{T×K×F×B} is the output of node v_i from the inception-attention block. The final output of the spatial-temporal inception-attention block is therefore C_att ∈ R^{N×T×K×F×B}.

STI-GCM output layer. We use spatial convolution to generate the output of STI-GCM. We first reshape C_att into C_att ∈ R^{N×T×K×(F·B)}. Next, a learnable weight matrix W_s ∈ R^{K×(F·B)×(F·B)} converts C_att to C_satt ∈ R^{N×T×(F·B)}. We also employ SE-Net (Hu et al., 2020) to model the channel attention. Finally, the output is converted to C_statt ∈ R^{N×T×F} by a fully connected layer with W_o ∈ R^{(F·B)×F}. The process can be formulated as C_statt = C_att W_s W_o. A sketch of the branch-attention computation in Eqs. (5)-(6) follows.
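The branch attention of Eqs. (5)-(6) can be summarized in a few lines. The sketch below is our own illustration: random tensors stand in for learned embeddings and branch outputs, and the optional key transform W_k is omitted, as in the text.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)   # numerically stabilized softmax
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def inception_attention(C_out, S_e, W_q):
    """Per-node attention over inception branches (Eqs. 5-6).

    C_out: (B, N, T, K, F) outputs of the B branches (same F for each).
    S_e:   (N, E) node embeddings;  W_q: (E, F) query projection.
    Returns (N, T, K, F, B): branch outputs rescaled per node.
    """
    B, N, T, K, F = C_out.shape
    Q = S_e @ W_q                                           # node queries (N, F)
    keys = C_out.sum(axis=(2, 3))                           # pool over (T, K) -> (B, N, F)
    scores = np.einsum('nf,bnf->nb', Q, keys) / np.sqrt(F)  # S_{v_i, b}
    alpha = softmax(scores, axis=1)                         # Eq. (5)
    return np.einsum('nb,bntkf->ntkfb', alpha, C_out)       # Eq. (6)

# toy usage
B, N, T, K, F, E = 4, 10, 12, 3, 8, 16
att = inception_attention(np.random.randn(B, N, T, K, F),
                          np.random.randn(N, E), np.random.randn(E, F))
print(att.shape)  # (10, 12, 3, 8, 4)
```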
This paper proposes a spatial-temporal graph neural network designed to adaptively capture complex spatial-temporal dependencies. The authors further design a spatial-temporal attention module that aims to capture multi-scale correlations. For multi-step rather than one-step prediction, they propose a sequence transform block to address the problem of error accumulation. The authors conduct experiments on three real-world datasets (highway traffic and mobile traffic), which show that their method achieves the best performance.
SP:60d704b4a1555e24c09963617c879a15d8f3c805
The Recurrent Neural Tangent Kernel
1 INTRODUCTION. The overparameterization of modern deep neural networks (DNNs) has resulted not only in remarkably good generalization performance on unseen data (Novak et al., 2018; Neyshabur et al., 2019; Belkin et al., 2019) but also in guarantees that gradient descent learning can find the global minimum of their highly nonconvex loss functions (Du et al., 2019b; Allen-Zhu et al., 2019b;a; Zou et al., 2018; Arora et al., 2019b). From these successes, a natural question arises: what happens when we take overparameterization to the limit by allowing the width of a DNN's hidden layers to go to infinity? Surprisingly, the analysis of such an (impractical) DNN becomes analytically tractable. Indeed, recent work has shown that the training dynamics of (infinite-width) DNNs under gradient flow are captured by a constant kernel called the Neural Tangent Kernel (NTK), under which the network's predictions evolve according to a linear ordinary differential equation (ODE) (Jacot et al., 2018; Lee et al., 2019; Arora et al., 2019a). Every DNN architecture and parameter initialization produces a distinct NTK. The original NTK was derived for the Multilayer Perceptron (MLP) (Jacot et al., 2018) and was soon followed by kernels derived from Convolutional Neural Networks (CNTK) (Arora et al., 2019a; Yang, 2019a), Residual DNNs (Huang et al., 2020), and Graph Convolutional Neural Networks (GNTK) (Du et al., 2019a). A general strategy for obtaining the NTK of any architecture is provided in (Yang, 2020a).

In this paper, we extend the NTK concept to the important class of overparametrized Recurrent Neural Networks (RNNs), a fundamental DNN architecture for processing sequential data. We show that an RNN in its infinite-width limit converges to a kernel that we dub the Recurrent Neural Tangent Kernel (RNTK). The RNTK provides high performance for various machine learning tasks, and an analysis of its properties provides useful insights into the behavior of RNNs in the overparametrized regime. In particular, we derive and study the RNTK to answer the following theoretical questions:

Q: Can the RNTK extract long-term dependencies between two data sequences? RNNs are known to underperform at learning long-term dependencies due to vanishing or exploding gradients (Bengio et al., 1994). Attempted ameliorations have included orthogonal weights (Arjovsky et al., 2016; Jing et al., 2017; Henaff et al., 2016) and gating, as in Long Short-Term Memory (LSTM) (Hochreiter & Schmidhuber, 1997) and Gated Recurrent Unit (GRU) (Cho et al., 2014) RNNs. We demonstrate that the RNTK can detect long-term dependencies with proper initialization of the hyperparameters, and moreover we show how the dependencies are extracted through time under different hyperparameter choices.

Q: Do the recurrent weights of the RNTK reduce its representation power compared to other NTKs? An attractive property of an RNN that is shared by the RNTK is that it can deal with sequences of different lengths via weight sharing through time. This reduces the number of learnable parameters and thus stabilizes training, at the cost of reduced representation power. We prove the surprising fact that employing tied vs. untied weights in an RNN does not impact the analytical form of the RNTK.

Q: Does the RNTK generalize well?
A recent study has revealed that an SVM classifier with the NTK, CNTK, or GNTK kernels outperforms other classical kernel-based classifiers and trained finite DNNs on small data sets (typically fewer than 5000 training samples) (Lee et al., 2020; Arora et al., 2019a; 2020; Du et al., 2019a). We extend these results to the RNTK, demonstrating that it outperforms a variety of classic kernels, NTKs, and finite RNNs on time series data sets in both classification and regression tasks. Carefully designed experiments with data of varying lengths demonstrate that the RNTK's advantage over other techniques grows as the difference in sequence lengths increases. These results complement the empirical comparisons of finite DNNs, the NTK, the CNTK, and the GNTK in (Arora et al., 2019a; 2020; Du et al., 2019a; Lee et al., 2020) by showing that the performance ranking of such methods depends on the DNN architecture employed.

We summarize our contributions as follows: [C1] We derive the analytical form of the RNTK of an overparametrized RNN at initialization, using rectified linear unit (ReLU) and error function (erf) nonlinearities, for arbitrary data lengths and numbers of layers (Section 3.1). [C2] We prove that the RNTK remains constant during (overparametrized) RNN training and that the training dynamics simplify to a set of ordinary differential equations (ODEs) (Section 3.2). [C3] When the input data sequences are of equal length, we show that the RNTKs of weight-tied and weight-untied RNNs converge to the same RNTK (Section 3.3). [C4] Leveraging our analytical formulation of the RNTK, we empirically demonstrate how correlations between data at different times are weighted by the function learned by an RNN for different sets of hyperparameters. We also offer practical suggestions for choosing the RNN hyperparameters for deep information propagation through time (Section 3.4). [C5] We demonstrate that the RNTK is eminently practical by showing its superiority over classical kernels, NTKs, and finite RNNs in exhaustive experiments on time series classification and regression with both synthetic and 56 real-world data sets (Section 4).

2 BACKGROUND AND RELATED WORK.

Notation. We denote [n] = {1, ..., n} and I_d the identity matrix of size d. [A]_{i,j} represents the (i,j)-th entry of a matrix, and similarly [a]_i represents the i-th entry of a vector. We use φ(·): R → R for the activation function, acting coordinate-wise on a vector, and φ′ for its derivative. We will often use the rectified linear unit (ReLU) φ(x) = max(0, x) and the error function (erf) $\phi(x) = \frac{2}{\sqrt{\pi}} \int_0^x e^{-z^2}\,dz$. N(µ, Σ) represents the multidimensional Gaussian distribution with mean vector µ and covariance matrix Σ.

Recurrent Neural Networks (RNNs). Given an input sequence $x = \{x_t\}_{t=1}^{T}$ of length T with data at time t, $x_t \in \mathbb{R}^m$, a simple RNN (Elman, 1990) performs the following recursive computation at each layer ℓ and each time step t:

$$g^{(\ell,t)}(x) = W^{(\ell)} h^{(\ell,t-1)}(x) + U^{(\ell)} h^{(\ell-1,t)}(x) + b^{(\ell)}, \qquad h^{(\ell,t)}(x) = \phi\big(g^{(\ell,t)}(x)\big),$$

where W^{(ℓ)} ∈ R^{n×n} and b^{(ℓ)} ∈ R^n for ℓ ∈ [L], U^{(1)} ∈ R^{n×m}, and U^{(ℓ)} ∈ R^{n×n} for ℓ ≥ 2 are the RNN parameters. g^{(ℓ,t)}(x) is the pre-activation vector at layer ℓ and time step t, and h^{(ℓ,t)}(x) is the after-activation (hidden state). For the input layer ℓ = 0 we define h^{(0,t)}(x) := x_t, and the initial hidden state h^{(ℓ,0)}(x) at each layer ℓ must be set to start the RNN recursive computation. The output of an L-hidden-layer RNN with a linear readout layer is f_θ(x) = V h^{(L,T)}(x), where V ∈ R^{d×n}. Figure 1 visualizes an RNN unrolled through time. A minimal sketch of this recursion follows.
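The recursion above is short enough to state in code. The following NumPy sketch is our own illustration: the zero initial hidden state and the 1/√n weight scale are common choices made here for demonstration, not necessarily the paper's exact parameterization.

```python
import numpy as np

def rnn_forward(x, Ws, Us, bs, V, phi=lambda z: np.maximum(z, 0.0)):
    """Simple (Elman) RNN following the recursion above; phi defaults to ReLU.

    x: (T, m) input sequence. Ws[l]: (n, n); Us[0]: (n, m), Us[l>0]: (n, n);
    bs[l]: (n,); V: (d, n). Initial hidden states h^(l,0) are zeros here.
    """
    below = x                                  # h^(l-1, t): sequence from layer below
    for W, U, b in zip(Ws, Us, bs):            # layers l = 1..L
        h, outs = np.zeros(W.shape[0]), []
        for t in range(below.shape[0]):        # time steps t = 1..T
            g = W @ h + U @ below[t] + b       # pre-activation g^(l, t)
            h = phi(g)                         # hidden state h^(l, t)
            outs.append(h)
        below = np.stack(outs)
    return V @ below[-1]                       # linear readout f_theta(x) = V h^(L, T)

# toy usage: L = 2 layers, width n = 16, input dim m = 4, output dim d = 1
rng = np.random.default_rng(0)
n, m, d, T = 16, 4, 1, 5
Ws = [rng.normal(size=(n, n)) * n**-0.5 for _ in range(2)]
Us = [rng.normal(size=(n, m)) * m**-0.5, rng.normal(size=(n, n)) * n**-0.5]
bs = [np.zeros(n), np.zeros(n)]
V = rng.normal(size=(d, n)) * n**-0.5
print(rnn_forward(rng.normal(size=(T, m)), Ws, Us, bs, V))
```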
Neural Tangent Kernel (NTK). Let $f_\theta(x) \in \mathbb{R}^d$ be the output of a DNN with parameters θ. For two input data sequences x and x′, the NTK is defined as (Jacot et al., 2018)

$$\hat{\Theta}_s(x, x') = \big\langle \nabla_{\theta_s} f_{\theta_s}(x),\, \nabla_{\theta_s} f_{\theta_s}(x') \big\rangle,$$

where $f_{\theta_s}$ and $\theta_s$ are the network output and parameters at training time s. Let X and Y be the sets of training inputs and targets, $\ell(\hat{y}, y): \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}^+$ the loss function, and $\mathcal{L} = \frac{1}{|X|} \sum_{(x,y) \in X \times Y} \ell(f_{\theta_s}(x), y)$ the empirical loss. The evolution of the parameters $\theta_s$ and of the network output $f_{\theta_s}$ on a test input under gradient descent with infinitesimal step size (a.k.a. gradient flow) and learning rate η is given by

$$\frac{\partial \theta_s}{\partial s} = -\eta\, \nabla_{\theta_s} f_{\theta_s}(X)^{\top}\, \nabla_{f_{\theta_s}(X)} \mathcal{L} \qquad (1)$$

$$\frac{\partial f_{\theta_s}(x)}{\partial s} = -\eta\, \nabla_{\theta_s} f_{\theta_s}(x)\, \nabla_{\theta_s} f_{\theta_s}(X)^{\top}\, \nabla_{f_{\theta_s}(X)} \mathcal{L} = -\eta\, \hat{\Theta}_s(x, X)\, \nabla_{f_{\theta_s}(X)} \mathcal{L}. \qquad (2)$$

In general $\hat{\Theta}_s(x, x')$, hereafter referred to as the empirical NTK, changes over time during training, making the analysis of the training dynamics difficult. When $f_{\theta_s}$ corresponds to an infinite-width MLP, Jacot et al. (2018) showed that $\hat{\Theta}_s(x, x')$ converges to a limiting kernel at initialization and stays constant during training, i.e.,

$$\lim_{n\to\infty} \hat{\Theta}_s(x, x') = \lim_{n\to\infty} \hat{\Theta}_0(x, x') := \Theta(x, x') \quad \forall s,$$

which is equivalent to replacing the outputs of the DNN by their first-order Taylor expansion in parameter space (Lee et al., 2019). With a mean-square error (MSE) loss function, the training dynamics in (1) and (2) simplify to a set of linear ODEs, which coincide with the training dynamics of kernel ridge regression with respect to the NTK when the ridge term goes to zero. A nonzero ridge regularization can be conjured up by adding a regularization term $\frac{\lambda^2}{2} \|\theta_s - \theta_0\|_2^2$ to the empirical loss (Hu et al., 2020).
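For a finite-width network, the empirical NTK defined above can be evaluated directly with automatic differentiation. The PyTorch sketch below is our own illustration for a toy single-layer, scalar-output RNN; it computes Θ̂_0(x, x′) at initialization and is distinct from the closed-form infinite-width RNTK derived in the paper.

```python
import torch

def empirical_ntk(f, params, x1, x2):
    """Empirical NTK  <grad_theta f(x1), grad_theta f(x2)>  for scalar-output f."""
    def flat_grad(x):
        grads = torch.autograd.grad(f(params, x), params)
        return torch.cat([g.reshape(-1) for g in grads])
    return flat_grad(x1) @ flat_grad(x2)

# toy finite-width RNN; all names and scales here are our own choices
T, m, n = 5, 3, 64
W = (torch.randn(n, n) / n**0.5).requires_grad_()
U = (torch.randn(n, m) / m**0.5).requires_grad_()
v = (torch.randn(n) / n**0.5).requires_grad_()

def f(params, x):
    W, U, v = params
    h = torch.zeros(n)
    for t in range(x.shape[0]):
        h = torch.relu(W @ h + U @ x[t])    # single-layer Elman recursion
    return v @ h                            # scalar linear readout

x1, x2 = torch.randn(T, m), torch.randn(T, m)
print(empirical_ntk(f, [W, U, v], x1, x2).item())
```

As the width n grows, this quantity concentrates over random draws of the parameters, which is the convergence to the limiting kernel discussed above.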
This paper extends the NTK to RNNs to explain the behavior of RNNs in the overparametrized case. It is a good extension study, and it is interesting to see that an RNN in the infinite-width limit converges to a kernel. The paper proves that the RNTK formula is the same whether the weights are shared or not. The proposed sensitivity analysis for computationally friendly RNTK hyperparameter tuning is also insightful.
SP:a99af0f9e848f4f9068ad407612745a85a262644