paper_name: string (length 11 to 170)
text: string (length 8.07k to 307k)
summary: string (length 152 to 6.16k)
paper_id: string (length 43)
Disagreement-Regularized Imitation Learning
1 INTRODUCTION. Training artificial agents to perform complex tasks is essential for many applications in robotics, video games and dialogue. If success on the task can be accurately described using a reward or cost function, reinforcement learning (RL) methods offer an approach to learning policies which has proven to be successful in a wide variety of applications (Mnih et al., 2015; 2016; Lillicrap et al., 2016; Hessel et al., 2018). However, in other cases the desired behavior may only be roughly specified and it is unclear how to design a reward function to characterize it. For example, training a video game agent to adopt more human-like behavior using RL would require designing a reward function which characterizes behaviors as more or less human-like, which is difficult. Imitation learning (IL) offers an elegant approach whereby agents are trained to mimic the demonstrations of an expert rather than optimizing a reward function. Its simplest form consists of training a policy to predict the expert's actions from states in the demonstration data using supervised learning. While appealingly simple, this approach suffers from the fact that the distribution over states observed at execution time can differ from the distribution observed during training. Minor errors which initially produce small deviations become magnified as the policy encounters states further and further from its training distribution. This phenomenon, initially noted in the early work of Pomerleau (1989), was formalized by Ross & Bagnell (2010), who proved a quadratic O(T^2) bound on the regret and showed that this bound is tight. The subsequent work of Ross et al. (2011) showed that if the policy is allowed to further interact with the environment and make queries to the expert policy, it is possible to obtain a linear bound on the regret. However, the ability to query an expert can often be a strong assumption.

In this work, we propose a new and simple algorithm called DRIL (Disagreement-Regularized Imitation Learning) to address the covariate shift problem in imitation learning, in the setting where the agent is allowed to interact with its environment. Importantly, the algorithm does not require any additional interaction with the expert. It operates by training an ensemble of policies on the demonstration data, and using the disagreement in their predictions as a cost which is optimized through RL together with a supervised behavioral cloning cost. The motivation is that the policies in the ensemble will tend to agree on the set of states covered by the expert, leading to low cost, but are more likely to disagree on states not covered by the expert, leading to high cost. The RL cost thus guides the agent back towards the distribution of the expert, while the supervised cost ensures that it mimics the expert within the expert's distribution. Our theoretical results show that, subject to realizability and optimization oracle assumptions (for the analysis we assume the action space is discrete, but the state space can be large or infinite), our algorithm obtains an O(κT) regret bound, where κ is a measure which quantifies a tradeoff between the concentration of the demonstration data and the diversity of the ensemble outside the demonstration data.
We evaluate DRIL empirically across multiple pixel-based Atari environments and continuous control tasks, and show that it matches or significantly outperforms behavioral cloning and generative adversarial imitation learning, often recovering expert performance with only a few trajectories.

2 PRELIMINARIES. We consider an episodic finite-horizon MDP in this work. Denote by S the state space, A the action space, and Π the class of policies the learner is considering. Let T denote the task horizon and π* the expert policy whose behavior the learner is trying to mimic. For any policy π, let d_π denote the distribution over states induced by following π. Denote by C(s, a) the expected immediate cost of performing action a in state s, which we assume is bounded in [0, 1]. In the imitation learning setting, we do not necessarily know the true costs C(s, a); instead, we observe expert demonstrations. Our goal is to find a policy π which minimizes an observed surrogate loss ℓ between its actions and the actions of the expert under its induced distribution of states, i.e.

π̂ = argmin_{π∈Π} E_{s∼d_π}[ℓ(π(s), π*(s))]   (1)

For the following, we will assume ℓ is the total variation distance (denoted by ‖·‖), which is an upper bound on the 0-1 loss. Our goal is thus to minimize the following quantity, which represents the distance between the actions taken by our policy π and the expert policy π*:

J_exp(π) = E_{s∼d_π}[‖π(·|s) − π*(·|s)‖]   (2)

Denote by Q^π_t(s, a) the standard Q-function of the policy π, defined as Q^π_t(s, a) = E[∑_{τ=t}^{T} C(s_τ, a_τ) | (s_t, a_t) = (s, a), a_τ ∼ π]. The following result shows that if ℓ is an upper bound on the 0-1 loss and C satisfies certain smoothness conditions, then minimizing this loss to within ε translates into an O(T) regret bound on the true task cost J_C(π) = E_{s,a∼d_π}[C(s, a)]:

Theorem 1. (Ross et al., 2011) If π satisfies J_exp(π) = ε, and Q^{π*}_{T−t+1}(s, a) − Q^{π*}_{T−t+1}(s, π*(s)) ≤ u for all time steps t, actions a and states s reachable by π, then J_C(π) ≤ J_C(π*) + uεT.

Unfortunately, it is often not possible to optimize J_exp directly, since it requires evaluating the expert policy on the states induced by following the current policy. The supervised behavioral cloning cost J_BC, which is computed on states induced by the expert, is often used instead:

J_BC(π) = E_{s∼d_{π*}}[‖π*(·|s) − π(·|s)‖]   (3)

Minimizing this loss to within ε yields a quadratic regret bound:

Theorem 2. (Ross & Bagnell, 2010) Let J_BC(π) = ε, then J_C(π) ≤ J_C(π*) + εT².

Furthermore, this bound is tight: as we will discuss later, there exist simple problems which match the worst-case lower bound.

3 ALGORITHM. Our algorithm is motivated by two criteria: i) the policy should act similarly to the expert within the expert's data distribution, and ii) the policy should move towards the expert's data distribution if it is outside of it. These two criteria are addressed by combining two losses: a standard behavior cloning loss, and an additional loss which represents the variance over the outputs of an ensemble Π_E = {π_1, ..., π_E} of policies trained on the demonstration data D. We call this the uncertainty cost, which is defined as:

C_U(s, a) = Var_{π∼Π_E}(π(a|s)) = (1/E) ∑_{i=1}^{E} ( π_i(a|s) − (1/E) ∑_{j=1}^{E} π_j(a|s) )²

The motivation is that the variance over plausible policies is high outside the expert's distribution, since the data is sparse, but it is low inside the expert's distribution, since the data there is dense. Minimizing this cost encourages the policy to return to regions of dense coverage by the expert. Intuitively, this is what we would expect the expert policy π* to do as well. The total cost which the algorithm optimizes is given by:

J_alg(π) = E_{s∼d_{π*}}[‖π*(·|s) − π(·|s)‖] + E_{s∼d_π, a∼π(·|s)}[C_U(s, a)]

where the first term is the behavioral cloning cost J_BC(π) and the second is the uncertainty cost J_U(π). The first term is computed over states generated by the expert policy, of which the demonstration data D is a representative sample. The second term is computed over the distribution of states generated by the current policy and can be optimized using policy gradient. Note that the demonstration data is fixed, and the ensemble can be trained once offline. We then interleave the supervised behavioral cloning updates and the policy gradient updates which minimize the variance of the ensemble. The full algorithm is shown in Algorithm 1.

Algorithm 1 Disagreement-Regularized Imitation Learning (DRIL)
1: Input: expert demonstration data D = {(s_i, a_i)}_{i=1}^{N}
2: Initialize policy π and policy ensemble Π_E = {π_1, ..., π_E}
3: for e = 1, ..., E do
4:   Sample D_e ∼ D with replacement, with |D_e| = |D|
5:   Train π_e to minimize J_BC(π_e) on D_e to convergence
6: end for
7: for i = 1, 2, ... do
8:   Perform one gradient update to minimize J_BC(π) using a minibatch from D
9:   Perform one step of policy gradient to minimize E_{s∼d_π, a∼π(·|s)}[C_U^clip(s, a)]
10: end for

We also found that dropout (Srivastava et al., 2014), which has been proposed as an approximate form of ensembling, worked well (see Appendix D). In practice, for the supervised loss we optimize the KL divergence between the actions predicted by the policy and the expert actions, which is an upper bound on the total variation distance due to Pinsker's inequality. We also found it helpful to use a clipped uncertainty cost:

C_U^clip(s, a) = −1 if C_U(s, a) ≤ q, and +1 otherwise,

where the threshold q is a top quantile of the raw uncertainty costs computed over the demonstration data. The threshold q defines a normal range of uncertainty based on the demonstration data, and values above this range incur a positive cost (or negative reward). The RL cost can be optimized using any policy gradient method. In our experiments we used advantage actor-critic (A2C) (Mnih et al., 2016) or PPO (Schulman et al., 2017), which estimate the expected cost using rollouts from multiple parallel actors all sharing the same policy (see Appendix C for details). We note that model-based RL methods could in principle be used as well if sample efficiency is a constraint.
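To make the clipped uncertainty cost concrete, below is a minimal NumPy sketch under simplifying assumptions: discrete states and actions, and an ensemble stored as explicit probability tables rather than the bootstrap-trained neural policies described above. The names (`uncertainty_cost`, `clip_threshold`, `top_quantile=0.98`) and the toy data are illustrative, not taken from the authors' implementation, and the interleaved behavioral-cloning and policy-gradient updates of Algorithm 1 are not shown.

```python
import numpy as np

def uncertainty_cost(ensemble_probs, s, a):
    """C_U(s, a): variance across ensemble members of the probability
    they assign to action a in state s.

    ensemble_probs: array of shape (E, n_states, n_actions) holding
    pi_i(a|s) for each of the E bootstrap-trained policies.
    """
    p = ensemble_probs[:, s, a]          # shape (E,)
    return np.var(p)                     # population variance, matching the formula above

def clip_threshold(ensemble_probs, demos, top_quantile=0.98):
    """q: a top quantile of the raw uncertainty costs over the demonstration data."""
    costs = [uncertainty_cost(ensemble_probs, s, a) for s, a in demos]
    return np.quantile(costs, top_quantile)

def clipped_uncertainty_cost(ensemble_probs, s, a, q):
    """C_U^clip(s, a) in {-1, +1}: reward staying inside the expert's normal
    range of disagreement, penalize leaving it."""
    return -1.0 if uncertainty_cost(ensemble_probs, s, a) <= q else +1.0

# Toy usage: 3 bootstrap policies over 4 states and 2 actions.
rng = np.random.default_rng(0)
ensemble = rng.dirichlet(np.ones(2), size=(3, 4))   # shape (E, S, A), rows sum to 1
demos = [(0, 0), (1, 1), (2, 0)]                    # hypothetical expert (state, action) pairs
q = clip_threshold(ensemble, demos)
print(clipped_uncertainty_cost(ensemble, 3, 1, q))
```

In the full method this cost is simply handed to the policy-gradient learner (A2C or PPO) as the per-step reward signal, alongside the supervised behavioral cloning updates.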
The paper aims to address the covariate shift issue of behavior cloning (BC). The main idea is to learn a policy by minimizing a BC loss together with an uncertainty loss, defined as the variance of the policy posterior induced by the demonstrations. To approximate this posterior, the paper uses an ensemble approach, where an ensemble of policies is learned from the demonstrations. This leads to a method called disagreement-regularized imitation learning (DRIL). The paper proves, for a tabular setting, that DRIL has a regret bound that is linear in the horizon, which is better than the quadratic regret bound of BC. Empirical evaluation shows that DRIL outperforms BC in both discrete and continuous control tasks, and it outperforms GAIL in discrete control tasks.
SP:5f026e00085a3f771abf068bd884e27a6f9d9e44
Disagreement-Regularized Imitation Learning
The paper proposes an imitation learning algorithm that combines behavioral cloning with a regularizer that encourages the agent to visit states similar to the demonstrated states. The key idea is to use ensemble disagreement to approximate uncertainty, and use RL to train the imitation agent to visit states in which an ensemble of cloned imitation policies is least uncertain about which action the expert would take. Experiments on image-based Atari games show that the proposed method significantly outperforms BC and GAIL baselines in three games, and performs comparably or slightly better than the baselines in the remaining three games.
SP:5f026e00085a3f771abf068bd884e27a6f9d9e44
CAN ALTQ LEARN FASTER: EXPERIMENTS AND THEORY
Unlike the popular Deep Q-Network (DQN) learning, Alternating Q-learning (AltQ) does not fully fit a target Q-function at each iteration, and is generally known to be unstable and inefficient. Limited applications of AltQ mostly rely on substantially altering the algorithm architecture in order to improve its performance. Although Adam appears to be a natural solution, its performance in AltQ has rarely been studied before. In this paper, we first provide a solid exploration of how well AltQ performs with Adam. We then take a further step to improve the implementation by adopting the technique of parameter restart. More specifically, the proposed algorithms are tested on a batch of Atari 2600 games and exhibit superior performance to the DQN learning method. The convergence rate of a slightly modified version of the proposed algorithms is characterized under linear function approximation. To the best of our knowledge, this is the first theoretical study of Adam-type algorithms in Q-learning.

1 INTRODUCTION. Q-learning (Watkins & Dayan, 1992) is one of the most important model-free reinforcement learning (RL) algorithms, and has received considerable research attention in recent years (Bertsekas & Tsitsiklis, 1996; Even-Dar & Mansour, 2003; Hasselt, 2010; Lu et al., 2018; Achiam et al., 2019). When the state-action space is large or continuous, parametric approximation of the Q-function is often necessary. One remarkable success of parametric Q-learning in practice is its combination with deep learning, known as Deep Q-Network (DQN) learning (Mnih et al., 2013; 2015). It has been applied in a variety of domains including computer games (Bhatti et al., 2016), traffic control (Arel et al., 2010), recommendation systems (Zheng et al., 2018; Zhao et al., 2018), chemistry research (Zhou et al., 2017), etc. Its on-policy continuous variant (Silver et al., 2014) has also led to great achievements in robotic locomotion (Lillicrap et al., 2016). The DQN algorithm is performed in a nested-loop manner, where the outer loop follows a one-step update of the Q-function (via the empirical Bellman operator for Q-learning), and the inner loop runs a supervised learning process to fit the updated (i.e., target) Q-function with a neural network. In practice, the inner loop takes a sufficiently large number of iterations under a certain optimizer (e.g., stochastic gradient descent (SGD) or Adam) to fit the neural network well to the target Q-function. In contrast, a conventional Q-learning algorithm runs only one SGD step in each inner loop, in which case the overall algorithm updates the Q-function and fits the target Q-function alternately in each iteration. We refer to such a Q-learning algorithm with alternating updates as Alternating Q-learning (AltQ). Although significantly simpler in its update rule, AltQ is well known to be unstable and to perform poorly (Mnih et al., 2016). This is in part due to the fact that the inner loop does not fit the target Q-function sufficiently well. To fix this issue, Mnih et al. (2016) proposed a new exploration strategy and asynchronous sampling schemes over parallel computing units (rather than the simple replay sampling in DQN) in order for the AltQ algorithm to achieve comparable or better performance than DQN.
As another alternative, Knight & Lerner (2018) proposed a more involved natural gradient propagation for AltQ to improve its performance. All these schemes require more sophisticated designs or hardware support, which may leave AltQ at a disadvantage compared to the popular DQN, even when it performs better. This motivates us to ask the following first question.

• Q1: Can we design a simple and easy variant of the AltQ algorithm, which uses as simple a setup as DQN and does not introduce extra computational burden or heuristics, but still achieves better and more stable performance than DQN?

In this paper, we provide an affirmative answer by introducing novel lightweight designs for AltQ based on Adam. Although Adam appears to be a natural tool, its performance in AltQ has rarely been studied. Thus, we first provide a solid exploration of how well AltQ performs with Adam (Kingma & Ba, 2014), where the algorithm is referred to as AltQ-Adam. We then take a further step to improve the implementation of AltQ-Adam by adopting the technique of parameter restart (i.e., resetting the Adam parameters to their initial settings every few iterations), and refer to the new algorithm as AltQ-AdamR. This is the first time that restart has been applied to improve the performance of RL algorithms, although restart has been used in conventional optimization before. In a batch of 23 Atari 2600 games, our experiments show that both AltQ-Adam and AltQ-AdamR outperform the baseline performance of DQN by 50% on average. Furthermore, AltQ-AdamR effectively reduces the performance variance and achieves a much more stable learning process. In our experiments on linear quadratic regulator (LQR) problems, AltQ-AdamR converges even faster than the model-based value iteration (VI) solution. This is a rather surprising result given that model-based VI has been treated as the performance upper bound for Q-learning algorithms (including DQN) with target updates (Lewis & Vrabie, 2009; Yang et al., 2019). Regarding the theoretical analysis of AltQ algorithms, their convergence guarantees have been extensively studied (Melo et al., 2008; Chen et al., 2019b); more references are given in Section 1.1. However, all the existing studies focus on AltQ algorithms that take a simple SGD step. Such theory is not applicable to the proposed AltQ-Adam and AltQ-AdamR, which implement Adam-type updates. Thus, the second intriguing question we address here is as follows.

• Q2: Can we provide a convergence guarantee for AltQ-Adam and AltQ-AdamR or their slightly modified variants (if these two algorithms do not always converge by nature)?

It is well known in optimization that Adam does not always converge; instead, a slightly modified variant, AMSGrad, proposed in Reddi et al. (2018), has been widely accepted as an alternative for justifying the performance of Adam-type algorithms. Thus, our theoretical analysis here also focuses on such slightly modified variants, AltQ-AMSGrad and AltQ-AMSGradR, of the proposed algorithms. We show that under linear function approximation (which is the structure that the current tools for the analysis of Q-learning can handle), both AltQ-AMSGrad and AltQ-AMSGradR converge to the global optimal solution under standard assumptions for Q-learning. To the best of our knowledge, this is the first non-asymptotic convergence guarantee for Q-learning that incorporates Adam-type updates and momentum restart.
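As a concrete illustration of the alternating update with an Adam-type optimizer and parameter restart, here is a small self-contained sketch with linear function approximation. It is not the authors' code: the feature map, learning rate, restart period, and the toy chain MDP are assumptions made for the example; the update is the AMSGrad variant discussed above (no bias correction, following Reddi et al., 2018); and the projection and step-size conditions used in the paper's analysis are omitted.

```python
import numpy as np

class AMSGradState:
    """Moment estimates for an AMSGrad-style update (Reddi et al., 2018)."""
    def __init__(self, dim, beta1=0.9, beta2=0.999, eps=1e-8):
        self.m = np.zeros(dim)        # first-moment estimate
        self.v = np.zeros(dim)        # second-moment estimate
        self.v_hat = np.zeros(dim)    # running max of v (the AMSGrad modification)
        self.beta1, self.beta2, self.eps = beta1, beta2, eps

    def step(self, params, grad, lr):
        self.m = self.beta1 * self.m + (1 - self.beta1) * grad
        self.v = self.beta2 * self.v + (1 - self.beta2) * grad ** 2
        self.v_hat = np.maximum(self.v_hat, self.v)
        return params - lr * self.m / (np.sqrt(self.v_hat) + self.eps)

def altq_amsgrad_r(sample_transition, phi, n_actions, dim,
                   gamma=0.9, lr=1e-2, restart_every=100, n_steps=2000):
    """Alternating Q-learning with linear Q(s, a) = theta . phi(s, a):
    a single semi-gradient step toward the bootstrapped Bellman target per
    iteration (no inner fitting loop, no target network), with the optimizer
    state re-initialized every `restart_every` steps (the restart in the
    'R' variants)."""
    theta = np.zeros(dim)
    opt = AMSGradState(dim)
    for t in range(n_steps):
        if t > 0 and t % restart_every == 0:
            opt = AMSGradState(dim)                      # parameter restart
        s, a, r, s_next = sample_transition()
        q_sa = theta @ phi(s, a)
        target = r + gamma * max(theta @ phi(s_next, b) for b in range(n_actions))
        grad = -(target - q_sa) * phi(s, a)              # semi-gradient of 0.5 * (target - Q)^2
        theta = opt.step(theta, grad, lr)
    return theta

# Toy usage: a 2-state, 2-action deterministic chain with one-hot features.
rng = np.random.default_rng(0)

def phi(s, a):
    f = np.zeros(4)
    f[2 * s + a] = 1.0
    return f

def sample_transition():
    s, a = int(rng.integers(2)), int(rng.integers(2))
    s_next = (s + a) % 2
    r = 1.0 if (s, a) == (1, 1) else 0.0
    return s, a, r, s_next

theta = altq_amsgrad_r(sample_transition, phi, n_actions=2, dim=4)
print(theta.reshape(2, 2))   # learned Q-values: rows are states, columns are actions
```

Swapping `AMSGradState` for plain Adam (bias-corrected moments, no running max) gives the AltQ-Adam / AltQ-AdamR variants the experiments use; the structure of the alternating loop is unchanged.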
Furthermore, a slight adaptation of our proof provides the convergence rate of AMSGrad for conventional strongly convex optimization, which has not been studied before and can be of independent interest.

Notations: We use ‖x‖ := ‖x‖_2 = √(xᵀx) to denote the ℓ2 norm of a vector x, and ‖x‖_∞ = max_i |x_i| to denote the infinity norm. When x, y are both vectors, x/y, xy, x², and √x are all calculated element-wise, which will be used in the updates of Adam and AMSGrad. We denote [n] = {1, 2, ..., n}, and ⌊x⌋ ∈ Z as the largest integer such that ⌊x⌋ ≤ x < ⌊x⌋ + 1.

1.1 RELATED WORK. Empirical performance of AltQ: AltQ algorithms that strictly follow the alternating updates are rarely used in practice, particularly in comparison with the well-accepted DQN learning and its improved variants with dueling network structure (Wang et al., 2016), double Q-learning (Hasselt, 2010), and various exploration and sampling schemes (Schaul et al., 2015). Mnih et al. (2016) proposed asynchronous one-step Q-learning, which is conceptually close to AltQ with competitive performance against DQN. However, the algorithm still relies on a slowly moving target network like DQN, and the multi-thread learning also complicates the computational setup. Lu et al. (2018) studied the problem of value overestimation and proposed the non-delusional Q-learning algorithm that employs so-called pre-conditioned Q-networks, which is also computationally complex. Knight & Lerner (2018) proposed a natural gradient propagation for AltQ to improve its performance, where the gradient implementation is complex. In this paper, we propose two simple and computationally efficient schemes to improve the performance of AltQ. Theoretical analysis of AltQ: Since it was proposed in Watkins & Dayan (1992), Q-learning has attracted great interest in theoretical analysis. The line of theoretical research on AltQ most relevant to our study concerns Q-learning with function approximation. A large number of works study Q-learning with linear function approximation, such as Bertsekas & Tsitsiklis (1996); Devraj & Meyn (2017); Zou et al. (2019); Chen et al. (2019b); Du et al. (2019), to name a few. More recently, convergence of AltQ with neural network parameterization was given in Cai et al. (2019), which exploits the linear structure of neural networks in the overparameterized regime for analysis. It is worth noting that all the existing analyses of AltQ with function approximation consider the simple SGD update, whereas our analysis in this paper focuses on the more involved Adam-type updates. Convergence analysis of Adam: Adam was proposed in Kingma & Ba (2014) and has achieved great success in training deep neural networks. Kingma & Ba (2014) and Reddi et al. (2018) provided regret bounds under the online convex optimization framework for Adam and AMSGrad, respectively. However, Tran et al. (2019) pointed out errors in the proofs of those two papers and corrected them. Recently, convergence analyses of Adam and AMSGrad in nonconvex optimization were provided in Zou et al. (2018); Zhou et al. (2018); Chen et al. (2019a); Phuong & Phong (2019), in which the Adam-type algorithms were guaranteed to converge to a stationary point. To the best of our knowledge, our study is the first convergence analysis of Adam-type algorithms for Q-learning.

2 PRELIMINARIES.
We consider a Markov decision process with a considerably large or continuous state space S ⊂ R^M and action space A ⊂ R^N, with a non-negative bounded reward function R : S × A → [0, R_max]. We define U(s) ⊂ A as the admissible set of actions at state s, and π : S → A as a feasible stationary policy. We seek to solve a discrete-time sequential decision problem with discount factor γ ∈ (0, 1):

maximize_π  J_π(s_0) = E_P[ ∑_{t=0}^{∞} γ^t R(s_t, π(s_t)) ],  subject to s_{t+1} ∼ P(·|s_t, a_t).   (1)

Let J*(s) := J_{π*}(s) be the optimal value function when applying the optimal policy π*. The corresponding optimal Q-function can be defined as

Q*(s, a) := R(s, a) + γ E_P[J*(s′)],   (2)

where s′ ∼ P(·|s, a); we use the same notation hereafter when no confusion arises. In other words, Q*(s, a) is the expected return of an agent who starts from state s, takes action a at the first step, and then follows the optimal policy π* thereafter.
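For reference, combining the two definitions above, the optimal Q-function also satisfies the Bellman optimality equation, which is the fixed point the Q-learning updates studied in this paper aim to reach. This is the standard identity implied by equations (1) and (2), not an additional result quoted from the paper:

```latex
J^{*}(s) = \max_{a \in U(s)} Q^{*}(s, a),
\qquad
Q^{*}(s, a) = R(s, a) + \gamma \, \mathbb{E}_{s' \sim P(\cdot \mid s, a)}
\Big[ \max_{a' \in U(s')} Q^{*}(s', a') \Big].
```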
This paper is well-written and it provides a convergence result for traditional Q-learning, with linear function approximation, when using an Adam-like update (AMSGrad). It does the same for a variation of this algorithm where the momentum-like term is reset every now and then. This second result is not that exciting as it ends up concluding that the "best way" to converge with such an approach is by resetting the momentum-like term rarely. That being said, it is still interesting to have such a theoretical result. On the empirical side, this paper evaluates the traditional Q-learning algorithm with non-linear function approximation (through a neural network), using Adam (and AdamR) while not using a target network, in both an LQR problem and a subset of the Atari games. The empirical results are not necessarily that convincing and there are important details missing. I'm willing to increase my score if my concerns w.r.t. the empirical validation are addressed since this paper presents a potentially interesting theoretical result with Adam, which is so prominent in the literature nowadays.
SP:2bf8148e5dadace0b6dd4b9f715fa8261f2a52db
CAN ALTQ LEARN FASTER: EXPERIMENTS AND THEORY
This paper claims to propose a method to train q-based agents that use “alternating” Q-learning. However, the alternating approach given in the paper appears to be the normal Bellman update implemented in most versions of DQN. Furthermore, the citation given for AltQ (Mnih et al. 2016) makes no mention of the term “Alternating Q learning”.
SP:2bf8148e5dadace0b6dd4b9f715fa8261f2a52db
Growing Action Spaces
In complex tasks , such as those with large combinatorial action spaces , random exploration may be too inefficient to achieve meaningful learning progress . In this work , we use a curriculum of progressively growing action spaces to accelerate learning . We assume the environment is out of our control , but that the agent may set an internal curriculum by initially restricting its action space . Our approach uses off-policy reinforcement learning to estimate optimal value functions for multiple action spaces simultaneously and efficiently transfers data , value estimates , and state representations from restricted action spaces to the full task . We show the efficacy of our approach in proof-of-concept control tasks and on challenging large-scale StarCraft micromanagement tasks with large , multi-agent action spaces . 1 INTRODUCTION . The value of curricula has been well established in machine learning , reinforcement learning , and in biological systems . When a desired behaviour is sufficiently complex , or the environment too unforgiving , it can be intractable to learn the behaviour from scratch through random exploration . Instead , by “ starting small ” ( Elman , 1993 ) , an agent can build skills , representations , and a dataset of meaningful experiences that allow it to accelerate its learning . Such curricula can drastically improve sample efficiency ( Bengio et al. , 2009 ) . Typically , curriculum learning uses a progression of tasks or environments . Simple tasks that provide meaningful feedback to random agents are used first , and some schedule is used to introduce more challenging tasks later during training ( Graves et al. , 2017 ) . However , in many contexts neither the agent nor experimenter has such unimpeded control over the environment . In this work , we instead make use of curricula that are internal to the agent , simplifying the exploration problem without changing the environment . In particular , we grow the size of the action space of reinforcement learning agents over the course of training . At the beginning of training , our agents use a severely restricted action space . This helps exploration by guiding the agent towards rewards and meaningful experiences , and provides low variance updates during learning . The action space is then grown progressively . Eventually , using the most unrestricted action space , the agents are able to find superior policies . Each action space is a strict superset of the more restricted ones . This paradigm requires some domain knowledge to identify a suitable hierarchy of action spaces . However , such a hierarchy is often easy to find . Continuous action spaces can be discretised with increasing resolution . Similarly , curricula for coping with the large combinatorial action spaces induced by many agents can be obtained from the prior that nearby agents are more likely to need to coordinate . For example , in routing or traffic flow problems nearby agents or nodes may wish to adopt similar local policies to alleviate global congestion . Our method will be valuable when it is possible to identify a restricted action space in which random exploration leads to significantly more meaningful experiences than random exploration in the full action space . We propose an approach that uses off-policy reinforcement learning to improve sample efficiency in this type of curriculum learning . 
Since data from exploration using a restricted action space is still valid in the Markov Decision Processes ( MDPs ) corresponding to the less restricted action spaces , we can learn value functions in the less restricted action space with ‘ off-action-space ’ data collected by exploring in the restricted action space . In our approach , we learn value functions corresponding to each level of restriction simultaneously . We can use the relationships of these value functions to each other to accelerate learning further , by using value estimates themselves as initialisations or as bootstrap targets for the less restricted action spaces , as well as sharing learned state representations . Empirically , we first demonstrate the efficacy of our approach in two simple control tasks , in which the resolution of discretised actions is progressively increased . We then tackle a more challenging set of problems with combinatorial action spaces , in the context of StarCraft micromanagement with large numbers of agents ( 50-100 ) . Given the heuristic prior that nearby agents in a multiagent setting are likely to need to coordinate , we use hierarchical clustering to impose a restricted action space on the agents . Agents in a cluster are restricted to take the same action , but we progressively increase the number of groups that can act independently of one another over the course of training . Our method substantially improves sample efficiency on a number of tasks , outperforming learning any particular action space from scratch , a number of ablations , and an actor-critic baseline that learns a single value function for the behaviour policy , as in the work of Czarnecki et al . ( 2018 ) . Code is available , but redacted here for anonymity . 2 RELATED WORK . Curriculum learning has a long history , appearing at least as early as the work of Selfridge et al . ( 1985 ) in reinforcement learning , and for the training of neural networks since Elman ( 1993 ) . In supervised learning , one typically has control of the order in which data is presented to the learning algorithm . For learning with deep neural networks , Bengio et al . ( 2009 ) explored the use of curricula in computer vision and natural language processing . Many approaches use handcrafted schedules for task curricula , but others ( Zaremba & Sutskever , 2014 ; Pentina et al. , 2015 ; Graves et al. , 2017 ) study diagnostics that can be used to automate the choice of task mixtures throughout training . In a self-supervised control setting , Murali et al . ( 2018 ) use sensitivity analysis to automatically define a curriculum over action dimensions and prioritise their search space . In some reinforcement learning settings , it may also be possible to control the environment so as to induce a curriculum . With a resettable simulator , it is possible to use a sequence of progressively more challenging initial states ( Asada et al. , 1996 ; Florensa et al. , 2017 ) . With a procedurally generated task , it is often possible to automatically tune the difficulty of the environments ( Tamar et al. , 2016 ) . Similar curricula also appear often in hierarchical reinforcement learning , where skills can be learned in comparatively easy settings and then composed in more complex ways later ( Singh , 1992 ) . Taylor et al . ( 2007 ) use more general inter-task mappings to transfer Q-values between tasks that do not share state and action spaces . 
In adversarial settings , one may also induce a curriculum through self-play ( Tesauro , 1995 ; Sukhbaatar et al. , 2017 ; Silver et al. , 2017 ) . In this case , the learning agents themselves define the changing part of the environment . A less invasive manipulation of the environment involves altering the reward function . Such reward shaping allows learning policies in an easier MDP , which can then be transferred to the more difficult sparse-reward task ( Colombetti & Dorigo , 1992 ; Ng et al. , 1999 ) . It is also possible to learn reward shaping on simple tasks and transfer it to harder tasks in a curriculum ( Konidaris & Barto , 2006 ) . In contrast , learning with increasingly complex function approximators does not require any control of the environment . In reinforcement learning , this has often taken the form of adaptively growing the resolution of the state space considered by a piecewise constant discretised approximation ( Moore , 1994 ; Munos & Moore , 2002 ; Whiteson et al. , 2007 ) . Stanley & Miikkulainen ( 2004 ) study continual complexification in the context of coevolution , growing the complexity of neural network architectures through the course of training . These works progressively increase the capabilities of the agent , but not with respect to its available actions . In the context of planning on-line with a model , there are a number of approaches that use progressive widening to consider increasing large action spaces over the course of search ( Chaslot et al. , 2008 ) , including in planning for continuous action spaces ( Couëtoux et al. , 2011 ) . However , these methods can not directly be applied to grow the action space in the model-free setting . A recent related work tackling our domain is that of Czarnecki et al . ( 2018 ) , who train mixtures of two policies with an actor-critic approach , learning a single value function for the current mixture of policies . The mixture contains a policy that may be harder to learn but has a higher performance ceiling , such as a policy with a larger action space as we consider in this work . The mixing coefficient is initialised to only support the simpler policy , and adapted via population based training ( Jaderberg et al. , 2017 ) . In contrast , we simultaneously learn a different value function for each policy , and exploit the properties of the optimal value functions to induce additional structure on our models . We further use these properties to construct a scheme for off-action-space learning which means our approach may be used in an off-policy setting . Empirically , in our settings , we find our approach to perform better and more consistently than an actor-critic algorithm modeled after Czarnecki et al . ( 2018 ) , although we do not take on the significant additional computational requirements of population based training in any of our experiments . A number of other works address the problem of generalisation and representation for value functions with large discrete action spaces , without explicitly addressing the resulting exploration problem ( Dulac-Arnold et al. , 2015 ; Pan et al. , 2018 ) . These approaches typically rely on action representations from prior knowledge . Such representations could be used in combination with our method to construct a hierarchy of action spaces with which to shape exploration . 3 BACKGROUND . We formalise our problem as a MDP , specified by a tuple < S , A , P , r , γ > . 
The set of possible states and actions are given by S and A, P is the transition function that specifies the environment dynamics, and γ is a discount factor used to specify the discounted return R = ∑_{t=0}^{T} γ^t r_t for an episode of length T. We wish our agent to maximise this return in expectation by learning a policy π that maps states to actions. The state-action value function (Q-function) is given by Q^π(s, a) = E_π[R | s, a]. The optimal Q-function Q* satisfies the Bellman optimality equation:

Q*(s, a) = T Q*(s, a) = E[ r(s, a) + γ max_{a′} Q*(s′, a′) ].   (1)

Q-learning (Watkins & Dayan, 1992) uses a sample-based approximation of the Bellman optimality operator T to iteratively improve an estimate of Q*. Q-learning is an off-policy method, meaning that samples from any policy may be used to improve the value function estimate. We use this property to apply Q-learning to off-action-space learning, as described in the next section.

We also introduce some notation for restricted action spaces. In particular, for an MDP with unrestricted action space A we define a set of N action spaces A_ℓ, ℓ ∈ {0, ..., N−1}. Each action space is a subset of the next: A_0 ⊂ A_1 ⊂ ... ⊂ A_{N−1} ⊆ A. A policy restricted to actions in A_ℓ is denoted π_ℓ(a|s). The optimal policy in this restricted policy class is π*_ℓ(a|s), and its corresponding action-value and value functions are Q*_ℓ(s, a) and V*_ℓ(s) = max_a Q*_ℓ(s, a). Additionally, we define a hierarchy of actions by identifying, for every action a ∈ A_ℓ with ℓ > 0, a parent action parent_ℓ(a) in the space A_{ℓ−1}. Since action spaces are subsets of larger action spaces, for all a ∈ A_{ℓ−1} we have parent_ℓ(a) = a, i.e., one child of each action is itself. Simple pieces of domain knowledge are often sufficient to define these hierarchies. For example, a discretised continuous action can identify its nearest neighbour in A_{ℓ−1} as a parent. In Section 5 we describe a possible hierarchy for multi-agent action spaces. One could also imagine using action embeddings (Tennenholtz & Mannor, 2019) to learn such a hierarchy from data.
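As a small illustration of how such a hierarchy can be built, the sketch below discretises a 1-D continuous action range at increasing resolution and derives the parent map by nearest neighbour in the coarser space, as the text suggests. The function name, the number of levels, and the choice of 2^ℓ + 1 points per level are assumptions made for the example; the multi-agent clustering hierarchy used for StarCraft is not shown.

```python
import numpy as np

def make_action_hierarchy(low, high, n_levels):
    """Discretise a 1-D continuous action range at increasing resolution.

    Level l has 2**l + 1 evenly spaced actions, so A_0 ⊂ A_1 ⊂ ... ⊂ A_{N-1}
    holds exactly: every coarse action reappears at all finer levels.
    Returns the per-level action sets and, for each level l > 0, the map
    parent_l from fine-action index to the index of its parent in A_{l-1}.
    """
    spaces = [np.linspace(low, high, 2 ** l + 1) for l in range(n_levels)]
    parents = []
    for l in range(1, n_levels):
        coarse, fine = spaces[l - 1], spaces[l]
        # parent_l(a): the nearest neighbour of a in the coarser space A_{l-1};
        # actions already present in A_{l-1} are therefore their own parent.
        parent = {i: int(np.argmin(np.abs(coarse - a))) for i, a in enumerate(fine)}
        parents.append(parent)
    return spaces, parents

spaces, parents = make_action_hierarchy(-1.0, 1.0, n_levels=3)
print(spaces[0])   # coarsest level: [-1.  1.]
print(spaces[2])   # finest level:   [-1.  -0.5  0.   0.5  1. ]
print(parents[1])  # {0: 0, 1: 0, 2: 1, 3: 1, 4: 2}
```

A map like this is what lets value estimates attached to a coarse action serve as a starting point for its children at the next, less restricted level, in the spirit of the value and data transfer described above.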
The paper presents a method for scaling up to action spaces that exhibit natural hierarchies (such as a controllable resolution of actions), through joint training of Q-functions over these spaces. The authors notice and exploit several interesting properties, such as inequalities that emerge when action spaces form strict subsets, which lead to a convenient parametrisation of policies in a differential way. The evaluation is performed on simple toy tasks and on micro-management problems in 5 scenarios in the game of StarCraft II (SC2).
SP:2036673d54d07683d1dfdad4567ea18029344359
Growing Action Spaces
In complex tasks , such as those with large combinatorial action spaces , random exploration may be too inefficient to achieve meaningful learning progress . In this work , we use a curriculum of progressively growing action spaces to accelerate learning . We assume the environment is out of our control , but that the agent may set an internal curriculum by initially restricting its action space . Our approach uses off-policy reinforcement learning to estimate optimal value functions for multiple action spaces simultaneously and efficiently transfers data , value estimates , and state representations from restricted action spaces to the full task . We show the efficacy of our approach in proof-of-concept control tasks and on challenging large-scale StarCraft micromanagement tasks with large , multi-agent action spaces . 1 INTRODUCTION . The value of curricula has been well established in machine learning , reinforcement learning , and in biological systems . When a desired behaviour is sufficiently complex , or the environment too unforgiving , it can be intractable to learn the behaviour from scratch through random exploration . Instead , by “ starting small ” ( Elman , 1993 ) , an agent can build skills , representations , and a dataset of meaningful experiences that allow it to accelerate its learning . Such curricula can drastically improve sample efficiency ( Bengio et al. , 2009 ) . Typically , curriculum learning uses a progression of tasks or environments . Simple tasks that provide meaningful feedback to random agents are used first , and some schedule is used to introduce more challenging tasks later during training ( Graves et al. , 2017 ) . However , in many contexts neither the agent nor experimenter has such unimpeded control over the environment . In this work , we instead make use of curricula that are internal to the agent , simplifying the exploration problem without changing the environment . In particular , we grow the size of the action space of reinforcement learning agents over the course of training . At the beginning of training , our agents use a severely restricted action space . This helps exploration by guiding the agent towards rewards and meaningful experiences , and provides low variance updates during learning . The action space is then grown progressively . Eventually , using the most unrestricted action space , the agents are able to find superior policies . Each action space is a strict superset of the more restricted ones . This paradigm requires some domain knowledge to identify a suitable hierarchy of action spaces . However , such a hierarchy is often easy to find . Continuous action spaces can be discretised with increasing resolution . Similarly , curricula for coping with the large combinatorial action spaces induced by many agents can be obtained from the prior that nearby agents are more likely to need to coordinate . For example , in routing or traffic flow problems nearby agents or nodes may wish to adopt similar local policies to alleviate global congestion . Our method will be valuable when it is possible to identify a restricted action space in which random exploration leads to significantly more meaningful experiences than random exploration in the full action space . We propose an approach that uses off-policy reinforcement learning to improve sample efficiency in this type of curriculum learning . 
Since data from exploration using a restricted action space is still valid in the Markov Decision Processes ( MDPs ) corresponding to the less restricted action spaces , we can learn value functions in the less restricted action space with ‘ off-action-space ’ data collected by exploring in the restricted action space . In our approach , we learn value functions corresponding to each level of restriction simultaneously . We can use the relationships of these value functions to each other to accelerate learning further , by using value estimates themselves as initialisations or as bootstrap targets for the less restricted action spaces , as well as sharing learned state representations . Empirically , we first demonstrate the efficacy of our approach in two simple control tasks , in which the resolution of discretised actions is progressively increased . We then tackle a more challenging set of problems with combinatorial action spaces , in the context of StarCraft micromanagement with large numbers of agents ( 50-100 ) . Given the heuristic prior that nearby agents in a multiagent setting are likely to need to coordinate , we use hierarchical clustering to impose a restricted action space on the agents . Agents in a cluster are restricted to take the same action , but we progressively increase the number of groups that can act independently of one another over the course of training . Our method substantially improves sample efficiency on a number of tasks , outperforming learning any particular action space from scratch , a number of ablations , and an actor-critic baseline that learns a single value function for the behaviour policy , as in the work of Czarnecki et al . ( 2018 ) . Code is available , but redacted here for anonymity . 2 RELATED WORK . Curriculum learning has a long history , appearing at least as early as the work of Selfridge et al . ( 1985 ) in reinforcement learning , and for the training of neural networks since Elman ( 1993 ) . In supervised learning , one typically has control of the order in which data is presented to the learning algorithm . For learning with deep neural networks , Bengio et al . ( 2009 ) explored the use of curricula in computer vision and natural language processing . Many approaches use handcrafted schedules for task curricula , but others ( Zaremba & Sutskever , 2014 ; Pentina et al. , 2015 ; Graves et al. , 2017 ) study diagnostics that can be used to automate the choice of task mixtures throughout training . In a self-supervised control setting , Murali et al . ( 2018 ) use sensitivity analysis to automatically define a curriculum over action dimensions and prioritise their search space . In some reinforcement learning settings , it may also be possible to control the environment so as to induce a curriculum . With a resettable simulator , it is possible to use a sequence of progressively more challenging initial states ( Asada et al. , 1996 ; Florensa et al. , 2017 ) . With a procedurally generated task , it is often possible to automatically tune the difficulty of the environments ( Tamar et al. , 2016 ) . Similar curricula also appear often in hierarchical reinforcement learning , where skills can be learned in comparatively easy settings and then composed in more complex ways later ( Singh , 1992 ) . Taylor et al . ( 2007 ) use more general inter-task mappings to transfer Q-values between tasks that do not share state and action spaces . 
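As a rough illustration of the clustering step just described, the following sketch groups agents by spatial proximity into a growing number of independent groups, where agents in the same group share one action; the 2-D position features, the linkage criterion and the growth schedule are illustrative assumptions rather than the exact procedure used in the experiments.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_agents(positions, n_groups):
    """Group agents by spatial proximity; agents in a group share one action.

    positions : (n_agents, 2) array of agent coordinates.
    n_groups  : number of groups allowed to act independently at the
                current curriculum level.
    """
    tree = linkage(positions, method="ward")                      # agglomerative clustering
    return fcluster(tree, t=n_groups, criterion="maxclust") - 1   # labels in [0, n_groups)

# Illustrative curriculum: the number of independent groups grows over training.
rng = np.random.default_rng(0)
positions = rng.uniform(0, 10, size=(50, 2))                      # e.g. 50 agents on a 2-D map
for n_groups in [1, 2, 4, 8]:                                     # assumed growth schedule
    groups = cluster_agents(positions, n_groups)
    print(n_groups, np.bincount(groups))                          # group sizes at this level
```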
In adversarial settings , one may also induce a curriculum through self-play ( Tesauro , 1995 ; Sukhbaatar et al. , 2017 ; Silver et al. , 2017 ) . In this case , the learning agents themselves define the changing part of the environment . A less invasive manipulation of the environment involves altering the reward function . Such reward shaping allows learning policies in an easier MDP , which can then be transferred to the more difficult sparse-reward task ( Colombetti & Dorigo , 1992 ; Ng et al. , 1999 ) . It is also possible to learn reward shaping on simple tasks and transfer it to harder tasks in a curriculum ( Konidaris & Barto , 2006 ) . In contrast , learning with increasingly complex function approximators does not require any control of the environment . In reinforcement learning , this has often taken the form of adaptively growing the resolution of the state space considered by a piecewise constant discretised approximation ( Moore , 1994 ; Munos & Moore , 2002 ; Whiteson et al. , 2007 ) . Stanley & Miikkulainen ( 2004 ) study continual complexification in the context of coevolution , growing the complexity of neural network architectures through the course of training . These works progressively increase the capabilities of the agent , but not with respect to its available actions . In the context of planning on-line with a model , there are a number of approaches that use progressive widening to consider increasingly large action spaces over the course of search ( Chaslot et al. , 2008 ) , including in planning for continuous action spaces ( Couëtoux et al. , 2011 ) . However , these methods cannot directly be applied to grow the action space in the model-free setting . A recent related work tackling our domain is that of Czarnecki et al . ( 2018 ) , who train mixtures of two policies with an actor-critic approach , learning a single value function for the current mixture of policies . The mixture contains a policy that may be harder to learn but has a higher performance ceiling , such as a policy with a larger action space as we consider in this work . The mixing coefficient is initialised to only support the simpler policy , and adapted via population based training ( Jaderberg et al. , 2017 ) . In contrast , we simultaneously learn a different value function for each policy , and exploit the properties of the optimal value functions to induce additional structure on our models . We further use these properties to construct a scheme for off-action-space learning , which means our approach may be used in an off-policy setting . Empirically , in our settings , we find our approach to perform better and more consistently than an actor-critic algorithm modeled after Czarnecki et al . ( 2018 ) , although we do not take on the significant additional computational requirements of population based training in any of our experiments . A number of other works address the problem of generalisation and representation for value functions with large discrete action spaces , without explicitly addressing the resulting exploration problem ( Dulac-Arnold et al. , 2015 ; Pan et al. , 2018 ) . These approaches typically rely on action representations from prior knowledge . Such representations could be used in combination with our method to construct a hierarchy of action spaces with which to shape exploration . 3 BACKGROUND . We formalise our problem as an MDP , specified by a tuple ⟨ S , A , P , r , γ ⟩ .
The set of possible states and actions are given by S and A , P is the transition function that specifies the environment dynamics , and γ is a discount factor used to specify the discounted return $R = \sum_{t=0}^{T} \gamma^t r_t$ for an episode of length T . We wish our agent to maximise this return in expectation by learning a policy π that maps states to actions . The state-action value function ( Q-function ) is given by $Q^\pi = \mathbb{E}_\pi [ R \mid s , a ]$ . The optimal Q-function $Q^*$ satisfies the Bellman optimality equation : $Q^* ( s , a ) = \mathcal{T} Q^* ( s , a ) = \mathbb{E} [ r ( s , a ) + \gamma \max_{a'} Q^* ( s' , a' ) ]$ . ( 1 ) Q-learning ( Watkins & Dayan , 1992 ) uses a sample-based approximation of the Bellman optimality operator $\mathcal{T}$ to iteratively improve an estimate of $Q^*$ . Q-learning is an off-policy method , meaning that samples from any policy may be used to improve the value function estimate . We use this property to engage Q-learning for off-action-space learning , as described in the next section . We also introduce some notation for restricted action spaces . In particular , for an MDP with unrestricted action space A we define a set of N action spaces $A_\ell , \ell \in \{ 0 , \ldots , N-1 \}$ . Each action space is a subset of the next : $A_0 \subset A_1 \subset \ldots \subset A_{N-1} \subseteq A$ . A policy restricted to actions $A_\ell$ is denoted $\pi_\ell ( a \mid s )$ . The optimal policy in this restricted policy class is $\pi^*_\ell ( a \mid s )$ , and its corresponding action-value and value functions are $Q^*_\ell ( s , a )$ and $V^*_\ell ( s ) = \max_a Q^*_\ell ( s , a )$ . Additionally , we define a hierarchy of actions by identifying for every action $a \in A_\ell , \ell > 0$ a parent action $\mathrm{parent}_\ell ( a )$ in the space of $A_{\ell-1}$ . Since action spaces are subsets of larger action spaces , for all $a \in A_{\ell-1}$ , $\mathrm{parent}_\ell ( a ) = a$ , i.e. , one child of each action is itself . Simple pieces of domain knowledge are often sufficient to define these hierarchies . For example , a discretised continuous action can identify its nearest neighbour in $A_{\ell-1}$ as a parent . In Section 5 we describe a possible hierarchy for multi-agent action spaces . One could also imagine using action-embeddings ( Tennenholtz & Mannor , 2019 ) to learn such a hierarchy from data .
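Because Q-learning is off-policy, a single transition gathered while exploring with the most restricted space can be replayed to update the value estimates of every level. The following is a minimal tabular sketch of that idea under equation (1); the toy action sets, step size and discount are illustrative assumptions, and the value sharing and bootstrapping schemes actually used across levels are more elaborate than this.

```python
import numpy as np

n_states, gamma, alpha = 6, 0.9, 0.1
# Nested action spaces: A_0 = {0, 1} is a subset of A_1 = {0, 1, 2, 3}.
action_spaces = [np.array([0, 1]), np.array([0, 1, 2, 3])]
Q = [np.zeros((n_states, 4)) for _ in action_spaces]   # one Q-table per level

def update_all_levels(s, a, r, s_next):
    """One transition, gathered under any behaviour policy (e.g. exploring in A_0),
    updates every level: Q-learning is off-policy, so the data stays valid."""
    for level, A in enumerate(action_spaces):
        target = r + gamma * Q[level][s_next, A].max()  # max over that level's actions
        Q[level][s, a] += alpha * (target - Q[level][s, a])

# Example usage with a dummy transition (s=0, a=1, r=1.0, s'=2) from the restricted space.
update_all_levels(0, 1, 1.0, 2)
print(Q[0][0, :2], Q[1][0])
```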
This paper proposes a method for progressively exploring the action space in RL, called “growing action spaces”. The basic idea is that actions can usually be grouped by a hierarchical structure: the lowest level is the coarsest, and higher levels gradually refine the action partition. This formulation captures many RL settings, including multi-agent learning, and the proposed approach exploits the action hierarchy to shape exploration.
SP:2036673d54d07683d1dfdad4567ea18029344359
On the Invertibility of Invertible Neural Networks
1 INTRODUCTION . Invertible neural networks ( INNs ) have become a standard building block in the deep learning toolkit . Invertibility is useful for training generative models with exact likelihoods ( Dinh et al. , 2014 ; 2017 ; Kingma & Dhariwal , 2018 ; Kingma et al. , 2016 ; Behrmann et al. , 2019 ; Chen et al. , 2019 ) , increasing posterior flexibility in VAEs ( Rezende & Mohamed , 2015 ; Tomczak & Welling , 2016 ; Papamakarios et al. , 2017 ) , learning transition operators in MCMC samplers ( Song et al. , 2017 ; Levy et al. , 2017 ) , computing memory-efficient gradients ( Gomez et al. , 2017 ; Donahue & Simonyan , 2019 ) , allowing for bi-directional training ( Grover et al. , 2018 ) , solving inverse problems ( Ardizzone et al. , 2019 ) and analysing adversarial robustness ( Jacobsen et al. , 2019 ) . The application space of INNs is rapidly growing and many approaches for constructing invertible architectures have been proposed . A common way to construct invertible networks is to use triangular coupling layers ( Dinh et al. , 2014 ; 2017 ; Kingma & Dhariwal , 2018 ) , where dimension partitioning is interleaved with ResNet-type computation . Another approach is to use various forms of masked convolutions , generalizing the dimension partitioning approach of coupling layers ( Song et al. , 2019 ; Hoogeboom et al. , 2019 ) . To avoid dimension partitioning altogether , multiple approaches based on efficiently estimating the log-determinant of the Jacobian , necessary for applying the change of variable formula , have been proposed to allow for free-form Jacobian structure ( Grathwohl et al. , 2019 ; Behrmann et al. , 2019 ; Chen et al. , 2019 ) . From a mathematical perspective , invertible architectures enable several unique guarantees like : • Enabling flexible approximation of non-linear diffeomorphisms ( Rezende & Mohamed , 2015 ; Dinh et al. , 2017 ; Kingma & Dhariwal , 2018 ; Chen et al. , 2019 ) • Memory-saving gradient computation ( Gomez et al. , 2017 ; Donahue & Simonyan , 2019 ) • Fast analytical invertibility ( Dinh et al. , 2014 ) • Guaranteed preservation of mutual information and exact access to invariants of deep networks ( Jacobsen et al. , 2018 ; 2019 ) . Despite the increased interest in invertible neural networks , little attention has been paid to guarantees on their numerical invertibility . Specifically , this means analyzing their ability to learn bi-Lipschitz neural networks , i.e . Lipschitz continuous neural networks with a bound on the Lipschitz constant of the forward and inverse mapping . While the stability analysis of neural networks has received significant attention e.g . due to adversarial examples ( Szegedy et al. , 2013 ) , the focus here is only on bounding Lipschitz constants of the forward mapping . However , bounding the Lipschitz constant of the inverse mapping is of major interest , e.g . when reconstructing inputs from noisy or imprecise features . In fact , analytical invertibility as provided by some invertible architectures does not necessarily imply numerical invertibility in practice . In this paper , we first discuss the relevance of controlling the bi-Lipschitz bounds of invertible networks . Afterwards we analyze Lipschitz bounds of commonly used invertible neural network building blocks . Our contributions are : • We argue for forward and inverse stability analysis as a unified viewpoint on invertible network ( non- ) invertibility . 
To this end , we derive Lipschitz bounds of commonly-used invertible building blocks for their forward and inverse maps . • We numerically monitor and detect ( non- ) invertibility for different practical tasks such as classification and generative modeling . • We show how this overlooked issue with non-invertibility can lead to questionable claims when computing exact likelihoods with the change-of-variable formula . • Finally , we study spectral normalization as a stabilizer for one of the most commonly-used families of INN architectures , namely additive coupling blocks . 2 BACKGROUND AND MOTIVATION . Invertible neural networks are bijective functions with a parametrized forward mapping $F_\theta : \mathbb{R}^d \to \mathbb{R}^d$ with $F_\theta : x \mapsto z$ , where $\theta \in \mathbb{R}^p$ defines the parameter vector . Additionally , they define an inverse mapping $F_\theta^{-1} : \mathbb{R}^d \to \mathbb{R}^d$ with $F_\theta^{-1} : z \mapsto x$ . This inverse can be given in closed-form ( analytical inverse , e.g . Dinh et al . ( 2017 ) ; Kingma & Dhariwal ( 2018 ) ) or approximated numerically ( numerical inverse , e.g . Behrmann et al . ( 2019 ) ; Song et al . ( 2019 ) ) . Before we discuss building blocks of invertible networks , we provide some background and motivation for studying forward and inverse stability . Definition 1 ( Lipschitz and bi-Lipschitz continuity ) . A function $F : ( \mathbb{R}^{d_1} , \| \cdot \| ) \to ( \mathbb{R}^{d_2} , \| \cdot \| )$ is called Lipschitz continuous if there exists a constant $L =: \mathrm{Lip} ( F )$ such that $\| F ( x_1 ) - F ( x_2 ) \| \leq L \| x_1 - x_2 \| , \; \forall x_1 , x_2 \in \mathbb{R}^{d_1}$ . If an inverse $F^{-1} : ( \mathbb{R}^{d_2} , \| \cdot \| ) \to ( \mathbb{R}^{d_1} , \| \cdot \| )$ and a constant $L^* =: \mathrm{Lip} ( F^{-1} )$ exist such that $\| F^{-1} ( y_1 ) - F^{-1} ( y_2 ) \| \leq L^* \| y_1 - y_2 \| , \; \forall y_1 , y_2 \in \mathbb{R}^{d_2}$ , then F is called bi-Lipschitz continuous . Remark 2 . We focus on invertible functions $F : ( \mathbb{R}^d , \| \cdot \|_2 ) \to ( \mathbb{R}^d , \| \cdot \|_2 )$ , i.e . functions where the domain and co-domain are of the same dimensionality d and the norm is given by the Euclidean norm . Lemma 3 . ( Rademacher ( Federer , 1969 , Theorem 3.1.6 ) ) If $F : \mathbb{R}^d \to \mathbb{R}^d$ is a locally Lipschitz continuous function ( i.e . a function whose restriction to a neighborhood around any point is Lipschitz ) , then F is differentiable almost everywhere . Moreover , if F is Lipschitz continuous , then $\mathrm{Lip} ( F ) = \sup_{x \in \mathbb{R}^d} \| J_F ( x ) \|_2$ , where $J_F ( x )$ is the Jacobian matrix of F at x and $\| J_F ( x ) \|_2$ denotes its spectral norm . Lipschitz bounds on the forward mapping are of crucial importance in several areas , including in adversarial example research ( Szegedy et al. , 2013 ) , to avoid exploding gradients , or the training of Wasserstein GANs ( Anil et al. , 2019 ) . The stability of the inverse , however , can have a similar impact . For instance , having a Lipschitz bound on the inverse may avoid vanishing gradients during training . Given that deep-learning computations are carried out with limited precision , imprecision is always introduced in both the forward and backward passes , i.e. , $z^\delta = F ( x ) + \delta$ and $\hat{x}^\delta = F^{-1} ( z^\delta )$ . Instability in either pass will aggravate this problem , and essentially make the invertible network numerically non-invertible . To summarize , this problem occurs in the following situations : • Numerical reconstruction of x , where features $z^\delta$ are inexact due to limited precision ( e.g . when computations are executed in single precision as common on modern hardware ) . • Reconstruction based on imprecise measurements from physical devices ( e.g . when using invertible networks for inverse problems ( Ardizzone et al. , 2019 ) ) .
• Numerical re-computation of intermediate activations of the neural network to allow for memory-efficient backpropagation ( Gomez et al. , 2017 ) . Furthermore , some computations are performed via numerical approximation , which in turn adds another source of imprecision that might be aggravated via instability . Examples include : • Numerical forward computation , as in Neural ODEs ( Chen et al. , 2018 ) ( a numerical solver is used to approximate the dynamics of the ODE ) . • Numerical inverse computation , e.g . via fixed-point iterations as in i-ResNets ( Behrmann et al. , 2019 ) or MintNet ( Song et al. , 2019 ) or via ODE-solvers for the backward dynamics as in Neural ODEs ( Chen et al. , 2018 ) . As an example of why bi-Lipschitz continuity is critical for numerical stability in invertible functions , let us consider the simple mappings $F_1 ( x ) = \log ( x )$ , $F_1^{-1} ( z ) = \exp ( z )$ , and $F_2 ( x ) = x$ , $F_2^{-1} ( z ) = z$ . Though both functions tend to infinity when $x \to \infty$ , $F_1$ is much less stable . Consider the introduction of numerical imprecision as $z^\delta = F_1 ( x ) + \delta$ , where δ denotes the introduced imprecision . Then this imprecision is magnified in the inverse pass as : $\| F_1^{-1} ( z ) - F_1^{-1} ( z^\delta ) \|_2^2 \approx \| \delta \, \frac{\partial F_1^{-1} ( z^\delta )}{\partial z^\delta} \|_2^2 = \| \delta \exp ( z^\delta ) \|_2^2$ . ( 1 ) A similar example can be constructed for both the forward and backward passes , which speaks to the importance of bi-Lipschitz continuity . For an additional discussion on the connection of Lipschitz constants and numerical errors , we refer to Appendix B . 3 STABILITY OF INVERTIBLE NEURAL NETWORKS . 3.1 LIPSCHITZ BOUNDS FOR BUILDING BLOCKS OF INVERTIBLE NETWORKS . Research on invertible networks has produced a large variety of architectural building blocks . Yet , the focus of prior work was on obtaining flexible architectures while maintaining invertibility guarantees . Here , we build on the work in ( Behrmann et al. , 2019 ) , where bi-Lipschitz bounds were proven for invertible ResNets , by deriving Lipschitz bounds on the forward and inverse mapping of common building blocks . Together with an overview of common invertible building blocks , we provide our main results in Table 1 . We chose these particular model classes in order to cover both coupling-based approaches and free-form approaches like Neural ODE ( Chen et al. , 2018 ) and i-ResNets ( Behrmann et al. , 2019 ) . The derivations of the bounds are given in Appendix A . Note that the bounds provide the worst-case stability and serve mainly as a guideline for future designs of invertible building blocks . 3.2 CONTROLLING STABILITY OF BUILDING BLOCKS . As shown in Table 1 , there are many factors that influence the stability of INNs . Of particular importance are the Lipschitz constants Lip ( g ) of the sub-network g for i-ResNets ( Behrmann et al. , 2019 ) and additive coupling blocks ( Dinh et al. , 2014 ) , and Lip ( s ) , Lip ( t ) for affine coupling blocks ( Dinh et al. , 2017 ) . Whereas computing the Lipschitz constants of neural networks is NP-hard ( Virmaux & Scaman , 2018 ) , there is a simple data-independent upper bound : $\mathrm{Lip} ( g ) \leq \prod_{i=1}^{L} \| A_i \|_2$ , for $g ( x ) = A_L \circ \phi \circ A_{L-1} \circ \cdots \circ A_2 \circ \phi \circ A_1$ , ( 2 ) where $A_i$ are linear layers , $\| \cdot \|_2$ is the spectral norm and φ a contractive activation function ( Lip ( φ ) ≤ 1 ) . The above bound was used by ( Behrmann et al. , 2019 ) in conjunction with spectral normalization ( Miyato et al. , 2018 ; Gouk et al. , 2018 ) to ensure a contractive residual block g.
In particular , this employs a normalization via : $\tilde{A} = \kappa \frac{A}{\hat{\sigma}_1}$ , with $\hat{\sigma}_1 \approx \sigma_1 = \| A \|_2$ ( approximated via the power method ) , where κ > 0 is a coefficient that sets the approximate upper bound on the spectral norm of each linear layer $A_i$ . Thus , by setting an appropriate coefficient κ depending on the targeted Lipschitz bound of the building block , this approach enables one to control both forward and inverse stability . Note that the above discussion can be generalized to other $\ell_p$-norms , see ( Chen et al. , 2019 ) . However , this is not sufficient when using affine coupling blocks because their bound on the Lipschitz constant holds only locally . In particular , it depends on the regions of the inputs x to the coupling block . While inputs to the first layer are usually bounded by the nature of the data , obtaining bounds for intermediate activations is less straightforward . One interesting avenue for future work could be local regularizers like gradient penalties ( Gulrajani et al. , 2017 ) , where spectral normalization could be used post-hoc to certify stability . Lastly , we use ActNorm ( Kingma & Dhariwal , 2018 ) in several architectures and avoid small diagonal terms which would yield large Lipschitz constants in the inverse ( see Table 1 ) by adding a positive constant . Further stabilization could be achieved via bounding the scaling .
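As a concrete illustration of this normalization and of the bound in equation (2), the following numpy sketch estimates the spectral norm of each linear layer with a few power-iteration steps, rescales by a target coefficient κ, and multiplies the per-layer norms to obtain a data-independent Lipschitz upper bound; the layer sizes, κ and the number of iterations are illustrative assumptions.

```python
import numpy as np

def spectral_norm(A, n_iters=20):
    """Estimate the largest singular value of A via power iteration."""
    v = np.random.randn(A.shape[1])
    for _ in range(n_iters):
        u = A @ v
        u /= np.linalg.norm(u)
        v = A.T @ u
        v /= np.linalg.norm(v)
    return float(u @ (A @ v))          # approx. sigma_1 = ||A||_2

def spectrally_normalize(A, kappa=0.9):
    """Rescale A so its spectral norm is (approximately) bounded by kappa."""
    return kappa * A / spectral_norm(A)

# Data-independent Lipschitz upper bound of a 3-layer MLP g,
# assuming a contractive activation between the linear layers.
layers = [spectrally_normalize(np.random.randn(64, 64)) for _ in range(3)]
lip_bound = np.prod([spectral_norm(A) for A in layers])   # <= kappa**3 < 1
print(lip_bound)
```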
The paper argues that for invertible neural networks, mathematical guarantees on invertibility are not enough, and that numerical invertibility is also required. To this end, the Lipschitz constants/condition numbers of the Jacobians of both the forward and inverse maps of invertible NNs based on coupling layers are examined mathematically and experimentally. The paper also exhibits cases that expose non-invertibility in these architectures via gradient-based construction of adversarial inputs, as well as a decorrelation benchmark task, and shows that spectral normalization can be a remedy for stabilizing these flows.
SP:76b0a90c46bc2151088210ca47ea4761706f1716
On the Invertibility of Invertible Neural Networks
1 INTRODUCTION . Invertible neural networks ( INNs ) have become a standard building block in the deep learning toolkit . Invertibility is useful for training generative models with exact likelihoods ( Dinh et al. , 2014 ; 2017 ; Kingma & Dhariwal , 2018 ; Kingma et al. , 2016 ; Behrmann et al. , 2019 ; Chen et al. , 2019 ) , increasing posterior flexibility in VAEs ( Rezende & Mohamed , 2015 ; Tomczak & Welling , 2016 ; Papamakarios et al. , 2017 ) , learning transition operators in MCMC samplers ( Song et al. , 2017 ; Levy et al. , 2017 ) , computing memory-efficient gradients ( Gomez et al. , 2017 ; Donahue & Simonyan , 2019 ) , allowing for bi-directional training ( Grover et al. , 2018 ) , solving inverse problems ( Ardizzone et al. , 2019 ) and analysing adversarial robustness ( Jacobsen et al. , 2019 ) . The application space of INNs is rapidly growing and many approaches for constructing invertible architectures have been proposed . A common way to construct invertible networks is to use triangular coupling layers ( Dinh et al. , 2014 ; 2017 ; Kingma & Dhariwal , 2018 ) , where dimension partitioning is interleaved with ResNet-type computation . Another approach is to use various forms of masked convolutions , generalizing the dimension partitioning approach of coupling layers ( Song et al. , 2019 ; Hoogeboom et al. , 2019 ) . To avoid dimension partitioning altogether , multiple approaches based on efficiently estimating the log-determinant of the Jacobian , necessary for applying the change of variable formula , have been proposed to allow for free-form Jacobian structure ( Grathwohl et al. , 2019 ; Behrmann et al. , 2019 ; Chen et al. , 2019 ) . From a mathematical perspective , invertible architectures enable several unique guarantees like : • Enabling flexible approximation of non-linear diffeomorphisms ( Rezende & Mohamed , 2015 ; Dinh et al. , 2017 ; Kingma & Dhariwal , 2018 ; Chen et al. , 2019 ) • Memory-saving gradient computation ( Gomez et al. , 2017 ; Donahue & Simonyan , 2019 ) • Fast analytical invertibility ( Dinh et al. , 2014 ) • Guaranteed preservation of mutual information and exact access to invariants of deep networks ( Jacobsen et al. , 2018 ; 2019 ) . Despite the increased interest in invertible neural networks , little attention has been paid to guarantees on their numerical invertibility . Specifically , this means analyzing their ability to learn bi-Lipschitz neural networks , i.e . Lipschitz continuous neural networks with a bound on the Lipschitz constant of the forward and inverse mapping . While the stability analysis of neural networks has received significant attention e.g . due to adversarial examples ( Szegedy et al. , 2013 ) , the focus here is only on bounding Lipschitz constants of the forward mapping . However , bounding the Lipschitz constant of the inverse mapping is of major interest , e.g . when reconstructing inputs from noisy or imprecise features . In fact , analytical invertibility as provided by some invertible architectures does not necessarily imply numerical invertibility in practice . In this paper , we first discuss the relevance of controlling the bi-Lipschitz bounds of invertible networks . Afterwards we analyze Lipschitz bounds of commonly used invertible neural network building blocks . Our contributions are : • We argue for forward and inverse stability analysis as a unified viewpoint on invertible network ( non- ) invertibility . 
To this end , we derive Lipschitz bounds of commonly-used invertible building blocks for their forward and inverse maps . • We numerically monitor and detect ( non- ) invertibility for different practical tasks such as classification and generative modeling . • We show how this overlooked issue with non-invertibility can lead to questionable claims when computing exact likelihoods with the change-of-variable formula . • Finally , we study spectral normalization as a stabilizer for one of the most commonly-used families of INN architectures , namely additive coupling blocks . 2 BACKGROUND AND MOTIVATION . Invertible neural networks are bijective functions with a parametrized forward mapping $F_\theta : \mathbb{R}^d \to \mathbb{R}^d$ with $F_\theta : x \mapsto z$ , where $\theta \in \mathbb{R}^p$ defines the parameter vector . Additionally , they define an inverse mapping $F_\theta^{-1} : \mathbb{R}^d \to \mathbb{R}^d$ with $F_\theta^{-1} : z \mapsto x$ . This inverse can be given in closed-form ( analytical inverse , e.g . Dinh et al . ( 2017 ) ; Kingma & Dhariwal ( 2018 ) ) or approximated numerically ( numerical inverse , e.g . Behrmann et al . ( 2019 ) ; Song et al . ( 2019 ) ) . Before we discuss building blocks of invertible networks , we provide some background and motivation for studying forward and inverse stability . Definition 1 ( Lipschitz and bi-Lipschitz continuity ) . A function $F : ( \mathbb{R}^{d_1} , \| \cdot \| ) \to ( \mathbb{R}^{d_2} , \| \cdot \| )$ is called Lipschitz continuous if there exists a constant $L =: \mathrm{Lip} ( F )$ such that $\| F ( x_1 ) - F ( x_2 ) \| \leq L \| x_1 - x_2 \| , \; \forall x_1 , x_2 \in \mathbb{R}^{d_1}$ . If an inverse $F^{-1} : ( \mathbb{R}^{d_2} , \| \cdot \| ) \to ( \mathbb{R}^{d_1} , \| \cdot \| )$ and a constant $L^* =: \mathrm{Lip} ( F^{-1} )$ exist such that $\| F^{-1} ( y_1 ) - F^{-1} ( y_2 ) \| \leq L^* \| y_1 - y_2 \| , \; \forall y_1 , y_2 \in \mathbb{R}^{d_2}$ , then F is called bi-Lipschitz continuous . Remark 2 . We focus on invertible functions $F : ( \mathbb{R}^d , \| \cdot \|_2 ) \to ( \mathbb{R}^d , \| \cdot \|_2 )$ , i.e . functions where the domain and co-domain are of the same dimensionality d and the norm is given by the Euclidean norm . Lemma 3 . ( Rademacher ( Federer , 1969 , Theorem 3.1.6 ) ) If $F : \mathbb{R}^d \to \mathbb{R}^d$ is a locally Lipschitz continuous function ( i.e . a function whose restriction to a neighborhood around any point is Lipschitz ) , then F is differentiable almost everywhere . Moreover , if F is Lipschitz continuous , then $\mathrm{Lip} ( F ) = \sup_{x \in \mathbb{R}^d} \| J_F ( x ) \|_2$ , where $J_F ( x )$ is the Jacobian matrix of F at x and $\| J_F ( x ) \|_2$ denotes its spectral norm . Lipschitz bounds on the forward mapping are of crucial importance in several areas , including in adversarial example research ( Szegedy et al. , 2013 ) , to avoid exploding gradients , or the training of Wasserstein GANs ( Anil et al. , 2019 ) . The stability of the inverse , however , can have a similar impact . For instance , having a Lipschitz bound on the inverse may avoid vanishing gradients during training . Given that deep-learning computations are carried out with limited precision , imprecision is always introduced in both the forward and backward passes , i.e. , $z^\delta = F ( x ) + \delta$ and $\hat{x}^\delta = F^{-1} ( z^\delta )$ . Instability in either pass will aggravate this problem , and essentially make the invertible network numerically non-invertible . To summarize , this problem occurs in the following situations : • Numerical reconstruction of x , where features $z^\delta$ are inexact due to limited precision ( e.g . when computations are executed in single precision as common on modern hardware ) . • Reconstruction based on imprecise measurements from physical devices ( e.g . when using invertible networks for inverse problems ( Ardizzone et al. , 2019 ) ) .
• Numerical re-computation of intermediate activations of the neural network to allow for memory-efficient backpropagation ( Gomez et al. , 2017 ) . Furthermore , some computations are performed via numerical approximation , which in turn adds another source of imprecision that might be aggravated via instability . Examples include : • Numerical forward computation , as in Neural ODEs ( Chen et al. , 2018 ) ( a numerical solver is used to approximate the dynamics of the ODE ) . • Numerical inverse computation , e.g . via fixed-point iterations as in i-ResNets ( Behrmann et al. , 2019 ) or MintNet ( Song et al. , 2019 ) or via ODE-solvers for the backward dynamics as in Neural ODEs ( Chen et al. , 2018 ) . As an example of why bi-Lipschitz continuity is critical for numerical stability in invertible functions , let us consider the simple mappings $F_1 ( x ) = \log ( x )$ , $F_1^{-1} ( z ) = \exp ( z )$ , and $F_2 ( x ) = x$ , $F_2^{-1} ( z ) = z$ . Though both functions tend to infinity when $x \to \infty$ , $F_1$ is much less stable . Consider the introduction of numerical imprecision as $z^\delta = F_1 ( x ) + \delta$ , where δ denotes the introduced imprecision . Then this imprecision is magnified in the inverse pass as : $\| F_1^{-1} ( z ) - F_1^{-1} ( z^\delta ) \|_2^2 \approx \| \delta \, \frac{\partial F_1^{-1} ( z^\delta )}{\partial z^\delta} \|_2^2 = \| \delta \exp ( z^\delta ) \|_2^2$ . ( 1 ) A similar example can be constructed for both the forward and backward passes , which speaks to the importance of bi-Lipschitz continuity . For an additional discussion on the connection of Lipschitz constants and numerical errors , we refer to Appendix B . 3 STABILITY OF INVERTIBLE NEURAL NETWORKS . 3.1 LIPSCHITZ BOUNDS FOR BUILDING BLOCKS OF INVERTIBLE NETWORKS . Research on invertible networks has produced a large variety of architectural building blocks . Yet , the focus of prior work was on obtaining flexible architectures while maintaining invertibility guarantees . Here , we build on the work in ( Behrmann et al. , 2019 ) , where bi-Lipschitz bounds were proven for invertible ResNets , by deriving Lipschitz bounds on the forward and inverse mapping of common building blocks . Together with an overview of common invertible building blocks , we provide our main results in Table 1 . We chose these particular model classes in order to cover both coupling-based approaches and free-form approaches like Neural ODE ( Chen et al. , 2018 ) and i-ResNets ( Behrmann et al. , 2019 ) . The derivations of the bounds are given in Appendix A . Note that the bounds provide the worst-case stability and serve mainly as a guideline for future designs of invertible building blocks . 3.2 CONTROLLING STABILITY OF BUILDING BLOCKS . As shown in Table 1 , there are many factors that influence the stability of INNs . Of particular importance are the Lipschitz constants Lip ( g ) of the sub-network g for i-ResNets ( Behrmann et al. , 2019 ) and additive coupling blocks ( Dinh et al. , 2014 ) , and Lip ( s ) , Lip ( t ) for affine coupling blocks ( Dinh et al. , 2017 ) . Whereas computing the Lipschitz constants of neural networks is NP-hard ( Virmaux & Scaman , 2018 ) , there is a simple data-independent upper bound : $\mathrm{Lip} ( g ) \leq \prod_{i=1}^{L} \| A_i \|_2$ , for $g ( x ) = A_L \circ \phi \circ A_{L-1} \circ \cdots \circ A_2 \circ \phi \circ A_1$ , ( 2 ) where $A_i$ are linear layers , $\| \cdot \|_2$ is the spectral norm and φ a contractive activation function ( Lip ( φ ) ≤ 1 ) . The above bound was used by ( Behrmann et al. , 2019 ) in conjunction with spectral normalization ( Miyato et al. , 2018 ; Gouk et al. , 2018 ) to ensure a contractive residual block g.
In particular , this employs a normalization via : $\tilde{A} = \kappa \frac{A}{\hat{\sigma}_1}$ , with $\hat{\sigma}_1 \approx \sigma_1 = \| A \|_2$ ( approximated via the power method ) , where κ > 0 is a coefficient that sets the approximate upper bound on the spectral norm of each linear layer $A_i$ . Thus , by setting an appropriate coefficient κ depending on the targeted Lipschitz bound of the building block , this approach enables one to control both forward and inverse stability . Note that the above discussion can be generalized to other $\ell_p$-norms , see ( Chen et al. , 2019 ) . However , this is not sufficient when using affine coupling blocks because their bound on the Lipschitz constant holds only locally . In particular , it depends on the regions of the inputs x to the coupling block . While inputs to the first layer are usually bounded by the nature of the data , obtaining bounds for intermediate activations is less straightforward . One interesting avenue for future work could be local regularizers like gradient penalties ( Gulrajani et al. , 2017 ) , where spectral normalization could be used post-hoc to certify stability . Lastly , we use ActNorm ( Kingma & Dhariwal , 2018 ) in several architectures and avoid small diagonal terms which would yield large Lipschitz constants in the inverse ( see Table 1 ) by adding a positive constant . Further stabilization could be achieved via bounding the scaling .
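To make the stability question tangible, here is a small numpy sketch of an additive coupling block in the style of Dinh et al. (2014), whose analytical inverse is exact, together with a check of how a small perturbation of the features propagates through the inverse; the linear-plus-tanh sub-network, the dimensionality and the noise level are illustrative assumptions rather than the architectures evaluated in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                      # total dimension, split into two halves
W = rng.standard_normal((d // 2, d // 2))  # weights of the coupling sub-network t (illustrative)

def t(x1):                                 # bounded sub-network of the coupling block
    return np.tanh(x1 @ W.T)

def forward(x):                            # additive coupling: y1 = x1, y2 = x2 + t(x1)
    x1, x2 = x[: d // 2], x[d // 2 :]
    return np.concatenate([x1, x2 + t(x1)])

def inverse(y):                            # analytical inverse: x2 = y2 - t(y1)
    y1, y2 = y[: d // 2], y[d // 2 :]
    return np.concatenate([y1, y2 - t(y1)])

x = rng.standard_normal(d)
z = forward(x)
z_noisy = z + 1e-6 * rng.standard_normal(d)     # imprecision delta in the features
print(np.max(np.abs(inverse(z_noisy) - x)))     # reconstruction error stays on the order of delta
```

Because the sub-network here has a bounded Lipschitz constant, the perturbation is not amplified; monitoring this reconstruction error during training is one simple way to detect the numerical non-invertibility discussed above.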
This paper analyses the numerical invertibility of analytically invertible neural networks (INNs). Numerical invertibility depends on the Lipschitz constants of the respective transformations. The paper provides Lipschitz bounds on the components of the building blocks of certain INN architectures, which can be used to guarantee numerical stability. Furthermore, the paper shows empirically that numerical invertibility can indeed be a problem in practice.
SP:76b0a90c46bc2151088210ca47ea4761706f1716
Learning Algorithmic Solutions to Symbolic Planning Tasks with a Neural Computer
1 INTRODUCTION . Transferring solution strategies from one problem to another is a crucial ability for intelligent behavior ( Silver et al. , 2013 ) . Current learning systems can learn a multitude of specialized tasks , but extracting the underlying structure of the solution for effective transfer is an open research problem ( Taylor & Stone , 2009 ) . Abstraction is key to enable these transfers ( Tenenbaum et al. , 2011 ) and the concept of algorithms in computer science is an ideal example of such transferable abstract strategies . An algorithm is a sequence of instructions which solves a given problem when executed , independent of the specific instantiation of the problem . For example , consider the task of sorting a set of objects . The algorithmic solution , specified as the sequence of instructions , is able to sort any number of arbitrary classes of objects in any order , e.g. , toys by color , waste by type , or numbers by value , by using the same sequence of instructions , as long as the features and compare operations defining the order are specified . Learning such structured , abstract strategies enables the transfer to new domains and representations ( Tenenbaum et al. , 2011 ) . Moreover , abstract strategies as algorithms have built-in generalization capabilities to new task configurations and complexities . Here , we present a novel architecture for learning abstract strategies in the form of algorithmic solutions . Based on the Differential Neural Computer ( Graves et al. , 2016 ) and inspired by the von Neumann and Harvard architectures of modern computers , the architecture's modular structure allows for straightforward transfer by reusing learned modules instead of relearning , allows prior knowledge to be included , and allows the behavior of the modules to be examined and interpreted . Moreover , the individual modules of the architecture can be learned with different learning settings and strategies – or be hardcoded if applicable – allowing the overall task to be split into easier subproblems , contrary to the end-to-end learning philosophy of most deep learning architectures . Building on memory-augmented neural networks ( Graves et al. , 2016 ; Neelakantan et al. , 2016 ; Weston et al. , 2015 ; Joulin & Mikolov , 2015 ) , we propose a flexible architecture for learning abstract strategies as algorithmic solutions and show the learning and transfer of such strategies in symbolic planning tasks . 1.1 THE PROBLEM OF LEARNING ALGORITHMIC SOLUTIONS . We investigate the problem of learning algorithmic solutions which are characterized by three requirements : R1 – generalization to different and unseen task configurations and task complexities , R2 – independence of the data representation , and R3 – independence of the task domain . Picking up the sorting algorithm example again , R1 represents the ability to sort lists of arbitrary length and initial order , while R2 and R3 represent the abstract nature of the solution . This abstraction enables the algorithm , for example , to sort a list of binary numbers while being trained only on hexadecimal numbers ( R2 ) . Furthermore , the algorithm trained on numbers is able to sort lists of strings ( R3 ) . If R1 – R3 are fulfilled , the algorithmic solution does not need to be retrained or adapted to solve unforeseen task instantiations – only the data-specific operations need to be adjusted . Research on learning algorithms typically focuses on identifying algorithmically generated patterns or solving algorithmic problems ( Neelakantan et al.
, 2016 ; Zaremba & Sutskever , 2014 ; Kaiser & Sutskever , 2016 ; Kaiser & Bengio , 2016 ) , less on finding algorithmic solutions ( Joulin & Mikolov , 2015 ; Zaremba et al. , 2016 ) fulfilling the three discussed requirements R1 – R3 . While R1 is typically tackled , as it represents the overall goal of generalization in machine learning , the abstraction abilities from R2 and R3 are missing . Additionally , most algorithms require a form of feedback , using computed intermediate results from one computational step in subsequent steps , and a variable number of computational steps to solve a problem instance . Thus , it is necessary to be able to cope with varying numbers of steps and determining when to stop , in contrast to using a fixed number of steps ( Neelakantan et al. , 2016 ; Sukhbaatar et al. , 2015 ) , making the learning problem more challenging in addition . A crucial feature for algorithms is the ability to save and retrieve data . Therefore , augmenting neural networks with different forms of external memory , e.g. , matrices , stacks , tapes or grids , to increase their expressiveness and to separate computation from memory , especially in long time dependencies setups , is an active research direction ( Graves et al. , 2016 ; Weston et al. , 2015 ; Joulin & Mikolov , 2015 ; Zaremba et al. , 2016 ; Sukhbaatar et al. , 2015 ; Kumar et al. , 2016 ; Greve et al. , 2016 ) with earlier work in the field of grammar learning ( Das et al. , 1992 ; Mozer & Das , 1993 ; Zeng et al. , 1994 ) . These memory-augmented networks improve performance on a variety of tasks like reasoning and inference in natural language ( Graves et al. , 2016 ; Weston et al. , 2015 ; Sukhbaatar et al. , 2015 ; Kumar et al. , 2016 ) , learning of simple algorithms and algorithmic patterns ( Joulin & Mikolov , 2015 ; Zaremba et al. , 2016 ; Graves et al. , 2014 ) , and navigation tasks ( Wayne et al. , 2018 ) . The contribution of this paper is a novel modular architecture building on a memory-augmented neural network ( DNC ( Graves et al. , 2016 ) ) for learning algorithmic solutions in a reinforcement learning setting . We show that the learned solutions fulfill all three requirements R1 – R3 for an algorithmic solution and the architecture can process a variable number of computational steps . 2 A NEURAL COMPUTER ARCHITECTURE FOR ALGORITHMIC SOLUTIONS . In this section , we introduce the novel modular architecture for learning algorithmic solutions , shown in Figure 1 . The architecture builds on the Differential Neural Computer ( DNC ) ( Graves et al. , 2016 ) and its modular design is inspired by modern computer architectures , related to ( Neelakantan et al. , 2016 ; Weston et al. , 2015 ) . The DNC augments a controller neural network with a differentiable autoassociative external memory to separate computation from memory , as memorization is usually done in the networks weights . The controller network learns to write and read information from that memory by emitting an interface vector which is mapped onto different vectors by linear transformations . These vectors control the read and write operations of the memory , called read and write heads . For writing and reading , multiple attention mechanisms are employed , including content lookup , temporal linkage and memory allocation . Due to the design of the interface and the attention mechanisms , the DNC is independent of the memory size and fully differentiable , allowing gradient-based end-to-end learning . Our architecture . 
In order to learn algorithmic solutions , the computations need to be decoupled from the specific data and task . To enable such data and task independent computations , we propose multiple alterations and extensions to the DNC , inspired by modern computer architectures . First , information flow is divided into two streams , data and control . This separation allows us to disentangle data-representation-dependent manipulations from data-independent algorithmic instructions . Due to this separation , the algorithmic modules need to be extended to include two memories , a data and a computational memory . The data memory stores and retrieves the data stream , whereas the computational memory works on information generated by the control signal flow through the learnable controller and memory transformations . The two memories are coupled , operating on the same locations , and these locations are determined by the computational memory , and hence by the control stream . As with the DNC , multiple read and write heads can be used . In our experiments , one read and two write heads are used , with one write head constrained to the previously read location . In contrast to the DNC , but in line with the computer architecture-inspired design and the goal of learning deterministic algorithms , writing and reading use hard attention instead of soft attention . Hard attention means that only one memory location can be written to and read from ( unique addresses ) , instead of a weighted average over all locations as with soft attention . Such hard attention was shown to be beneficial for generalization ( Greve et al. , 2016 ) . We also employed an additional attention mechanism for reading , called usage linkage , similar to the temporal linkage of the DNC , but instead of capturing temporal relations , it captures usage relations , i.e. , the relation between the written memory location and the previously read location . With both linkages in two directions and the content lookup , the model has five attention mechanisms for reading . While the final read memory location is determined by a weighted combination of these attentions ( see attention in Figure 5 in the Appendix ) , each attention mechanism itself uses hard decisions , returning only one memory location . See Appendix C for the effect of the introduced modifications and extensions . For computing the actual solution , operating only on the control stream is not enough , as the model still needs to manipulate the data . Therefore , we added several modules operating on the data stream , inspired by the architecture of computers . In particular , an Input , TransformD , ALU ( arithmetic logic unit ) and Output module were added ( more details in Section 2.2 ) . These modules manipulate the data , steered by the algorithmic modules . The full architecture is shown in Figure 1 . As algorithms typically involve recursive or iterative data manipulation , the model receives its own output as input in the next computation step , making the whole architecture an output-input model . With all aforementioned extensions , algorithmic solutions fulfilling R1 – R3 can be learned . 2.1 THE ALGORITHMIC MODULES . The algorithmic modules consist of the Controller , the Memory and the TransformC module and form the core of the model . These modules learn the algorithmic solution operating on the control stream .
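As an illustration of the hard content-based addressing described above, the following numpy sketch contrasts a soft, DNC-style weighted read with a hard read that returns exactly one memory location; the memory size, key dimensionality and cosine similarity measure are illustrative assumptions.

```python
import numpy as np

def content_scores(memory, key):
    """Cosine similarity between a query key and every memory row."""
    m = memory / (np.linalg.norm(memory, axis=1, keepdims=True) + 1e-8)
    k = key / (np.linalg.norm(key) + 1e-8)
    return m @ k

def soft_read(memory, key):               # DNC-style: weighted average over all rows
    w = np.exp(content_scores(memory, key))
    w /= w.sum()
    return w @ memory

def hard_read(memory, key):               # this architecture: exactly one row is read
    return memory[int(np.argmax(content_scores(memory, key)))]

memory = np.random.randn(16, 8)           # 16 slots, 8-dimensional contents (illustrative)
key = np.random.randn(8)
print(soft_read(memory, key).shape, hard_read(memory, key).shape)
```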
With t as the current computational step and c as the control stream ( see Figure 1 ) , the input-output of the modules are $C ( c_{i,t} , c_{m,t-1} , c_{f,t-1} , c_{a,t-1} , c_{o,t-1} ) \mapsto c_{c,t}$ , $M ( c_{i,t} , c_{c,t} ) \mapsto c_{m,t} , d_{m,t}$ and $TC ( c_{c,t} , c_{m,t} , c_{i,t} ) \mapsto c_{f,t}$ . The algorithmic modules are based on the DNC with the alterations and extensions described before . Next we discuss how these algorithmic modules can be learned before looking into the data-dependent modules . 2.1.1 LEARNING OF THE ALGORITHMIC MODULES . Learning the algorithmic modules , and hence the algorithmic solution , is done in a reinforcement learning setting using Natural Evolution Strategies ( NES ) ( Wierstra et al. , 2014 ) . NES is a blackbox optimizer that does not require differentiable models , giving more freedom to the model design , e.g. , the hard attention mechanisms are not differentiable . NES updates a search distribution of the parameters to be learned by following the natural gradient towards regions of higher fitness using a population of offspring ( altered parameters ) for exploration . Let θ be the parameters to be learned . Using an isotropic multivariate Gaussian search distribution with fixed variance $\sigma^2$ , the stochastic natural gradient at iteration t is given by $\nabla_{\theta_t} \mathbb{E}_{\epsilon \sim \mathcal{N} ( 0 , I )} [ u ( \theta_t + \sigma \epsilon ) ] \approx \frac{1}{P \sigma} \sum_{i=1}^{P} u ( \theta_t^i ) \epsilon_i$ , where P is the population size , $\theta_t^i = \theta_t + \sigma \epsilon_i$ are the perturbed offspring parameters , and $u ( \cdot )$ is the rank-transformed fitness ( Wierstra et al. , 2014 ) . The parameters are updated by $\theta_{t+1} = \theta_t + \frac{\alpha}{P \sigma} \sum_{i=1}^{P} u ( \theta_t^i ) \epsilon_i$ , with learning rate α . Recent research showed that NES and related approaches like Random Search ( Mania et al. , 2018 ) or NEAT ( Stanley & Miikkulainen , 2002 ) are powerful alternatives in reinforcement learning . They are easier to implement and scale , perform better with sparse rewards and credit assignment over long time scales , have fewer hyperparameters ( Salimans et al. , 2017 ) and were used to train memory-augmented networks ( Greve et al. , 2016 ; Merrild et al. , 2018 ) . For robustness and learning efficiency , weight decay for regularization ( Krogh & Hertz , 1992 ) and automatic restarts of runs stuck in local optima are used as in ( Wierstra et al. , 2014 ) . This restarting can be seen as another level of evolution , where some lineages die out . Another way of dealing with early converged or stuck lineages is to add intrinsic motivation signals like novelty , which help the search to be attracted by other local optima , as in NSRA-ES ( Conti et al. , 2018 ) . In the experiments however , we found that within our setting , restarting – or having an additional survival of the fittest on the lineages – was more effective , see Appendix C for a comparison . The algorithmic solutions are learned in a curriculum learning setup ( Bengio et al. , 2009 ) with sampling from old lessons ( Zaremba & Sutskever , 2014 ) to prevent unlearning and to foster generalization . Furthermore , we created bad memories , a learning from mistakes strategy , similar to the idea of AdaBoost ( Freund & Schapire , 1997 ) , which samples previously failed tasks to encourage focusing on the hard tasks . This can also be seen as a form of experience replay ( Mnih et al. , 2015 ; Lin , 1992 ) , but only using the task configurations , the initial input to the model , not the full generated sequence . Bad memories were developed for training the data-dependent modules to ensure their robustness and 100 % accuracy , which is crucial to learn algorithmic solutions .
If the individual modules do not have 100 % accuracy , no stable algorithmic solution can be learned even if the algorithmic modules are doing the correct computations . For example , if one module has an accuracy of 99 % , the 1 % error prevents learning an algorithmic solution that always works . This problem is further compounded because the proposed model is an output-input architecture that works over multiple computation steps using its own output as the new input – meaning the overall accuracy drops to 36.6 % for 100 computation steps ( since $0.99^{100} \approx 0.366$ ) . Therefore , using the bad memories strategy , and thus focusing on the mistakes , helps significantly in achieving robust results when learning the modules , enabling the learning of algorithmic solutions . While the bad memories strategy was crucial to achieve 100 % robustness when training the data-dependent modules , the effect on learning the algorithmic solutions was less significant ( see Appendix C for an evaluation ) .
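For concreteness, the NES update from the equations above can be sketched in a few lines of numpy with a simple centred-rank fitness transformation; the toy objective and hyperparameters are illustrative, and details such as weight decay, restarts, bad memories and the curriculum are omitted.

```python
import numpy as np

def centered_ranks(fitness):
    """Rank-transform raw fitness values to the range [-0.5, 0.5]."""
    ranks = np.empty_like(fitness)
    ranks[np.argsort(fitness)] = np.arange(len(fitness))
    return ranks / (len(fitness) - 1) - 0.5

def nes_step(theta, fitness_fn, pop_size=50, sigma=0.1, alpha=0.05, rng=None):
    rng = rng or np.random.default_rng()
    eps = rng.standard_normal((pop_size, theta.size))           # offspring perturbations
    fitness = np.array([fitness_fn(theta + sigma * e) for e in eps])
    u = centered_ranks(fitness)
    grad = (u[:, None] * eps).sum(axis=0) / (pop_size * sigma)  # stochastic natural gradient
    return theta + alpha * grad                                 # parameter update

# Toy objective: maximise -||theta - 3||^2 (optimum at theta = 3).
theta = np.zeros(5)
for _ in range(200):
    theta = nes_step(theta, lambda p: -np.sum((p - 3.0) ** 2))
print(np.round(theta, 2))                                       # should be close to 3
```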
This paper introduces a neural controller architecture for learning abstract algorithmic solutions to search and planning problems. By combining abstract and domain-specific components, the model is able to mimic two classical algorithms quite closely across several domains. The precision of the learning is very high, as verified by generalization to substantially larger problem sizes and different domains. One notable conclusion is that Evolutionary Strategies are able to learn algorithmic solutions whose precision is on par with deterministic algorithms. The method of triggering learning based on curriculum level performance is a notable feature that nicely couples generalization progress with learning, and yields insightful learning curves.
SP:a1d1e8d13b1df53435caa45e5fed856fcdd1b6ec
Learning Algorithmic Solutions to Symbolic Planning Tasks with a Neural Computer
1 INTRODUCTION . Transferring solution strategies from one problem to another is a crucial ability for intelligent behavior ( Silver et al. , 2013 ) . Current learning systems can learn a multitude of specialized tasks , but extracting the underlying structure of the solution for effective transfer is an open research problem ( Taylor & Stone , 2009 ) . Abstraction is key to enable these transfers ( Tenenbaum et al. , 2011 ) and the concept of algorithms in computer science is an ideal example of such transferable abstract strategies . An algorithm is a sequence of instructions which solves a given problem when executed , independent of the specific instantiation of the problem . For example , consider the task of sorting a set of objects . The algorithmic solution , specified as the sequence of instructions , is able to sort any number of arbitrary classes of objects in any order , e.g. , toys by color , waste by type , or numbers by value , by using the same sequence of instructions , as long as the features and compare operations defining the order are specified . Learning such structured , abstract strategies enables the transfer to new domains and representations ( Tenenbaum et al. , 2011 ) . Moreover , abstract strategies as algorithms have built-in generalization capabilities to new task configurations and complexities . Here , we present a novel architecture for learning abstract strategies in the form of algorithmic solutions . Based on the Differential Neural Computer ( Graves et al. , 2016 ) and inspired by the von Neumann and Harvard architectures of modern computers , the architecture's modular structure allows for straightforward transfer by reusing learned modules instead of relearning , allows prior knowledge to be included , and allows the behavior of the modules to be examined and interpreted . Moreover , the individual modules of the architecture can be learned with different learning settings and strategies – or be hardcoded if applicable – allowing the overall task to be split into easier subproblems , contrary to the end-to-end learning philosophy of most deep learning architectures . Building on memory-augmented neural networks ( Graves et al. , 2016 ; Neelakantan et al. , 2016 ; Weston et al. , 2015 ; Joulin & Mikolov , 2015 ) , we propose a flexible architecture for learning abstract strategies as algorithmic solutions and show the learning and transfer of such strategies in symbolic planning tasks . 1.1 THE PROBLEM OF LEARNING ALGORITHMIC SOLUTIONS . We investigate the problem of learning algorithmic solutions which are characterized by three requirements : R1 – generalization to different and unseen task configurations and task complexities , R2 – independence of the data representation , and R3 – independence of the task domain . Picking up the sorting algorithm example again , R1 represents the ability to sort lists of arbitrary length and initial order , while R2 and R3 represent the abstract nature of the solution . This abstraction enables the algorithm , for example , to sort a list of binary numbers while being trained only on hexadecimal numbers ( R2 ) . Furthermore , the algorithm trained on numbers is able to sort lists of strings ( R3 ) . If R1 – R3 are fulfilled , the algorithmic solution does not need to be retrained or adapted to solve unforeseen task instantiations – only the data-specific operations need to be adjusted . Research on learning algorithms typically focuses on identifying algorithmically generated patterns or solving algorithmic problems ( Neelakantan et al.
, 2016 ; Zaremba & Sutskever , 2014 ; Kaiser & Sutskever , 2016 ; Kaiser & Bengio , 2016 ) , less on finding algorithmic solutions ( Joulin & Mikolov , 2015 ; Zaremba et al. , 2016 ) fulfilling the three discussed requirements R1 – R3 . While R1 is typically tackled , as it represents the overall goal of generalization in machine learning , the abstraction abilities from R2 and R3 are missing . Additionally , most algorithms require a form of feedback , using computed intermediate results from one computational step in subsequent steps , and a variable number of computational steps to solve a problem instance . Thus , it is necessary to be able to cope with varying numbers of steps and determining when to stop , in contrast to using a fixed number of steps ( Neelakantan et al. , 2016 ; Sukhbaatar et al. , 2015 ) , making the learning problem more challenging in addition . A crucial feature for algorithms is the ability to save and retrieve data . Therefore , augmenting neural networks with different forms of external memory , e.g. , matrices , stacks , tapes or grids , to increase their expressiveness and to separate computation from memory , especially in long time dependencies setups , is an active research direction ( Graves et al. , 2016 ; Weston et al. , 2015 ; Joulin & Mikolov , 2015 ; Zaremba et al. , 2016 ; Sukhbaatar et al. , 2015 ; Kumar et al. , 2016 ; Greve et al. , 2016 ) with earlier work in the field of grammar learning ( Das et al. , 1992 ; Mozer & Das , 1993 ; Zeng et al. , 1994 ) . These memory-augmented networks improve performance on a variety of tasks like reasoning and inference in natural language ( Graves et al. , 2016 ; Weston et al. , 2015 ; Sukhbaatar et al. , 2015 ; Kumar et al. , 2016 ) , learning of simple algorithms and algorithmic patterns ( Joulin & Mikolov , 2015 ; Zaremba et al. , 2016 ; Graves et al. , 2014 ) , and navigation tasks ( Wayne et al. , 2018 ) . The contribution of this paper is a novel modular architecture building on a memory-augmented neural network ( DNC ( Graves et al. , 2016 ) ) for learning algorithmic solutions in a reinforcement learning setting . We show that the learned solutions fulfill all three requirements R1 – R3 for an algorithmic solution and the architecture can process a variable number of computational steps . 2 A NEURAL COMPUTER ARCHITECTURE FOR ALGORITHMIC SOLUTIONS . In this section , we introduce the novel modular architecture for learning algorithmic solutions , shown in Figure 1 . The architecture builds on the Differential Neural Computer ( DNC ) ( Graves et al. , 2016 ) and its modular design is inspired by modern computer architectures , related to ( Neelakantan et al. , 2016 ; Weston et al. , 2015 ) . The DNC augments a controller neural network with a differentiable autoassociative external memory to separate computation from memory , as memorization is usually done in the networks weights . The controller network learns to write and read information from that memory by emitting an interface vector which is mapped onto different vectors by linear transformations . These vectors control the read and write operations of the memory , called read and write heads . For writing and reading , multiple attention mechanisms are employed , including content lookup , temporal linkage and memory allocation . Due to the design of the interface and the attention mechanisms , the DNC is independent of the memory size and fully differentiable , allowing gradient-based end-to-end learning . Our architecture . 
In order to learn algorithmic solutions , the computations need to be decoupled from the specific data and task . To enable such data and task independent computations , we propose multiple alterations and extensions to the DNC , inspired by modern computer architectures . First , information flow is divided into two streams , data and control . This separation allows to disentangle data representation dependent manipulations from data independent algorithmic instructions . Due to this separation , the algorithmic modules need to be extended to include two memories , a data and a computational memory . The data memory stores and retrieves the data stream , whereas the computational memory works on information generated by the control signal flow through the learnable controller and memory transformations . The two memories are coupled , operating on the same locations , and these locations are determined by the computational memory , and hence by the control stream . As with the DNC , multiple read and write heads can be used . In our experiments , one read and two write heads are used , with one write head constrained to the previously read location . In contrast to the DNC , but in line with the computer architecture-inspired design and the goal of learning deterministic algorithms , writing and reading uses hard attentions instead of soft attentions . Hard attention means that only one memory location can be written to and read from ( unique addresses ) , instead of an weighted average over all locations as with soft attentions . Such hard attention was shown to be beneficial for generalization ( Greve et al. , 2016 ) . We also employed an additional attention mechanism for reading , called usage linkage , similar to the temporal linkage of the DNC , but instead of capturing temporal relations , it captures usage relations , i.e. , the relation between written memory location and previously read location . With both linkages in two directions and the content look up , the model has five attention mechanisms for reading . While the final read memory location is determined by a weighted combination of these attentions ( see attention in Figure 5 in the Appendix ) , each attention mechanism itself uses hard decisions , returning only one memory location . See Appendix C for the effect of the introduced modifications and extensions . For computing the actual solution , operating only on the control stream is not enough , as the model still needs to manipulate the data . Therefore , we added several modules operating on the data stream , inspired by the architecture of computers . In particular , an Input , TransformD , ALU ( arithmetic logic unit ) and Output module were added ( more details in Section 2.2 ) . These modules manipulate the data , steered by the algorithmic modules . The full architecture is shown in Figure 1 . As algorithms typically involve recursive or iterative data manipulation , the model receives its own output as input in the next computation step , making the whole architecture an output-input model . With all aforementioned extensions , algorithmic solutions fulfilling R1 – R3 can be learned . 2.1 THE ALGORITHMIC MODULES . The algorithmic modules consist of the Controller , the Memory and the TransformC module and build the core of the model . These modules learn the algorithmic solution operating on the control stream . 
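Before the algorithmic modules are described in detail, the difference between the DNC's soft read and the hard read used in this architecture can be made concrete with a few lines of code. The sketch below is illustrative only: the memory contents, scores and function names are invented for the example and are not taken from the paper's implementation.

import numpy as np

def soft_read(memory, scores):
    # DNC-style soft read: a softmax-weighted average over all memory rows.
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ memory

def hard_read(memory, scores):
    # Hard read as used in this architecture: commit to the single
    # best-scoring location (a unique address), which was reported to
    # help generalization (Greve et al., 2016).
    return memory[int(np.argmax(scores))]

memory = np.array([[0.1, 0.9],   # three memory locations of width two
                   [0.8, 0.2],
                   [0.4, 0.4]])
scores = np.array([0.2, 2.0, 0.5])  # attention logits over the locations
print(soft_read(memory, scores))    # a blend of all three rows
print(hard_read(memory, scores))    # exactly row 1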
With $t$ as the current computational step and $c$ as the control stream ( see Figure 1 ) , the input-output mappings of the modules are $C(c_{i,t}, c_{m,t-1}, c_{f,t-1}, c_{a,t-1}, c_{o,t-1}) \mapsto c_{c,t}$ , $M(c_{i,t}, c_{c,t}) \mapsto c_{m,t}, d_{m,t}$ and $TC(c_{c,t}, c_{m,t}, c_{i,t}) \mapsto c_{f,t}$ . The algorithmic modules are based on the DNC with the alterations and extensions described before . Next we discuss how these algorithmic modules can be learned before looking into the data-dependent modules . 2.1.1 LEARNING OF THE ALGORITHMIC MODULES . Learning the algorithmic modules , and hence the algorithmic solution , is done in a reinforcement learning setting using Natural Evolution Strategies ( NES ) ( Wierstra et al. , 2014 ) . NES is a blackbox optimizer that does not require differentiable models , giving more freedom to the model design , e.g. , the hard attention mechanisms are not differentiable . NES updates a search distribution over the parameters to be learned by following the natural gradient towards regions of higher fitness , using a population of offspring ( altered parameters ) for exploration . Let $\theta$ be the parameters to be learned . Using an isotropic multivariate Gaussian search distribution with fixed variance $\sigma^2$ , the stochastic natural gradient at iteration $t$ is given by $\nabla_{\theta_t} \mathbb{E}_{\epsilon \sim \mathcal{N}(0,I)}\left[ u(\theta_t + \sigma\epsilon) \right] \approx \frac{1}{P\sigma} \sum_{i=1}^{P} u(\theta_t^i)\,\epsilon_i$ , where $P$ is the population size , $\theta_t^i = \theta_t + \sigma\epsilon_i$ is the $i$-th offspring , and $u(\cdot)$ is the rank transformed fitness ( Wierstra et al. , 2014 ) . The parameters are updated by $\theta_{t+1} = \theta_t + \frac{\alpha}{P\sigma} \sum_{i=1}^{P} u(\theta_t^i)\,\epsilon_i$ , with learning rate $\alpha$ . Recent research showed that NES and related approaches like Random Search ( Mania et al. , 2018 ) or NEAT ( Stanley & Miikkulainen , 2002 ) are powerful alternatives in reinforcement learning . They are easier to implement and scale , perform better with sparse rewards and credit assignment over long time scales , have fewer hyperparameters ( Salimans et al. , 2017 ) and were used to train memory-augmented networks ( Greve et al. , 2016 ; Merrild et al. , 2018 ) . For robustness and learning efficiency , weight decay for regularization ( Krogh & Hertz , 1992 ) and automatic restarts of runs stuck in local optima are used as in ( Wierstra et al. , 2014 ) . This restarting can be seen as another level of evolution , where some lineages die out . Another way of dealing with early converged or stuck lineages is to add intrinsic motivation signals like novelty , which help the search escape towards other local optima , as in NSRA-ES ( Conti et al. , 2018 ) . In the experiments however , we found that within our setting , restarting – or having an additional survival of the fittest on the lineages – was more effective , see Appendix C for a comparison . The algorithmic solutions are learned in a curriculum learning setup ( Bengio et al. , 2009 ) with sampling from old lessons ( Zaremba & Sutskever , 2014 ) to prevent unlearning and to foster generalization . Furthermore , we created bad memories , a learning from mistakes strategy similar to the idea of AdaBoost ( Freund & Schapire , 1997 ) , which samples previously failed tasks to encourage focusing on the hard tasks . This can also be seen as a form of experience replay ( Mnih et al. , 2015 ; Lin , 1992 ) , but only using the task configurations , the initial input to the model , not the full generated sequence . Bad memories were developed for training the data-dependent modules to ensure their robustness and 100 % accuracy , which is crucial to learn algorithmic solutions .
If the individual modules do not have 100 % accuracy , no stable algorithmic solution can be learned even if the algorithmic modules are doing the correct computations . For example , if one module has an accuracy of 99 % , the 1 % error prevents learning an algorithmic solution that always works . This problem is further amplified because the proposed model is an output-input architecture that works over multiple computation steps using its own output as the new input – meaning the overall accuracy drops to 36.6 % for 100 computation steps ( $0.99^{100} \approx 0.366$ ) . Therefore , using the bad memories strategy , and thus focusing on the mistakes , helps significantly in achieving robust results when learning the modules , enabling the learning of algorithmic solutions . While the bad memories strategy was crucial to achieve 100 % robustness when training the data-dependent modules , the effect on learning the algorithmic solutions was less significant ( see Appendix C for an evaluation ) .
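To make the NES update from Section 2.1.1 concrete, the following is a minimal sketch of a single update step under the stated assumptions (isotropic Gaussian search distribution with fixed variance, rank-transformed fitness). The function names, hyperparameter values and the toy fitness function are illustrative rather than taken from the paper's implementation, and the weight decay, restarts, curriculum and bad-memories components described above are omitted.

import numpy as np

def rank_transform(fitness):
    # Fitness shaping: replace raw fitness values by centred ranks in
    # [-0.5, 0.5], a common stand-in for the rank-based utilities of
    # Wierstra et al. (2014).
    ranks = np.empty(len(fitness))
    ranks[np.argsort(fitness)] = np.arange(len(fitness))
    return ranks / (len(fitness) - 1) - 0.5

def nes_step(theta, fitness_fn, sigma=0.05, alpha=0.01, population=50, rng=None):
    # One NES update with an isotropic Gaussian search distribution of fixed
    # variance sigma^2: sample perturbations eps_i, evaluate the offspring
    # theta + sigma * eps_i, and step along (alpha / (P * sigma)) * sum_i u_i * eps_i.
    if rng is None:
        rng = np.random.default_rng()
    eps = rng.standard_normal((population, theta.size))
    fitness = np.array([fitness_fn(theta + sigma * e) for e in eps])
    u = rank_transform(fitness)
    grad = (u[:, None] * eps).sum(axis=0) / (population * sigma)
    return theta + alpha * grad

# Toy usage: move theta towards a fixed target by maximizing the negative
# squared distance (an invented fitness, not the paper's task fitness).
rng = np.random.default_rng(0)
target = np.array([1.0, -2.0, 0.5])
theta = np.zeros(3)
for _ in range(500):
    theta = nes_step(theta, lambda th: -np.sum((th - target) ** 2), rng=rng)
print(theta)  # should move close to `target`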
This paper proposes modifications and modular extensions to the differential neural computer (DNC). The approach is nicely modular, decoupling the data modules from the algorithmic modules. This enables the authors to pretrain the data modules with supervised learning and to train the small algorithmic modules with natural evolution strategies (NES). NES is a global optimization method (which may be understood as policy gradients where the parameters of the neural policy are the actions) and consequently this enables the authors to use discrete selection mechanisms instead of the soft attention mechanisms of the DNC.
SP:a1d1e8d13b1df53435caa45e5fed856fcdd1b6ec
Few-Shot Regression via Learning Sparsifying Basis Functions
Recent few-shot learning algorithms have enabled models to quickly adapt to new tasks based on only a few training samples . Previous few-shot learning works have mainly focused on classification and reinforcement learning . In this paper , we propose a method that focuses on regression tasks . Our model is based on the idea that the degree of freedom of the unknown function can be significantly reduced if it is represented as a linear combination of a set of sparsifying basis functions . This enables using a few labelled samples to learn a good approximation of the entire function . We design a Basis Function Learner network to encode the basis functions for a task distribution , and a Weights Generator to generate the weight vector for a novel task . We show that our model outperforms current state of the art meta-learning methods in various regression tasks . 1 INTRODUCTION . Regression deals with the problem of learning a model relating a set of inputs to a set of outputs . The learned model can be thought as function y = F ( x ) that gives a prediction y ∈ Rdy given input x ∈ Rdx where dy and dx are dimensions of the output and input respectively . Typically , a regression model is trained on a large number of data points to be able to provide accurate predictions for new inputs . Recently , there have been a surge in popularity on few-shot learning methods ( Vinyals et al. , 2016 ; Koch et al. , 2015 ; Gidaris & Komodakis , 2018 ) . Few-shot learning methods require only a few examples from each task to be able to quickly adapt and perform well on a new task . These few-shot learning methods in essence are learning to learn i.e . the model learns to quickly adapt itself to new tasks rather than just learning to give the correct prediction for a particular input sample . In this work , we propose a few shot learning model that targets few-shot regression tasks . Our model takes inspiration from the idea that the degree of freedom of F ( x ) can be significantly reduced when it is modeled a linear combination of sparsifying basis functions . Thus , with a few samples , we can estimate F ( x ) . The two primary components of our model are ( i ) the Basis Function Learner network which encodes the basis functions for the distribution of tasks , and ( ii ) the Weights Generator network which produces the appropriate weights given a few labelled samples . We evaluate our model on the sinusoidal regression tasks and compare the performance to several meta-learning algorithms . We also evaluate our model on other regression tasks , namely the 1D heat equation tasks modeled by partial differential equations and the 2D Gaussian distribution tasks . Furthermore , we evaluate our model on image completion as a 2D regression problem on the MNIST and CelebA data-sets , using only a small subset of known pixel values . To summarize , our contributions for this paper are : • We propose to address few shot regression by linear combination of a set of sparsifying basis functions . • We propose to learn these ( continuous ) sparsifying basis functions from data . Traditionally , basis functions are hand-crafted ( e.g . Fourier basis ) . • We perform experiments to evaluate our approach using sinusoidal , heat equation , 2D Gaussian tasks and MNIST/CelebA image completion tasks . 2 RELATED WORK . Regression problems has long been a topic of study in the machine learning and signal processing community ( Myers & Myers , 1990 ; Specht , 1991 ) . 
Though similar to classification , regression estimates one or multiple scalar values and is usually thought of as a single task problem . A single model is trained to only perform regression on only one task . Our model instead reformulates the regression problem as a few-shot learning problem , allowing for our model to be able to perform regressions of tasks sampled from the same task distribution . The success achieved by deep neural networks heavily relies on a large amount of data , especially labelled ones . As labelling data is time-consuming and labor-intensive , learning from limited labelled data is drawing more and more attention . A prominent approach is meta learning . Meta learning , also referred as learning to learn , aims at learning an adaptive model across different tasks . Meta learning has shown potential in style transfer ( Zhang et al. , 2019 ) , visual navigation ( Wortsman et al. , 2018 ) , etc . Meta learning has also been applied to few-shot learning problems , which concerns models that can learn from prior experiences to adapt to new tasks . Lake et al . ( 2011 ) proposed the one-shot classification problem and introduced the Omniglot data set as a few-shot classification data set , similar to MNIST ( LeCun , 1998 ) for traditional classification . Since then , there has been a surge of meta learning methods striving to solve few-shot problems . Some meta learning approaches learn a similarity metric ( Snell et al. , 2017 ; Vinyals et al. , 2016 ; Koch et al. , 2015 ) between new test examples with few-shot training samples to make the prediction . The similarity metric used here can be Euclidean distance , cosine similarity or more expressive metric learned by relation networks ( Sung et al. , 2018 ) . On the other hand , optimization-based approaches learn how to optimize the model directly . Finn et al . ( 2017 ) learned an optimal initialization of models for different tasks in the same distribution , which is able to achieve good performance by simple gradient descent . Rusu et al . ( 2019 ) learned how to perform gradient descent in the latent space to adapt the model parameters more effectively . Ravi & Larochelle ( 2016 ) employed an LSTM to learn an optimization algorithm . Generative models are also proposed to overcome the limitations resulted from few-shot setting ( Zhang et al. , 2018 ; Hariharan & Girshick , 2017 ; Wang et al. , 2018 ) . Few-shot regression tasks are used among various few-shot leaning methods ( Finn et al. , 2017 ; Rusu et al. , 2019 ; Li et al. , 2017 ) . In most existing works , these experiment usually does not extend beyond the sinusoidal and linear regression tasks . A prominent family of algorithms that tackles a similar problem as few-shot regression is Neural Processes ( Garnelo et al. , 2018b ; a ; Kim et al. , 2019 ) . Neural Processes algorithms model the distributions of the outputs of regression functions using Deep Neural Networks given pairs of input-output pairs . Similar to Variational Autoencoders ( Kingma & Welling , 2013 ) , Neural Processes employ a Bayesian approach in modelling the output distribution of regression function using an encoder-decoder architecture . Our model on the other hand employs a deterministic approach where we directly learn a set of basis functions to model the output distribution . Our model also does not produce any latent vectors but instead produces predictions via a dot product between the learned basis functions and weight vector . 
Our experiment results show that our model ( based on sparse linear combination of basis functions ) compares favorably to Neural Processes ( based on conditional stochastic processes ) . Our proposed sparse linear representation framework for few shot regression makes the few shot regression problem appear to be similar to another research problem called dictionary learning ( DL ) ( Tosic & Frossard , 2011 ) , which focuses on learning dictionaries of atoms that provide efficient representations of some class of signals . However , the differences between DL and our problem are significant : our problems are continuous rather than discrete as in DL , and we only observe a very small percentage of samples . Detailed comparison with DL is discussed in the appendix . 3 PROPOSED METHOD . 3.1 PROBLEM FORMULATION . We first provide the problem definition for few-shot regression . We aim at developing a model that can rapidly regress to a variety of equations and functions based on only a few training samples . We assume that each equation we would like to regress is a task $T_i$ sampled from a distribution $p(T)$ . We train our model on a set of training tasks , $S_{train}$ , and evaluate it on a separate set of testing tasks , $S_{test}$ . Unlike few-shot classification tasks , the task distribution $p(T)$ is in general continuous for regression tasks . Each regression task is comprised of training samples $D_{train}$ and validation samples $D_{val}$ ; for both the training set $S_{train}$ and testing set $S_{test}$ , $D_{train}$ is comprised of $K$ training samples and labels $D_{train} = \{ (x^k_t , y^k_t) \mid k = 1 \dots K \}$ while $D_{val}$ is comprised of $N$ samples and labels $D_{val} = \{ (x^n_p , y^n_p) \mid n = 1 \dots N \}$ . The goal of few-shot regression is to regress the entire , continuous output range of the equation given only the few points as training set . 3.2 FEW-SHOT REGRESSION VIA LEARNING SPARSIFYING BASIS FUNCTIONS . Here we discuss our main idea . We would like to model the unknown function $y = F(x)$ given only $D_{train} = \{ (x^k_t , y^k_t) \mid k = 1 \dots K \}$ . With small $K$ , e.g . $K = 10$ , this is an ill-posed task , as $F(x)$ can take any form . As stated before , we assume that each function we would like to regress is a task $T_i$ drawn from an unknown distribution $p(T)$ . To simplify discussion , we assume scalar input and scalar output . Our idea is to learn a sparse representation of the unknown function $F(x)$ , so that a few samples $\{ (x^k_t , y^k_t) \mid k = 1 \dots K \}$ can provide adequate information to approximate the entire $F(x)$ . Specifically , we model the unknown function $F(x)$ as a linear combination of a set of basis functions $\{ \phi_i(x) \}$ : $F(x) = \sum_i w_i \phi_i(x)$ ( 1 ) Many handcrafted basis functions have been developed to expand $F(x)$ . For example , the Maclaurin series expansion ( Taylor series expansion at $x = 0$ ) uses $\{ \phi_i(x) \} = \{ 1 , x , x^2 , x^3 , \dots \}$ : $F(x) = w_0 + w_1 x + w_2 x^2 + \dots$ ( 2 ) If $F(x)$ is a polynomial , ( 2 ) can be a sparse representation , i.e . only a few non-zero , significant $w_i$ , and most $w_i$ are zero or near zero . However , if $F(x)$ is a sinusoid , it would require many terms to represent $F(x)$ adequately , e.g . : $\sin(x) \approx w_1 x + w_3 x^3 + w_5 x^5 + w_7 x^7 + \dots + w_M x^M$ ( 3 ) In ( 3 ) , $M$ is large and $M \gg K$ . Given only $K$ samples $\{ (x^k_t , y^k_t) \mid k = 1 \dots K \}$ , it is not adequate to determine $\{ w_i \}$ and model the unknown function . On the other hand , if we use the Fourier basis instead , i.e. , $\{ \phi_i(x) \} = \{ 1 , \sin(x) , \sin(2x) , \dots , \cos(x) , \cos(2x) , \dots \}$ , clearly , we can obtain a sparse representation : we can adequately approximate the sinusoid with only a few terms . Under the Fourier basis , there are only a few non-zero significant weights $w_i$ , and $K$ samples are sufficient to estimate the significant $w_i$ and approximate the function . Essentially , with a sparsifying basis $\{ \phi_i(x) \}$ , the degree of freedom of $F(x)$ can be significantly reduced when it is modeled using ( 1 ) , so that $K$ samples can well estimate $F(x)$ . Our approach is to use the set of training tasks drawn from $p(T)$ to learn $\{ \phi_i(x) \}$ that result in a sparse representation for any task drawn from $p(T)$ . The set of $\{ \phi_i(x) \}$ is encoded in the Basis Function Learner Network that takes in $x$ and outputs $\Phi(x) = [ \phi_1(x) , \phi_2(x) , \dots , \phi_M(x) ]^T$ . In our framework , $\Phi(x)$ is the same for any task drawn from $p(T)$ , as it encodes the set of $\{ \phi_i(x) \}$ that can sparsely represent any task from $p(T)$ . We further learn a Weights Generator Network to map the $K$ training samples of a novel task to a constant vector $w = [ w_1 , w_2 , \dots , w_M ]^T$ . The unknown function is modeled as $w^T \Phi(x)$ .
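As a rough illustration of the two components just described, the sketch below wires a shared Basis Function Learner and a simple Weights Generator together so that a prediction is the dot product w^T Phi(x). The layer sizes are arbitrary and the mean-pooled weights generator is a simplification standing in for whatever generator architecture the paper actually uses; the training loop, loss, and any sparsity regularization on w are omitted.

import torch
import torch.nn as nn

class BasisFunctionLearner(nn.Module):
    # Encodes the shared basis: maps an input x to
    # Phi(x) = [phi_1(x), ..., phi_M(x)]^T, identical for every task.
    def __init__(self, in_dim=1, hidden=64, num_basis=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_basis),
        )

    def forward(self, x):              # x: (batch, in_dim)
        return self.net(x)             # Phi(x): (batch, M)

class WeightsGenerator(nn.Module):
    # Maps the K labelled support samples of a novel task to one weight
    # vector w.  Mean pooling over the support set is a simplification.
    def __init__(self, in_dim=2, hidden=64, num_basis=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_basis),
        )

    def forward(self, x_support, y_support):          # (K, dx), (K, dy)
        pairs = torch.cat([x_support, y_support], dim=-1)
        return self.encoder(pairs).mean(dim=0)         # w: (M,)

def predict(basis, generator, x_support, y_support, x_query):
    # y_hat(x) = w^T Phi(x) for every query point.
    w = generator(x_support, y_support)                # (M,)
    return basis(x_query) @ w                          # (N,)

# Toy usage on an invented sinusoid task with K = 10 support points.
xs = torch.linspace(-5, 5, 10).unsqueeze(-1)
ys = torch.sin(xs)
xq = torch.linspace(-5, 5, 100).unsqueeze(-1)
basis, gen = BasisFunctionLearner(), WeightsGenerator()
print(predict(basis, gen, xs, ys, xq).shape)   # torch.Size([100])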
The paper proposes a regression approach that, given a few training (support) samples of a regression task (input and desired output pairs), should be able to output the values of the target function on additional (query) inputs. The proposed method is to learn a set of basis functions (MLPs) and a weight generator that for a given support set predicts weights using which the basis functions are linearly combined to form the predicted regression function, which is later tested (using the MSE metric) w.r.t. the ground truth. The method is trained on a large collection of randomly sampled task from the target family and is tested on a separate set of random tasks. The experiments include:
SP:da33f43dc72578ff039a1843c3bbbfc70ed4a685
The authors propose using sparse adaptive basis function models for few shot regression. The basis functions and the corresponding weights are generated via respective networks whose parameters are shared across all tasks. Elastic net regularization is used to encourage task specific sparsity in the weights, the idea being that with only a small number of available training examples, learning a sparse basis is easier than learning a dense basis with many more parameters. The method is validated on both synthetic data and on image completion tasks.
SP:da33f43dc72578ff039a1843c3bbbfc70ed4a685
Clustered Reinforcement Learning
1 INTRODUCTION . Reinforcement learning ( RL ) ( Sutton & Barto , 1998 ) studies how an agent can maximize its cumulative reward in an unknown environment , by learning through exploration and exploitation . A key challenge in RL is to balance the relationship between exploration and exploitation . If the agent explores novel states excessively , it might never find rewards to guide the learning direction . Otherwise , if the agent exploits rewards too intensely , it might converge to suboptimal behaviors and have fewer opportunities to discover more rewards from exploration . Although reinforcement learning , especially deep RL ( DRL ) , has recently attracted much attention and achieved significant performance in a variety of applications , such as game playing ( Mnih et al. , 2015 ; Silver et al. , 2016 ) and robot navigation ( Zhang et al. , 2016 ) , exploration techniques in RL are far from satisfactory in many cases . Exploration strategy design is still one of the challenging problems in RL , especially when the environment contains a large state space or sparse rewards . Hence , it has become a hot research topic to design exploration strategy , and many exploration methods have been proposed in recent years . Some heuristic methods for exploration , such as -greedy ( Silver et al. , 2016 ; Sutton & Barto , 1998 ) , uniform sampling ( Mnih et al. , 2015 ) and i.i.d./correlated Gaussian noise ( Lillicrap et al. , 2016 ; Schulman et al. , 2015 ) , try to directly obtain more different experiences during exploration . For hard applications or games , these heuristic methods are insufficient enough and the agent needs exploration techniques that can incorporate meaningful information about the environment . In recent years , some exploration strategies try to discover novel state areas for exploring . The direct way to measure novelty is to count the visited experiences . In ( Bellemare et al. , 2016 ; Ostrovski et al. , 2017 ) , pseudo-counts are estimated from a density model . Hash-based method ( Tang et al. , 2017 ) counts the hash codes of states . There also exist some work using the counts of state-action pairs to design their exploration techniques , such as explicit explore or exploit ( E3 ) ( Kearns & Singh , 2002 ) , R-Max Brafman & Tennenholtz ( 2002 ) , UCRL ( Auer & Ortner , 2006 ) , UCAGG ( Ortner , 2013 ) . Besides , the state novelty can also be measured by empowerment ( Klyubin et al. , 2005 ) , the agent ’ s belief of environment dynamics ( Houthooft et al. , 2016 ) , prediction error of the system dynamics model ( Pathak et al. , 2017 ; Stadie et al. , 2015 ) , prediction by exemplar model ( Fu et al. , 2017 ) , and the error of predicting features of states ( Burda et al. , 2018 ) . All the above methods perform exploration mainly based on the novelty of states without considering the quality of states . Furthermore , there are some methods to estimate the quality of states . Kernel-based reinforcement learning ( Ormoneit & Sen , 2002 ) uses locally weighted averaging to estimate the quality ( value ) of states . UCRL ( Auer & Ortner , 2006 ) and UCAGG ( Ortner , 2013 ) compute average rewards for choosing optimistic values . The average reward can be regarded as an estimation of the quality of states to guide the exploring direction , but there are no methods using the quality of states as an exploration technique . 
Furthermore , in most existing methods , the novelty and quality in the neighboring area of the current state are not well utilized to guide the exploration of the agent . To tackle this problem , we propose a novel RL framework , called clustered reinforcement learning ( CRL ) , for efficient exploration in RL . The contributions of CRL are briefly outlined as follows : • CRL adopts clustering to divide the collected states into several clusters . The states from the same cluster have similar features . Hence , the clustered results in CRL provide a possibility to share meaningful information among different states from the same cluster . • CRL proposes a novel bonus reward , which reflects both novelty and quality in the neighboring area of the current state . Here , the neighboring area is defined by the states which share the same cluster with the current state . This bonus reward can guide the agent to perform efficient exploration , by seamlessly integrating novelty and quality of states . • Experiments on several continuous control tasks with sparse rewards and several hard exploratory Atari-2600 games ( Bellemare et al. , 2013 ) show that CRL can outperform other state-of-the-art methods to achieve the best performance in most cases . In particular , on several games known to be hard for heuristic exploration strategies , CRL achieves significantly improvement over baselines . 2 RELATED WORK . Recently , there are some exploration strategies used to discover novel state areas . The direct way to measure the novelty of states is to count the visited experiences , which has been applied in several methods . In the tabular setting and finite Markov decision processes ( MDPs ) , the number of stateaction pairs is finite which can be counted directly , such as model-based interval estimation with exploratory bonus ( MBIE-EB ) ( Strehl & Littman , 2008 ) , explicit explore or exploit ( E3 ) ( Kearns & Singh , 2002 ) and R-Max ( Brafman & Tennenholtz , 2002 ) . MBIE-EB adds the reciprocal of square root of counts of state-action pairs as the bonus reward to the augmented Bellman equation for exploring less visited ones with theoretical guarantee . E3 determines the action based on the counts of state-actions pairs . If the state has never been visited , the action is chosen randomly and if the state has been visited for some times , the agent takes the action that has been tried the fewest times before . R-Max uses counts of states as a way to check for known states . In the continuous and high-dimensional space , the number of states is too large to be counted directly ( Bellemare et al. , 2016 ; Ostrovski et al. , 2017 ; Tang et al. , 2017 ; Abel et al. , 2016 ) . Bellemare et al . ( 2016 ) and Ostrovski et al . ( 2017 ) use a density model to estimate the state pseudo-count quantity , which is used to design the exploration bonus reward . Tang et al . ( 2017 ) counts the number of states by using the hash function to encode states and then it explores by using the reciprocal of visits as a form of reward bonus , which performs well on some hard exploration Atari-2600 games . Abel et al . ( 2016 ) records the number of cluster center and action pairs and makes use of it to select an action from the Gibbs distribution . These count-based methods encourage the agent to explore by making use of the novelty of states and do not take quality into consideration . Furthermore , there are some methods to estimate the quality of states . 
Average reward , in kernel-based reinforcement learning ( Ormoneit & Sen , 2002 ) , UCRL ( Auer & Ortner , 2006 ) and UCAGG ( Ortner , 2013 ) , can be regarded as the quality of states . Kernel-based reinforcement learning ( Ormoneit & Sen , 2002 ) is proposed to solve the stability problem of TD-learning by using locally weighted averaging to estimate the value of state . UCRL ( Auer & Ortner , 2006 ) and UCAGG ( Ortner , 2013 ) use average reward to choose optimistic values . Besides , the value of cluster space can also indicate the quality of states . Singh et al . ( 1994 ) uses the value of cluster space with Q-learning and TD ( 0 ) by soft state aggregation and provides convergence results . But these methods do not use the quality of states to explore more areas . To the best of our knowledge , the novelty and quality in the neighboring area of the current state have not been well utilized to guide the exploration of the agent in existing methods , especially in the high dimensional state space . This motivates the work of this paper . 3 NOTATION . In this paper , we adopt similar notations as those in ( Tang et al. , 2017 ) . More specifically , we model the RL problem as a finite-horizon discounted Markov decision process ( MDP ) , which can be defined by a tuple ( S , A , P , r , ρ0 , γ , T ) . Here , S ∈ Rd denotes the state space , A ∈ Rm denotes the action space , P : S×A×S → R denotes a transition probability distribution , r : S×A → R denotes a reward function , ρ0 is an initial state distribution , γ ∈ ( 0 , 1 ] is a discount factor , and T denotes the horizon time . In this paper , we assume r ≥ 0 . For cases with negative rewards , we can transform them to cases without negative rewards . The goal of RL is to maximize Eπ , P [ ∑T t=0 γ tr ( st , at ) ] which is the total expected discounted reward over a policy π . 4 CLUSTERED REINFORCEMENT LEARNING . This section presents the details of our proposed RL framework , called clustered reinforcement learning ( CRL ) . The key idea of CRL is to adopt clustering to divide the collected states into several clusters , and then design a novel cluster-based bonus reward for exploration . 4.1 CLUSTERING . Intuitively , both novelty and quality are useful for exploration strategy design . If the agent only cares about novelty , it might explore intensively in some unexplored areas without any reward . If the agent only cares about quality , it might converge to suboptimal behaviors and have low opportunity to discover unexplored areas with higher rewards . Hence , it is better to integrate both novelty and quality into the same exploration strategy . We find that clustering can provide the possibility to integrate both novelty and quality together . Intuitively , a cluster of states can be treated as an area . The number of collected states in a cluster reflects the count ( novelty ) information of that area . The average reward of the collected states in a cluster reflects the quality of that area . Hence , based on the clustered results , we can design an exploration strategy considering both novelty and quality . Furthermore , the states from the same cluster have similar features , and hence the clustered results provide a possibility to share meaningful information among different states from the same cluster . The details of exploration strategy design based on clustering will be left to the following subsection . Here , we only describe the clustering algorithm . In CRL , we perform clustering on states . 
Assume the number of clusters is $K$ , and we have collected $N$ state-action samples $\{ (s_i , a_i , r_i) \}_{i=1}^{N}$ with some policy . We need to cluster the collected states $\{ s_i \}_{i=1}^{N}$ into $K$ clusters by using some clustering algorithm $f : S \rightarrow C$ , where $C = \{ C_i \}_{i=1}^{K}$ and $C_i$ is the center of the $i$-th cluster . We can use any clustering algorithm in the CRL framework . Although more sophisticated clustering algorithms might be able to achieve better performance , in this paper we simply choose the k-means algorithm ( Coates & Ng , 2012 ) . K-means is one of the simplest clustering algorithms with wide applications . The details of k-means are omitted here , and readers can find them in most machine learning textbooks .
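The following sketch illustrates the overall recipe of clustering collected states and deriving a per-state bonus from cluster statistics. The exact bonus of the paper (its Eq. (3)) is not reproduced in this excerpt, so the combination below — average reward as the quality term when a cluster has accumulated enough reward, and a count-based novelty term otherwise — is only one plausible instantiation consistent with the description; the threshold, coefficient, and function names are invented.

import numpy as np
from sklearn.cluster import KMeans

def cluster_bonus(states, rewards, n_clusters=20, beta=0.1, reward_threshold=1.0):
    # states: (N, d) array of visited states; rewards: (N,) array of the
    # rewards observed at those states.
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(states)
    labels = km.labels_
    bonuses = np.zeros(len(states))
    for c in range(n_clusters):
        members = labels == c
        count = members.sum()
        if count == 0:
            continue
        total_reward = rewards[members].sum()
        if total_reward >= reward_threshold:
            # Quality: clusters with high average reward get a large bonus.
            bonuses[members] = total_reward / count
        else:
            # Novelty: low-reward clusters with few visits are still worth exploring.
            bonuses[members] = beta / np.sqrt(count)
    return bonuses  # added to the environment reward before the policy update (e.g. TRPO)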
This paper presents a clear approach to improve the exploration strategy in reinforcement learning, which is named clustered reinforcement learning. The approach tries to push the agent to explore more states with high novelty and quality. It is done by adding a bonus reward shown in Eq. (3) to the reward function. The author first cluster states into clusters using the k-means algorithm. The bonus reward will return a high value for a state if the corresponding cluster has a high average reward. When the total reward in a cluster is smaller than a certain threshold, the bonus reward will consider the number of states explored. In the experiments, the authors test different models on two MuJoCo tasks and five Atari games. TRPO, TRPO-Hash, VIME are selected as baselines to compare with. Results show that the proposed bonus reward reaches faster convergence and the highest return in both MuJoCo tasks. In those five Atari games, the proposed method achieves the highest or second-highest average returns.
SP:e1726e0e4f65ec99676002eca4ad9cdf27f60b56
Clustered Reinforcement Learning
1 INTRODUCTION . Reinforcement learning ( RL ) ( Sutton & Barto , 1998 ) studies how an agent can maximize its cumulative reward in an unknown environment , by learning through exploration and exploitation . A key challenge in RL is to balance the relationship between exploration and exploitation . If the agent explores novel states excessively , it might never find rewards to guide the learning direction . Otherwise , if the agent exploits rewards too intensely , it might converge to suboptimal behaviors and have fewer opportunities to discover more rewards from exploration . Although reinforcement learning , especially deep RL ( DRL ) , has recently attracted much attention and achieved significant performance in a variety of applications , such as game playing ( Mnih et al. , 2015 ; Silver et al. , 2016 ) and robot navigation ( Zhang et al. , 2016 ) , exploration techniques in RL are far from satisfactory in many cases . Exploration strategy design is still one of the challenging problems in RL , especially when the environment contains a large state space or sparse rewards . Hence , it has become a hot research topic to design exploration strategy , and many exploration methods have been proposed in recent years . Some heuristic methods for exploration , such as -greedy ( Silver et al. , 2016 ; Sutton & Barto , 1998 ) , uniform sampling ( Mnih et al. , 2015 ) and i.i.d./correlated Gaussian noise ( Lillicrap et al. , 2016 ; Schulman et al. , 2015 ) , try to directly obtain more different experiences during exploration . For hard applications or games , these heuristic methods are insufficient enough and the agent needs exploration techniques that can incorporate meaningful information about the environment . In recent years , some exploration strategies try to discover novel state areas for exploring . The direct way to measure novelty is to count the visited experiences . In ( Bellemare et al. , 2016 ; Ostrovski et al. , 2017 ) , pseudo-counts are estimated from a density model . Hash-based method ( Tang et al. , 2017 ) counts the hash codes of states . There also exist some work using the counts of state-action pairs to design their exploration techniques , such as explicit explore or exploit ( E3 ) ( Kearns & Singh , 2002 ) , R-Max Brafman & Tennenholtz ( 2002 ) , UCRL ( Auer & Ortner , 2006 ) , UCAGG ( Ortner , 2013 ) . Besides , the state novelty can also be measured by empowerment ( Klyubin et al. , 2005 ) , the agent ’ s belief of environment dynamics ( Houthooft et al. , 2016 ) , prediction error of the system dynamics model ( Pathak et al. , 2017 ; Stadie et al. , 2015 ) , prediction by exemplar model ( Fu et al. , 2017 ) , and the error of predicting features of states ( Burda et al. , 2018 ) . All the above methods perform exploration mainly based on the novelty of states without considering the quality of states . Furthermore , there are some methods to estimate the quality of states . Kernel-based reinforcement learning ( Ormoneit & Sen , 2002 ) uses locally weighted averaging to estimate the quality ( value ) of states . UCRL ( Auer & Ortner , 2006 ) and UCAGG ( Ortner , 2013 ) compute average rewards for choosing optimistic values . The average reward can be regarded as an estimation of the quality of states to guide the exploring direction , but there are no methods using the quality of states as an exploration technique . 
Furthermore , in most existing methods , the novelty and quality in the neighboring area of the current state are not well utilized to guide the exploration of the agent . To tackle this problem , we propose a novel RL framework , called clustered reinforcement learning ( CRL ) , for efficient exploration in RL . The contributions of CRL are briefly outlined as follows : • CRL adopts clustering to divide the collected states into several clusters . The states from the same cluster have similar features . Hence , the clustered results in CRL provide a possibility to share meaningful information among different states from the same cluster . • CRL proposes a novel bonus reward , which reflects both novelty and quality in the neighboring area of the current state . Here , the neighboring area is defined by the states which share the same cluster with the current state . This bonus reward can guide the agent to perform efficient exploration , by seamlessly integrating novelty and quality of states . • Experiments on several continuous control tasks with sparse rewards and several hard exploratory Atari-2600 games ( Bellemare et al. , 2013 ) show that CRL can outperform other state-of-the-art methods to achieve the best performance in most cases . In particular , on several games known to be hard for heuristic exploration strategies , CRL achieves significantly improvement over baselines . 2 RELATED WORK . Recently , there are some exploration strategies used to discover novel state areas . The direct way to measure the novelty of states is to count the visited experiences , which has been applied in several methods . In the tabular setting and finite Markov decision processes ( MDPs ) , the number of stateaction pairs is finite which can be counted directly , such as model-based interval estimation with exploratory bonus ( MBIE-EB ) ( Strehl & Littman , 2008 ) , explicit explore or exploit ( E3 ) ( Kearns & Singh , 2002 ) and R-Max ( Brafman & Tennenholtz , 2002 ) . MBIE-EB adds the reciprocal of square root of counts of state-action pairs as the bonus reward to the augmented Bellman equation for exploring less visited ones with theoretical guarantee . E3 determines the action based on the counts of state-actions pairs . If the state has never been visited , the action is chosen randomly and if the state has been visited for some times , the agent takes the action that has been tried the fewest times before . R-Max uses counts of states as a way to check for known states . In the continuous and high-dimensional space , the number of states is too large to be counted directly ( Bellemare et al. , 2016 ; Ostrovski et al. , 2017 ; Tang et al. , 2017 ; Abel et al. , 2016 ) . Bellemare et al . ( 2016 ) and Ostrovski et al . ( 2017 ) use a density model to estimate the state pseudo-count quantity , which is used to design the exploration bonus reward . Tang et al . ( 2017 ) counts the number of states by using the hash function to encode states and then it explores by using the reciprocal of visits as a form of reward bonus , which performs well on some hard exploration Atari-2600 games . Abel et al . ( 2016 ) records the number of cluster center and action pairs and makes use of it to select an action from the Gibbs distribution . These count-based methods encourage the agent to explore by making use of the novelty of states and do not take quality into consideration . Furthermore , there are some methods to estimate the quality of states . 
Average reward , in kernel-based reinforcement learning ( Ormoneit & Sen , 2002 ) , UCRL ( Auer & Ortner , 2006 ) and UCAGG ( Ortner , 2013 ) , can be regarded as the quality of states . Kernel-based reinforcement learning ( Ormoneit & Sen , 2002 ) is proposed to solve the stability problem of TD-learning by using locally weighted averaging to estimate the value of state . UCRL ( Auer & Ortner , 2006 ) and UCAGG ( Ortner , 2013 ) use average reward to choose optimistic values . Besides , the value of cluster space can also indicate the quality of states . Singh et al . ( 1994 ) uses the value of cluster space with Q-learning and TD ( 0 ) by soft state aggregation and provides convergence results . But these methods do not use the quality of states to explore more areas . To the best of our knowledge , the novelty and quality in the neighboring area of the current state have not been well utilized to guide the exploration of the agent in existing methods , especially in the high dimensional state space . This motivates the work of this paper . 3 NOTATION . In this paper , we adopt similar notations as those in ( Tang et al. , 2017 ) . More specifically , we model the RL problem as a finite-horizon discounted Markov decision process ( MDP ) , which can be defined by a tuple ( S , A , P , r , ρ0 , γ , T ) . Here , S ∈ Rd denotes the state space , A ∈ Rm denotes the action space , P : S×A×S → R denotes a transition probability distribution , r : S×A → R denotes a reward function , ρ0 is an initial state distribution , γ ∈ ( 0 , 1 ] is a discount factor , and T denotes the horizon time . In this paper , we assume r ≥ 0 . For cases with negative rewards , we can transform them to cases without negative rewards . The goal of RL is to maximize Eπ , P [ ∑T t=0 γ tr ( st , at ) ] which is the total expected discounted reward over a policy π . 4 CLUSTERED REINFORCEMENT LEARNING . This section presents the details of our proposed RL framework , called clustered reinforcement learning ( CRL ) . The key idea of CRL is to adopt clustering to divide the collected states into several clusters , and then design a novel cluster-based bonus reward for exploration . 4.1 CLUSTERING . Intuitively , both novelty and quality are useful for exploration strategy design . If the agent only cares about novelty , it might explore intensively in some unexplored areas without any reward . If the agent only cares about quality , it might converge to suboptimal behaviors and have low opportunity to discover unexplored areas with higher rewards . Hence , it is better to integrate both novelty and quality into the same exploration strategy . We find that clustering can provide the possibility to integrate both novelty and quality together . Intuitively , a cluster of states can be treated as an area . The number of collected states in a cluster reflects the count ( novelty ) information of that area . The average reward of the collected states in a cluster reflects the quality of that area . Hence , based on the clustered results , we can design an exploration strategy considering both novelty and quality . Furthermore , the states from the same cluster have similar features , and hence the clustered results provide a possibility to share meaningful information among different states from the same cluster . The details of exploration strategy design based on clustering will be left to the following subsection . Here , we only describe the clustering algorithm . In CRL , we perform clustering on states . 
Assume the number of clusters is K, and that we have collected N state-action samples {(s_i, a_i, r_i)}_{i=1}^N with some policy. We need to cluster the collected states {s_i}_{i=1}^N into K clusters by using some clustering algorithm f : S → C, where C = {C_i}_{i=1}^K and C_i is the center of the i-th cluster. Any clustering algorithm can be used in the CRL framework. Although more sophisticated clustering algorithms might achieve better performance, in this paper we simply choose the k-means algorithm (Coates & Ng, 2012). K-means is one of the simplest clustering algorithms and has wide applications. The details of k-means are omitted here; readers can find them in most machine learning textbooks.
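To make the clustering step concrete, the following sketch clusters the collected states with k-means and derives per-cluster novelty (inverse square-root count) and quality (average reward) statistics. The exact bonus formula used by CRL is left to the following subsection of the paper; the combination shown here and the weights alpha and beta are illustrative assumptions only.

```python
import numpy as np
from sklearn.cluster import KMeans

# Sketch of the CRL clustering step plus one plausible cluster-based bonus.
# The bonus formula (alpha * novelty + beta * quality) is an assumption made
# for illustration, not the formula defined in the paper.

def cluster_bonus(states, rewards, n_clusters=10, alpha=0.1, beta=0.1):
    states = np.asarray(states)      # shape (N, d): collected states
    rewards = np.asarray(rewards)    # shape (N,):   rewards observed at those states
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(states)

    counts = np.bincount(km.labels_, minlength=n_clusters)              # cluster sizes (novelty)
    safe_counts = np.maximum(counts, 1)                                  # guard against empty clusters
    sums = np.bincount(km.labels_, weights=rewards, minlength=n_clusters)
    avg_reward = sums / safe_counts                                      # cluster quality

    def bonus(state):
        c = km.predict(np.asarray(state).reshape(1, -1))[0]
        novelty = 1.0 / np.sqrt(safe_counts[c])
        quality = avg_reward[c]
        return alpha * novelty + beta * quality

    return bonus
```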
This paper proposed a clustering-based algorithm to improve exploration performance in reinforcement learning. Similar to count-based approaches, the novelty of a new state was computed from the statistics of the corresponding cluster. This exploration bonus was then combined with the TRPO algorithm to obtain the policy. The experimental results showed some improvement compared with its competitors.
SP:e1726e0e4f65ec99676002eca4ad9cdf27f60b56
Non-Autoregressive Dialog State Tracking
1 INTRODUCTION. In task-oriented dialogues, a dialogue agent is required to assist humans with one or many tasks such as finding a restaurant and booking a hotel. As in the sample dialogue shown in Table 1, each user utterance typically contains important information identified as slots related to a dialogue domain, such as attraction-area and train-day. A crucial part of a task-oriented dialogue system is Dialogue State Tracking (DST), which aims to identify user goals expressed during a conversation in the form of dialogue states. A dialogue state consists of a set of (slot, value) pairs, e.g. (attraction-area, centre) and (train-day, tuesday). Existing DST models can be categorized into two types: fixed- and open-vocabulary. Fixed-vocabulary models assume a known slot ontology and generate a score for each candidate (slot, value) pair (Ramadan et al., 2018; Lee et al., 2019). Recent approaches propose open-vocabulary models that can generate the candidates, especially for slots such as entity names and time, from the dialogue history (Lei et al., 2018; Wu et al., 2019). Most open-vocabulary DST models rely on autoregressive encoders and decoders, which encode the dialogue history sequentially and generate each token t_i of an individual slot value one by one, conditioned on all previously generated tokens t_{1:i−1}. For downstream tasks of DST that emphasize low latency (e.g. generating real-time dialogue responses), autoregressive approaches incur expensive time costs as the ongoing dialogues become more complex. The time cost is caused by two major components: the length of the dialogue history, i.e. the number of turns, and the length of slot values. For complex dialogues extended over many turns and multiple domains, the time cost will increase significantly in both the encoding and decoding phases. Similar problems can be seen in the field of Neural Machine Translation (NMT) research, where a long piece of text is translated from one language to another. Recent work has tried to improve the latency in NMT by using neural network architectures such as convolution (Krizhevsky et al., 2012) and attention (Luong et al., 2015). Several non- and semi-autoregressive approaches aim to generate tokens of the target language independently (Gu et al., 2018; Lee et al., 2018; Kaiser et al., 2018). Motivated by this line of research, we thus propose a non-autoregressive approach to minimize the time cost of DST models without a negative impact on model performance. We adopt the concept of fertility proposed by Gu et al. (2018). Fertility denotes the number of times each input token is copied to form a sequence as the input to the decoder for non-autoregressive decoding. We first reconstruct the dialogue state as a sequence of concatenated slot values. The resulting sequence contains an inherent structured representation to which we can apply the fertility concept. The structure is defined by the boundaries of individual slot values. These boundaries can be easily obtained from the dialogue state itself by simply measuring the number of tokens of each individual slot, as sketched below. ∗All work was done while the first author was a research intern at Salesforce Research Asia.
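To illustrate how these slot-value boundaries yield fertilities, the following minimal sketch derives fertility targets and the flattened state sequence from a dialogue-state annotation. The dictionary format of dialogue_state is a hypothetical example, not the MultiWOZ schema.

```python
# Minimal sketch: fertility = number of tokens in each slot value, and the
# target sequence = concatenation of all slot-value tokens. The state format
# below is a toy example chosen for illustration.

dialogue_state = {
    ("train", "day"): "tuesday",
    ("attraction", "area"): "centre",
    ("hotel", "name"): "gonville hotel",
}

def fertilities_and_targets(state):
    fertilities, target_tokens = {}, []
    for (domain, slot), value in state.items():
        tokens = value.split()               # slot-value boundary = token count
        fertilities[(domain, slot)] = len(tokens)
        target_tokens.extend(tokens)         # concatenation of all slot values
    return fertilities, target_tokens

ferts, targets = fertilities_and_targets(dialogue_state)
# ferts   -> {('train', 'day'): 1, ('attraction', 'area'): 1, ('hotel', 'name'): 2}
# targets -> ['tuesday', 'centre', 'gonville', 'hotel']
```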
Our model includes a two-stage decoding process : ( 1 ) the first decoder learns relevant signals from the input dialogue history and generates a fertility for each input slot representation ; and ( 2 ) the predicted fertility is used to form a structured sequence which consists of multiple sub-sequences , each represented as ( slot token×slot fertility ) . The result sequence is used as input to the second decoder to generate all the tokens of the target dialogue state at once . In addition to being non-autoregressive , our models explicitly consider dependencies at both slot level and token level . Most of existing DST models assume independence among slots in dialogue states without explicitly considering potential signals across the slots ( Wu et al. , 2019 ; Lee et al. , 2019 ; Goel et al. , 2019 ; Gao et al. , 2019 ) . However , we hypothesize that it is not true in many cases . For example , a good DST model should detect the relation that train departure should not have the same value as train destination ( example in Table 1 ) . Other cases include time-related pairs such as ( taxi arriveBy , taxi leaveAt ) and cross-domain pairs such as ( hotel area , attraction area ) . Our proposed approach considers all possible signals across all domains and slots to generate a dialogue state as a set . Our approach directly optimizes towards the DST evaluation metric Joint Accuracy ( Henderson et al. , 2014b ) , which measures accuracy at state ( set of slots ) level rather than slot level . Our contributions in this work include : ( 1 ) we propose a novel framework of Non-Autoregressive Dialog State Tracking ( NADST ) , which explicitly learns inter-dependencies across slots for decoding dialogue states as a complete set rather than individual slots ; ( 2 ) we propose a non-autoregressive decoding scheme , which not only enjoys low latency for real-time dialogues , but also allows to capture dependencies at token level in addition to slot level ; ( 3 ) we achieve the state-of-the-art performance on the multi-domain task-oriented dialogue dataset “ MultiWOZ 2.1 ” ( Budzianowski et al. , 2018 ; Eric et al. , 2019 ) while significantly reducing the inference latency by an order of magnitude ; ( 4 ) we conduct extensive ablation studies in which our analysis reveals that our models can detect potential signals across slots and dialogue domains to generate more correct “ sets ” of slots for DST . 2 RELATED WORK . Our work is related to two research areas : dialogue state tracking and non-autoregressive decoding . 2.1 DIALOGUE STATE TRACKING . Dialogue State Tracking ( DST ) is an important component in task-oriented dialogues , especially for dialogues with complex domains that require fine-grained tracking of relevant slots . Traditionally , DST is coupled with Natural Language Understanding ( NLU ) . NLU output as tagged user utterances is input to DST models to update the dialogue states turn by turn ( Kurata et al. , 2016 ; Shi et al. , 2016 ; Rastogi et al. , 2017 ) . Recent approaches combine NLU and DST to reduce the credit assignment problem and remove the need for NLU ( Mrkšić et al. , 2017 ; Xu & Hu , 2018 ; Zhong et al. , 2018 ) . Within this body of research , Goel et al . ( 2019 ) differentiates two DST approaches : fixed- and openvocabulary . Fixed-vocabulary approaches are usually retrieval-based methods in which all candidate pairs of ( slot , value ) from a given slot ontology are considered and the models predict a probability score for each pair ( Henderson et al. 
, 2014c; Ramadan et al., 2018; Lee et al., 2019). Recent work has moved towards open-vocabulary approaches that can generate the candidates based on the input text, i.e. the dialogue history (Lei et al., 2018; Gao et al., 2019; Wu et al., 2019). Our work is more related to these models, but different from most of the current work, we explicitly consider dependencies among slots and domains to decode the dialogue state as a complete set. 2.2 NON-AUTOREGRESSIVE DECODING. Most prior work on non- or semi-autoregressive decoding methods targets NMT, to address the need for fast translation. Schwenk (2012) proposes to estimate the translation model probabilities of a phrase-based NMT system. Libovickỳ & Helcl (2018) formulate the decoding process as a sequence labeling task by projecting the source sequence into a longer sequence and applying a CTC loss (Graves et al., 2006) to decode the target sequence. Wang et al. (2019) add regularization terms to NAT models (Gu et al., 2018) to reduce translation errors such as repeated tokens and incomplete sentences. Ghazvininejad et al. (2019) use a non-autoregressive decoder with masked attention to decode target sequences over multiple generation rounds. A common challenge in non-autoregressive NMT is the large number of sequential latent variables, e.g., fertility sequences (Gu et al., 2018) and projected target sequences (Libovickỳ & Helcl, 2018). These latent variables are used as supporting signals for non- or semi-autoregressive decoding. We reformulate the dialogue state as a structured sequence with sub-sequences defined as a concatenation of slot values. This form of dialogue state can be inferred easily from the dialogue state annotation itself, whereas such supervision information is not directly available in NMT. The lower semantic complexity of slot values, as compared to long sentences in NMT, makes it easier to adopt non-autoregressive approaches in DST. According to our review, we are the first to apply a non-autoregressive framework to generation-based DST. Our approach allows joint state tracking across slots, which results in better performance and an order of magnitude lower latency during inference. 3 APPROACH. Our NADST model is composed of three parts: encoders, a fertility decoder, and a state decoder, as shown in Figure 1. The input includes the dialogue history X = (x_1, ..., x_N) and a sequence of applicable (domain, slot) pairs X_ds = ((d_1, s_1), ..., (d_G, s_H)), where G and H are the total numbers of domains and slots, respectively. The output is the corresponding dialogue state up to the current dialogue history. Conventionally, the output dialogue state is denoted as tuples (slot, value) (or (domain-slot, value) for multi-domain dialogues). We reformulate the output as a concatenation of slot values Y^{d_i,s_j}: Y = (Y^{d_1,s_1}, ..., Y^{d_I,s_J}) = (y_1^{d_1,s_1}, y_2^{d_1,s_1}, ..., y_1^{d_I,s_J}, y_2^{d_I,s_J}, ...), where I and J are the numbers of domains and slots in the output dialogue state, respectively. First, the encoders use token-level embeddings and positional encoding to encode the input dialogue history and the (domain, slot) pairs into continuous representations. The encoded domains and slots are then input to stacked self-attention and feed-forward networks to obtain relevant signals across the dialogue history and generate a fertility Y_f^{d_g,s_h} for each (domain, slot) pair (d_g, s_h).
The output of the fertility decoder is defined as a sequence: Y_fert = (Y_f^{d_1,s_1}, ..., Y_f^{d_G,s_H}), where Y_f^{d_g,s_h} ∈ {0, ..., max(SlotLength)}. For example, for the MultiWOZ dataset in our experiments, we have max(SlotLength) = 9 according to the training data. We follow (Wu et al., 2019; Gao et al., 2019) and add a slot gating mechanism as an auxiliary prediction. Each gate g is restricted to 3 possible values: "none", "dontcare" and "generate". They are used to form higher-level classification signals to support the fertility decoding process. The gate output is defined as a sequence: Y_gate = (Y_g^{d_1,s_1}, ..., Y_g^{d_G,s_H}). The predicted fertilities are used to form an input sequence to the state decoder for non-autoregressive decoding. The sequence includes sub-sequences of (d_g, s_h) repeated Y_f^{d_g,s_h} times and concatenated sequentially: X_{ds×fert} = ((d_1, s_1) × Y_f^{d_1,s_1}, ..., (d_G, s_H) × Y_f^{d_G,s_H}), with ‖X_{ds×fert}‖ = ‖Y‖. The decoder projects this sequence through attention layers with the dialogue history. During this decoding process, we maintain a memory of hidden states of the dialogue history. The output from the state decoder is used as a query to attend on this memory and copy tokens from the dialogue history to generate a dialogue state. Following Lei et al. (2018), we incorporate information from previous dialogue turns to predict the current turn's state by using a partially delexicalized dialogue history X_del = (x_{1,del}, ..., x_{N,del}) as an input to the model. The dialogue history is delexicalized up to the last system utterance by replacing real-value tokens that match the previously decoded slot values with tokens expressed as domain-slot. Given a token x_n and the current dialogue turn t, the token is delexicalized as follows:

x_{n,del} = delex(x_n) = { domain_idx-slot_idx, if x_n ⊂ Ŷ_{t−1}; x_n, otherwise }  (1)

domain_idx = X_{ds×fert}[idx][0], slot_idx = X_{ds×fert}[idx][1], idx = Index(x_n, Ŷ_{t−1})  (2)

For example, the user utterance "I look for a cheap hotel" is delexicalized to "I look for a hotel pricerange hotel." if the slot hotel pricerange was predicted as "cheap" in the previous turn. This approach makes use of the delexicalized form of the dialogue history while not relying on an NLU module, since we utilize the predicted state from the DST model itself. In addition to the belief state, we also use the system action in the previous turn to delexicalize the dialogue history in a similar manner, following prior work (Rastogi et al., 2017; Zhong et al., 2018; Goel et al., 2019).
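As an illustration of the delexicalization rule in Equations (1)-(2), the following sketch replaces history tokens that match previously predicted slot values with domain-slot placeholders. Matching is done at the single-token level and the underscore placeholder format is an illustrative simplification; multi-word values would require span matching in practice.

```python
# Sketch of the delexicalization step: any history token matching a slot value
# predicted at the previous turn is replaced by its "domain-slot" placeholder.

def delexicalize(history_tokens, prev_state):
    # prev_state: {(domain, slot): value} predicted at the previous turn.
    value_to_placeholder = {}
    for (domain, slot), value in prev_state.items():
        for tok in value.split():
            value_to_placeholder[tok] = f"{domain}_{slot}"
    return [value_to_placeholder.get(tok, tok) for tok in history_tokens]

tokens = "i look for a cheap hotel".split()
prev = {("hotel", "pricerange"): "cheap"}
print(delexicalize(tokens, prev))
# ['i', 'look', 'for', 'a', 'hotel_pricerange', 'hotel']
```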
This paper proposed a model that is capable of tracking dialogue states in a non-recursive fashion. The main technique behind the non-recursive model is similar to that of the ICLR 2018 paper "NON-AUTOREGRESSIVE NEURAL MACHINE TRANSLATION". Unfortunately, as state tracking can be formulated as a special case of sequence decoding, there is not much innovation that can be claimed in this paper, considering the "fertility" idea had already been proposed. The paper did illustrate strong experimental results on a recent dataset compared with many state-of-the-art models. However, it is not clear how much innovation this work generates and how the ICLR community would benefit from the problem that the paper is addressing.
SP:333b75014a3cde3eb10486b9b7db6eea42db3196
The authors build on recent work for non-autoregressive encoder-decoder models in the context of machine translation (most significantly [Gu, et al., ICLR18]) and adapt this to dialogue state tracking. Specifically, as in [Gu, et al, ICLR18], they use a fertility decoder modified for DST to be on a per-slot basis which is input to a second decoder to generate the (open-vocabulary) tokens representing dialogue state. An interesting aspect of the resulting model as formulated is that the latent space also takes into account interdependencies between generated slot values, which leads to a direct structured prediction-like loss of joint accuracy. Additionally, as slot values have a smaller (and likely peakier) combinatorial space, NAT models actually are more applicable to DST than MT. The resulting model achieves state-of-the-art empirical results on the MultiWOZ dataset while incurring decoding times that are an order of magnitude faster.
SP:333b75014a3cde3eb10486b9b7db6eea42db3196
SEED RL: Scalable and Efficient Deep-RL with Accelerated Central Inference
Github : http : //github.com/google-research/seed_rl . 1 INTRODUCTION . The field of reinforcement learning ( RL ) has recently seen impressive results across a variety of tasks . This has in part been fueled by the introduction of deep learning in RL and the introduction of accelerators such as GPUs . In the very recent history , focus on massive scale has been key to solve a number of complicated games such as AlphaGo ( Silver et al. , 2016 ) , Dota ( OpenAI , 2018 ) and StarCraft 2 ( Vinyals et al. , 2017 ) . The sheer amount of environment data needed to solve tasks trivial to humans , makes distributed machine learning unavoidable for fast experiment turnaround time . RL is inherently comprised of heterogeneous tasks : running environments , model inference , model training , replay buffer , etc . and current state-of-the-art distributed algorithms do not efficiently use compute resources for the tasks . The amount of data and inefficient use of resources makes experiments unreasonably expensive . The two main challenges addressed in this paper are scaling of reinforcement learning and optimizing the use of modern accelerators , CPUs and other resources . We introduce SEED ( Scalable , Efficient , Deep-RL ) , a modern RL agent that scales well , is flexible and efficiently utilizes available resources . It is a distributed agent where model inference is done centrally combined with fast streaming RPCs to reduce the overhead of inference calls . We show that with simple methods , one can achieve state-of-the-art results faster on a number of tasks . For optimal performance , we use TPUs ( cloud.google.com/tpu/ ) and TensorFlow 2 ( Abadi et al. , 2015 ) to simplify the implementation . The cost of running SEED is analyzed against IMPALA ( Espeholt et al. , 2018 ) which is a commonly used state-of-the-art distributed RL algorithm ( Veeriah et al . ( 2019 ) ; Li et al . ( 2019 ) ; Deverett et al . ( 2019 ) ; Omidshafiei et al . ( 2019 ) ; Vezhnevets et al . ( 2019 ) ; Hansen et al . ( 2019 ) ; Schaarschmidt et al . ; Tirumala et al . ( 2019 ) , ... ) . We show cost reductions of up to 80 % while being significantly faster . When scaling SEED to many accelerators , it can train on millions of frames per second . Finally , the implementation is open-sourced together with examples of running it at scale on Google Cloud ( see Appendix A.4 for details ) making it easy to reproduce results and try novel ideas . ∗Equal contribution 2 RELATED WORK . For value-based methods , an early attempt for scaling DQN was Nair et al . ( 2015 ) that used asynchronous SGD ( Dean et al. , 2012 ) together with a distributed setup consisting of actors , replay buffers , parameter servers and learners . Since then , it has been shown that asynchronous SGD leads to poor sample complexity while not being significantly faster ( Chen et al. , 2016 ; Espeholt et al. , 2018 ) . Along with advances for Q-learning such as prioritized replay ( Schaul et al. , 2015 ) , dueling networks ( Wang et al. , 2016 ) , and double-Q learning ( van Hasselt , 2010 ; Van Hasselt et al. , 2016 ) the state-of-the-art distributed Q-learning was improved with Ape-X ( Horgan et al. , 2018 ) . Recently , R2D2 ( Kapturowski et al. , 2018 ) achieved impressive results across all the Arcade Learning Environment ( ALE ) ( Bellemare et al. , 2013 ) games by incorporating value-function rescaling ( Pohlen et al. , 2018 ) and LSTMs ( Hochreiter & Schmidhuber , 1997 ) on top of the advancements of Ape-X . 
There have also been many approaches for scaling policy gradients methods . A3C ( Mnih et al. , 2016 ) introduced asynchronous single-machine training using asynchronous SGD and relied exclusively on CPUs . GPUs were later introduced in GA3C ( Mahmood , 2017 ) with improved speed but poor convergence results due to an inherently on-policy method being used in an off-policy setting . This was corrected by V-trace ( Espeholt et al. , 2018 ) in the IMPALA agent both for singlemachine training and also scaled using a simple actor-learner architecture to more than a thousand machines . PPO ( Schulman et al. , 2017 ) serves a similar purpose to V-trace and was used in OpenAI Rapid ( Petrov et al. , 2018 ) with the actor-learner architecture extended with Redis ( redis.io ) , an in-memory data store , and was scaled to 128,000 CPUs . For inexpensive environments like ALE , a single machine with multiple accelerators can achieve results quickly ( Stooke & Abbeel , 2018 ) . This approach was taken a step further by converting ALE to run on a GPU ( Dalton et al. , 2019 ) . A third class of algorithms is evolutionary algorithms . With simplicity and massive scale , they have achieved impressive results on a number of tasks ( Salimans et al. , 2017 ; Such et al. , 2017 ) . Besides algorithms , there exist a number of useful libraries and frameworks for reinforcement learning . ELF ( Tian et al. , 2017 ) is a framework for efficiently interacting with environments , avoiding Python global-interpreter-lock contention . Dopamine ( Castro et al. , 2018 ) is a flexible research focused RL framework with a strong emphasis on reproducibility . It has state of the art agent implementations such as Rainbow ( Hessel et al. , 2017 ) but is single-threaded . TF-Agents ( Guadarrama et al. , 2018 ) and rlpyt ( Stooke & Abbeel , 2019 ) both have a broader focus with implementations for several classes of algorithms but as of writing , they do not have distributed capability for largescale RL . RLLib ( Liang et al. , 2017 ) provides a number of composable distributed components and a communication abstraction with a number of algorithm implementations such as IMPALA and Ape-X . Concurrent with this work , TorchBeast ( Küttler et al. , 2019 ) was released which is an implementation of single-machine IMPALA with remote environments . SEED is closest related to IMPALA , but has a number of key differences that combine the benefits of single-machine training with a scalable architecture . Inference is moved to the learner but environments run remotely . This is combined with a fast communication layer to mitigate latency issues from the increased number of remote calls . The result is significantly faster training at reduced costs by as much as 80 % for the scenarios we consider . Along with a policy gradients ( V-trace ) implementation we also provide an implementation of state of the art Q-learning ( R2D2 ) . In the work we use TPUs but in principle , any modern accelerator could be used in their place . TPUs are particularly well-suited given they high throughput for machine learning applications and the scalability . Up to 2048 cores are connected with a fast interconnect providing 100+ petaflops of compute . 3 ARCHITECTURE . Before introducing the architecture of SEED , we first analyze the generic actor-learner architecture used by IMPALA , which is also used in various forms in Ape-X , OpenAI Rapid and others . An overview of the architecture is shown in Figure 1a . 
A large number of actors repeatedly read model parameters from the learner (or parameter servers). Each actor then uses its local model to sample actions and generate a full trajectory of observations, actions and policy logits/Q-values. Finally, this trajectory, along with the recurrent state, is transferred to a shared queue or replay buffer. Asynchronously, the learner reads batches of trajectories from the queue/replay buffer and optimizes the model. There are a number of reasons why this architecture falls short: 1. Using CPUs for neural network inference: The actor machines are usually CPU-based (occasionally GPU-based for expensive environments). CPUs are known to be computationally inefficient for neural networks (Raina et al., 2009). When the computational needs of a model increase, the time spent on inference starts to outweigh the environment step computation. The solution is to increase the number of actors, which increases the cost and affects convergence (Espeholt et al., 2018). 2. Inefficient resource utilization: Actors alternate between two tasks: environment steps and inference steps. The compute requirements of the two tasks are often dissimilar, which leads to poor utilization or slow actors. E.g. some environments are inherently single-threaded while neural networks are easily parallelizable. 3. Bandwidth requirements: Model parameters, recurrent state and observations are transferred between actors and learners. Relative to the model parameters, the observation trajectory often accounts for only a few percent of the transferred data.1 Furthermore, memory-based models send large states, increasing bandwidth requirements. While single-machine approaches such as GA3C (Mahmood, 2017) and single-machine IMPALA avoid using the CPU for inference (1) and have no network bandwidth requirements (3), they are restricted by resource usage (2) and lack the scale required for many types of environments. The architecture used in SEED (Figure 1b) solves the problems mentioned above. Inference and trajectory accumulation are moved to the learner, which makes it conceptually a single-machine setup with remote environments (besides handling failures). Moving the logic effectively reduces the actors to a small loop around the environments. For every single environment step, the observations are sent to the learner, which runs the inference and sends actions back to the actors. This introduces a new problem: 4. Latency. To minimize latency, we created a simple framework that uses gRPC (grpc.io) - a high-performance RPC library. Specifically, we employ streaming RPCs where the connection from actor to learner is kept open and metadata is sent only once. Furthermore, the framework includes a batching module that efficiently batches multiple actor inference calls together. In cases where actors can fit on the same machine as learners, gRPC uses unix domain sockets and thus reduces latency, CPU and syscall overhead. Overall, the end-to-end latency, including network and inference, is faster for a number of the models we consider (see Appendix A.7). 1With 100,000 observations sent per second (96 x 72 x 3 bytes each), a trajectory length of 20 and a 30MB model, the total bandwidth requirement is 148 GB/s. Transferring observations uses only 2 GB/s. The IMPALA and SEED architectures differ in that for SEED, at any point in time, only one copy of the model exists, whereas for distributed IMPALA each actor has its own copy.
This changes the way the trajectories are off-policy. In IMPALA (Figure 2a), an actor uses the same policy π_{θ_t} for an entire trajectory. For SEED (Figure 2b), the policy during the unroll of a trajectory may change multiple times, with later steps using more recent policies that are closer to the one used at optimization time. A detailed view of the learner in the SEED architecture is shown in Figure 3. Three types of threads are running: 1. Inference, 2. Data prefetching, and 3. Training. Inference threads receive a batch of observations, rewards and episode termination flags. They load the recurrent states and send the data to the inference TPU core. The sampled actions and new recurrent states are received, and the actions are sent back to the actors while the latest recurrent states are stored. When a trajectory is fully unrolled, it is added to a FIFO queue or replay buffer and later sampled by data prefetching threads. Finally, the trajectories are pushed to a device buffer for each of the TPU cores taking part in training. The training thread (the main Python thread) takes the prefetched trajectories, computes gradients using the training TPU cores and applies the gradients to the models of all TPU cores (inference and training) synchronously. The ratio of inference to training cores can be adjusted for maximum throughput and utilization. The architecture scales to a TPU pod (2048 cores) by round-robin assigning actors to TPU host machines and having separate inference threads for each TPU host. When actors wait for a response from the learner they are idle, so in order to fully utilize the machines, we run multiple environments on a single actor. To summarize, we solve the issues listed previously by: 1. Moving inference to the learner and thus eliminating any neural-network-related computation from the actors. Increasing the model size in this architecture will not increase the need for more actors (in fact the opposite is true). 2. Batching inference on the learner and having multiple environments on each actor. This fully utilizes both the accelerators on the learner and the CPUs on the actors. The number of TPU cores for inference and training is finely tuned to match the inference and training workloads. All of these factors help reduce the cost of experiments. 3. Keeping everything involving the model on the learner and sending only observations and actions between the actors and the learner. This reduces bandwidth requirements by as much as 99%. 4. Using streaming gRPC, which has minimal latency and minimal overhead, and integrating batching into the server module. A conceptual sketch of this actor-learner split is given below. We provide the following two algorithms implemented in the SEED framework: V-trace and Q-learning.
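To make the actor-learner split concrete, the following conceptual sketch shows actors as thin loops around their environments and a central learner performing batched inference. Python queues stand in for the streaming gRPC layer, and the environment API, batch size and function names are illustrative; this is not the open-sourced SEED implementation.

```python
import queue
import numpy as np

# Conceptual sketch of SEED-style central inference: actors ship every
# observation to the learner and wait for the inferred action. In a real
# setup each actor and the inference loop would run in separate
# threads/processes and communicate over streaming RPCs.

request_q = queue.Queue()   # (actor_id, observation) -> learner
reply_qs = {}               # actor_id -> queue of actions

def actor(actor_id, env, num_steps):
    reply_qs[actor_id] = queue.Queue()
    obs = env.reset()
    for _ in range(num_steps):
        request_q.put((actor_id, obs))        # send observation to learner
        action = reply_qs[actor_id].get()     # wait for the inferred action
        obs, reward, done, _ = env.step(action)
        if done:
            obs = env.reset()

def inference_loop(policy, batch_size=32):
    while True:
        batch = [request_q.get()]
        while len(batch) < batch_size and not request_q.empty():
            batch.append(request_q.get())     # opportunistic batching of requests
        ids, observations = zip(*batch)
        actions = policy(np.stack(observations))   # one batched forward pass
        for actor_id, action in zip(ids, actions):
            reply_qs[actor_id].put(action)
```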
The paper presents SEED RL, a scalable reinforcement learning agent. The approach restructures the interface / division of functionality between the actors (environments) and the learner as compared to the distributed approach in IMPALA (a state-of-the-art distributed RL framework). Most importantly, the model lives only on the learner in SEED, while it is distributed to all actors in IMPALA.
SP:7f1af600e64c0ad693a9b1cc198bbaf39cd884c6
This paper presents a scalable reinforcement learning training architecture which combines a number of modern engineering advances to address the inefficiencies of prior methods. The proposed architecture shows good performance on a wide variety of benchmarks from ALE to DeepMind Lab and Google Research Football. Importantly for the community, the authors also open-source their code and provide a cost estimate which shows that the proposed framework is cheaper to run on cloud platforms.
SP:7f1af600e64c0ad693a9b1cc198bbaf39cd884c6
ROBUST GENERATIVE ADVERSARIAL NETWORK
INTRODUCTION. Generative adversarial networks (GANs) (Goodfellow et al., 2014) have been enjoying much attention recently due to their great success on different tasks and datasets (Radford et al., 2015) (Salimans et al., 2016) (Ho & Ermon, 2016) (Li et al., 2017) (Chongxuan et al., 2017). The framework of GANs can be formulated as a game between a generator and a discriminator. The generator tries to produce a fake distribution which approximates the real data distribution, while the discriminator attempts to distinguish the fake distribution from the real distribution. These two players compete with each other iteratively. GANs are also popular for their theoretical value. Training the discriminator is shown to be equivalent to training a good estimator of the density ratio between the fake distribution and the real one (Nowozin et al., 2016) (Uehara et al., 2016) (Mohamed & Lakshminarayanan, 2016). The discriminator generally measures the departure between the model distribution and the real data distribution with a certain divergence measure, e.g. the Jensen-Shannon divergence or an f-divergence (Nowozin et al., 2016). Arjovsky et al. proved that the supports of the fake and real distributions are typically disjoint and lie on low-dimensional manifolds, and that there is a nearly trivial discriminator which can correctly classify the real and fake data (Arjovsky et al., 2017). The loss of such a discriminator converges quickly to zero, which causes vanishing gradients for the generator. To alleviate this problem, Arjovsky et al. proposed the Wasserstein GAN, based on the Wasserstein metric, which does not require overlapping supports. Since it is inconvenient to minimize the Wasserstein distance directly, they solve the dual problem and clip the weights to enforce the Lipschitz condition on the discriminator. Later, Gulrajani et al. proposed the gradient penalty to guarantee the Lipschitz condition (Gulrajani et al., 2017). Spectral normalization has also been proposed to stabilize the training of the discriminator (Miyato et al., 2018). Most existing methods try to improve the stability of GANs by controlling the discriminator. However, the robustness of GANs has not been adequately considered. When the discriminator is not robust to noise (i.e., the discriminator cannot measure the distance between the fake and real distributions accurately), some examples might be misclassified, which consequently misleads the training of the generator. Meanwhile, poor generalization of the generator might produce "blurry" generated images for some input noise. Robust Conditional Generative Adversarial Networks were proposed to improve the robustness of conditional GANs to noised data. However, this method applies only to conditional GANs and only improves the ability of the generator to defend against noise (Chrysos et al., 2018). Some other researchers focus on the robustness of GANs to label noise (Thekumparampil et al., 2018) (Kaneko et al., 2019). In this paper, we attempt to improve the robustness of GANs in a systematic way by promoting the robustness of both the discriminator and the generator. We propose a novel robust method called robust generative adversarial network (RGAN), where the generator and discriminator still compete with each other iteratively, but in a worst-case setting. Specifically, a robust optimization is designed by considering the worst-case distribution within a small Wasserstein ball.
The generator tries to map the worst input distribution (rather than a specific distribution) to the real data distribution, while the discriminator attempts to distinguish the real and fake distributions under the worst perturbation. We provide some theoretical analysis for the proposed robust GAN, including generalization. We also implement our robust framework on different baseline GANs (i.e., DCGAN, WGAN-GP, and BWGAN) (Radford et al., 2015; Adler & Lunz, 2018), observing substantial and consistent improvements on all the datasets used in this paper.

GENERATIVE ADVERSARIAL NETWORK. The principle of a GAN is a game between two players, a generator and a discriminator, both of which are usually formulated as deep neural networks. The generator tries to generate fake examples to fool the discriminator, while the discriminator attempts to distinguish between fake and real images. Formally, the training procedure of a GAN can be formulated as
$$\min_G \max_D S(G, D) \triangleq \mathbb{E}_{x \sim P_r}[\log D(x)] + \mathbb{E}_{\tilde{x} \sim P_g}[\log(1 - D(\tilde{x}))] \quad (1)$$
where $x$ and $\tilde{x} = G(z)$ are real and fake examples sampled from the real data distribution $P_r$ and the generation distribution $P_g$ respectively. The generation distribution is defined by $G(z)$ with $z \sim P_z$ ($P_z$ is a specific input noise distribution). The minimax problem cannot be solved directly since the expectations over the real and generation distributions are usually intractable. Therefore, the approximate problem is defined as
$$\min_G \max_D S_m(G, D) \triangleq \frac{1}{m}\sum_{i=1}^{m} \log D(x_i) + \frac{1}{m}\sum_{i=1}^{m} \log(1 - D(G(z_i))) \quad (2)$$
where $m$ examples $x_i$ and $z_i$ are sampled from the distributions $P_r$ and $P_z$, and the mean loss is used to approximate the original problem. However, training in this way might not ensure good robustness of the discriminator and generator: some noised images might not be classified correctly, and certain input noise points may cause degraded generations. In this paper, to alleviate this problem, we design a distributionally robust optimization. In particular, we consider the worst distribution (rather than a specific single distribution) within a small range.

ROBUST GENERATIVE ADVERSARIAL NETWORK. As discussed in the previous sections, although most existing GAN methods can stabilize the training of the discriminator, robustness may not be adequately considered. In other words, the discriminator might not perform well on some noised data, which consequently misleads the training of the generator. Similarly, the generator might produce poor generations for certain input noise points if its robustness is not good. To alleviate this problem, we design a distributionally robust optimization for GANs. Before discussing how we achieve this, we first elaborate on distributionally robust optimization.

DISTRIBUTIONALLY ROBUST OPTIMIZATION. Let $d : \mathcal{X} \times \mathcal{X} \rightarrow \mathbb{R}_+ \cup \{\infty\}$; the departure between $x$ and $x_0$ can then be represented by $d(x, x_0)$. For distributionally robust optimization, a robustness region $\mathcal{P} = \{P : D(P, P_0) \le \rho\}$ is considered, i.e., a $\rho$-neighborhood of the distribution $P_0$ under the divergence $D(\cdot, \cdot)$, instead of a single distribution.¹ The distributionally robust optimization can be formulated as (Sinha et al., 2017)
$$\min_\theta \sup_{P \in \mathcal{P}} \mathbb{E}_P[\ell(X; \theta)] \quad (3)$$
where $\ell(\cdot)$ is a loss function parameterized by $\theta$. Problem (3) is typically intractable for arbitrary $\rho$. (¹Normally, the Wasserstein metric $W(\cdot, \cdot)$ is used, with the corresponding $d(x, x_0) = \|x - x_0\|_p^2$ for some $p > 0$.)
In order to solve this problem, we first present a proposition.

Proposition 0.1. Let $\ell : \Theta \times \mathcal{X} \rightarrow \mathbb{R}$ and $d : \mathcal{X} \times \mathcal{X} \rightarrow \mathbb{R}_+$ be continuous. Then, for any distribution $P_0$ and any $\rho > 0$, we have
$$\sup_{P \in \mathcal{P}} \left\{ \mathbb{E}_P[\ell(X; \theta)] - \gamma W(P, P_0) \right\} = \mathbb{E}_{P_0}\left[ \sup_{x \in \mathcal{X}} \{ \ell(x; \theta) - \gamma d(x, x_0) \} \right] \quad (4)$$
(The proof is provided in (Sinha et al., 2017).) With Proposition 0.1, we can reformulate (3) with the Lagrangian relaxation as
$$\min_\theta \mathbb{E}_{P_0}\left[ \sup_{x \in \mathcal{X}} \{ \ell(x; \theta) - \lambda d(x, x_0) \} \right] \quad (5)$$
where the second term $d(x, x_0)$ restricts the distance between the two points.

ROBUST TRAINING OVER GENERATOR. With distributionally robust optimization in hand, we first discuss how to perform robust training of the generator. The generator of a GAN tries to map a noise distribution $P_z$ to the image distribution $P_r$. The objective of the generator is
$$\min_G \frac{1}{m}\sum_{i=1}^{m} \log(1 - D(G(z_i))), \quad z_i \sim P_z \quad (6)$$
Typically, $P_z$ is a Gaussian distribution. To improve robustness, we consider all possible distributions within the robust region $\mathcal{P}_z = \{P : W(P, P_z) \le \rho_z\}$ rather than a single specific distribution (typically a Gaussian in most existing GANs). Here we use the Wasserstein metric to measure the distance between $P$ and $P_z$, where $P$ lies in the $\rho_z$-neighborhood of the original distribution $P_z$. Since it is difficult to consider all the distributions in this small region, the alternative is to consider their upper bound (the worst distribution). The robust optimization problem for $G$ is then
$$\min_G \sup_{P \in \mathcal{P}_z} \frac{1}{m}\sum_{i=1}^{m} \log(1 - D(G(z_i))), \quad z_i \sim P \quad (7)$$
According to Proposition 0.1, we can relax (7) as
$$\min_G \max_{r} \frac{1}{m}\sum_{i=1}^{m} \left[ \log(1 - D(G(z_i + r_i))) - \lambda_z \|r_i\|_2^2 \right], \quad z_i \sim P_z \quad (8)$$
Different from previous methods, our method attempts to map the worst distribution (in the $\rho_z$-neighborhood of the original distribution $P_z$) to the image distribution. Intuitively, we sample the noise points that are most likely (i.e., the worst) to generate blurry images and optimize the generator on these risky points. Therefore, such a generator is robust against poor input noise and is less likely to generate low-quality images.

ROBUST TRAINING OVER DISCRIMINATOR. In traditional GANs described by (2), the generator attempts to generate a fake distribution to approximate the real data distribution, while the discriminator tries to learn the decision boundary that separates the real and fake distributions. Apparently, a discriminator with poor robustness would inevitably mislead the training of the generator. In this section, we utilize the popular adversarial learning method and propose a robust optimization method to improve the discriminator's robustness on both clean and noised data. Specifically, we define robust regions for both the fake distribution, $\mathcal{P}_g = \{P : W(P, P_g) \le \rho_g\}$, and the real distribution, $\mathcal{P}_r = \{P : W(P, P_r) \le \rho_r\}$. The generator tries to reduce the distance between the fake distribution $P_g$ and the real distribution $P_r$. The discriminator attempts to separate the worst distributions in $\mathcal{P}_g$ and $\mathcal{P}_r$.
Intuitively, the worst distributions are closer to the decision boundary (less discriminative), and they can guide the discriminator to perform well on "confusing" data points near the classification boundary (such a discriminator can be more robust than the original one). We can reformulate (2) in the robust version as
$$\max_D \sup_{P_1 \in \mathcal{P}_r} \frac{1}{m}\sum_{i=1}^{m} \log D(x'_i) + \sup_{P_2 \in \mathcal{P}_g} \frac{1}{m}\sum_{i=1}^{m} \log(1 - D(G'(z_i))) \quad (9)$$
where $z_i \sim P_z$, $x'_i \sim P_1$, and $G' \sim P_2$. Using Proposition 0.1, we can relax this problem as
$$\max_D \min_{r_1, r_2} \frac{1}{m}\sum_{i=1}^{m} \log D(x_i + r_1^i) + \frac{1}{m}\sum_{i=1}^{m} \log(1 - D(G(z_i) + r_2^i)) + \frac{\lambda_d}{m}\sum_{i=1}^{m} \left[ \|r_1^i\|_2^2 + \|r_2^i\|_2^2 \right], \quad z_i \sim P_z,\; x_i \sim P_r \quad (10)$$
Here $r_1 = \{r_1^i\}_{i=1}^{m}$ is the set of small perturbations for the points sampled from the real distribution $P_r$, which tries to make the real distribution closer to the fake distribution, and $r_2 = \{r_2^i\}_{i=1}^{m}$ tries to make the fake distribution closer to the real one. Intuitively, these perturbations increase the difficulty of the discriminator's classification task by making real and fake data less distinguishable, which helps promote the robustness of the discriminator.
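To make the alternating worst-case training above concrete, the following PyTorch-style sketch shows how one RGAN update could be approximated. It is a minimal illustration rather than the authors' reference code: the paper does not specify how the inner problems over the perturbations are solved, so the worst-case perturbations r, r1, and r2 are approximated here with a few plain gradient steps, and the names and hyperparameters (inner_steps, inner_lr, lambda_z, lambda_d) are assumptions made for this sketch.

```python
import torch

def rgan_training_step(G, D, x_real, z, lambda_z=1.0, lambda_d=1.0,
                       inner_steps=3, inner_lr=0.01):
    """One RGAN update in the spirit of Eqs. (8) and (10).

    Assumes D outputs probabilities in (0, 1) (e.g., a final sigmoid).
    The .mean() penalties absorb the 1/m scaling of the paper into lambda.
    """
    # --- Worst-case perturbations for the discriminator, Eq. (10) ---
    # r1 perturbs real images, r2 perturbs generated images; both are chosen
    # to *minimize* the discriminator objective plus an L2 penalty.
    x_fake = G(z).detach()
    r1 = torch.zeros_like(x_real, requires_grad=True)
    r2 = torch.zeros_like(x_fake, requires_grad=True)
    for _ in range(inner_steps):
        d_obj = (torch.log(D(x_real + r1)).mean()
                 + torch.log(1 - D(x_fake + r2)).mean()
                 + lambda_d * (r1.pow(2).mean() + r2.pow(2).mean()))
        g1, g2 = torch.autograd.grad(d_obj, [r1, r2])
        with torch.no_grad():
            r1 -= inner_lr * g1   # descend: make real/fake harder to separate
            r2 -= inner_lr * g2
    d_loss = -(torch.log(D(x_real + r1.detach())).mean()
               + torch.log(1 - D(x_fake + r2.detach())).mean())
    # ... d_loss.backward(); discriminator optimizer step ...

    # --- Worst-case input-noise perturbation for the generator, Eq. (8) ---
    r = torch.zeros_like(z, requires_grad=True)
    for _ in range(inner_steps):
        g_obj = (torch.log(1 - D(G(z + r))).mean()
                 - lambda_z * r.pow(2).mean())
        (grad_r,) = torch.autograd.grad(g_obj, [r])
        with torch.no_grad():
            r += inner_lr * grad_r  # ascend: find the worst noise points
    g_loss = torch.log(1 - D(G(z + r.detach()))).mean()
    # ... g_loss.backward(); generator optimizer step ...
    return d_loss, g_loss
```

A single-step (FGSM-style) approximation of the inner loops, or warm-starting the perturbations from the previous iteration, would be natural variants of the same idea.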
Developing stable GAN training methods has gained much attention in recent years. This paper proposes to tackle the issue by bringing distributionally robust optimization into GAN training. Its main contribution is to combine the framework of Sinha et al. with GANs, proposing a new GAN training method on the basis of the vanilla GAN. The relevant theoretical results are proved and detailed experiments are conducted.
SP:1e09b69a3d713355bd967d8205fde97a911042e7
ROBUST GENERATIVE ADVERSARIAL NETWORK
The present work proposes to combine GANs with adversarial training, replacing the original GAN loss with a mixture of the original GAN loss and an adversarial loss that applies an adversarial perturbation to both the input image of the discriminator and the input noise of the generator. The resulting algorithm is called robust GAN (RGAN). Existing results of [Goodfellow et al 2014] (characterizing optimal generators and discriminators in terms of the density of the true data) are adapted to the new loss functions, and generalization bounds akin to [Arora et al 2017] are proved. Extensive experiments show a small but consistent improvement over a baseline method.
SP:1e09b69a3d713355bd967d8205fde97a911042e7
Deep Symbolic Superoptimization Without Human Knowledge
1 INTRODUCTION. Superoptimization refers to the task of simplifying and optimizing over a set of machine instructions, or code (Massalin, 1987; Schkufza et al., 2013), which is a fundamental problem in computer science. As an important direction in superoptimization, symbolic expression simplification, or symbolic superoptimization, aims at transforming symbolic expressions into a simpler form in an effective way, so as to avoid unnecessary computations. Symbolic superoptimization is an important component in compilers, e.g., LLVM and Halide, and it also has wide application in mathematical engines including Wolfram², Matlab, and Sympy. Over recent years, applying deep learning methods to symbolic superoptimization has attracted great attention. Despite their variety, existing algorithms can be roughly divided into two categories. The first category is supervised learning, i.e., learning a mapping between input expressions and output simplified expressions from a large number of human-constructed expression pairs (Arabshahi et al., 2018; Zaremba & Sutskever, 2014). Such methods rely heavily on a human-constructed dataset, which is time- and labor-consuming. What is worse, such systems are highly susceptible to bias, because it is generally very hard to define a minimal and comprehensive axiom set for training. It is highly possible that some forms of equivalence are not covered in the training set and fail to be recognized by the model. In order to remove the dependence on human annotations, the second category of methods leverages reinforcement learning to autonomously discover simplifying equivalences (Chen et al., 2018). However, to make the action space tractable, such systems still rely on a set of equivalent transformation actions defined by human beings, which again suffers from labeling cost and learning bias. In short, existing neural symbolic superoptimization algorithms all require human input to define equivalences. An algorithm independent of human knowledge would offer improved efficiency and potentially better simplification. In fact, symbolic superoptimization should naturally keep humans outside the loop, because it directly operates on machine code, whose consumers and evaluators are machines, not humans. (∗Authors contributed equally to this paper. ¹The code is available at https://github.com/shihui2010/symbolic_simplifier. ²https://www.wolframalpha.com/) Therefore, we propose Human-Independent Symbolic Superoptimization (HISS), a reinforcement learning framework for symbolic superoptimization that is completely independent of human knowledge. Instead of using human-defined equivalences, HISS adopts a set of unsupervised techniques to maintain the tractability of the action space. First, HISS introduces a tree-LSTM encoder-decoder architecture with attention to ensure that its exploration is confined to the set of syntactically correct expressions. Second, the process of generating a simplified expression is broken into two stages: the first stage selects a sub-expression that can be simplified, and the second stage simplifies that sub-expression. We performed a set of evaluations on artificially generated expressions as well as a publicly available code dataset, the Halide dataset (Chen & Tian, 2018), and show that HISS achieves competitive performance.
We also find that HISS can automatically discover simplification rules that are not included in the human-predefined rules of the Halide dataset.

2 RELATED WORK. Superoptimization originates from 1987 with the first design by Massalin (1987). Using probabilistic testing to reduce the testing cost, the brute-force search is aided with a pruning strategy that avoids sub-spaces containing pieces of code with known shorter alternatives. Due to the explosive search space of exhaustive search, the capability of the first superoptimizer is limited to very short programs. More than a decade later, Joshi et al. (2002) presented Denali, which splits the superoptimization problem into two phases in order to optimize longer programs. STOKE (Schkufza et al., 2013) follows the two phases but sacrifices completeness for efficiency in the second phase. Recent attempts to improve superoptimization fall into two fields: exploring transformation rules and accelerating trajectory search. Searching for rules is similar to superoptimization on limited-size programs, but targets the comprehensiveness of the rules. Buchwald (2015) exhaustively enumerates all possible expressions given the syntax and checks the equivalence of pairs of expressions with an SMT solver. A similar method that adapts the SMT solver to reuse previous results is proposed by Jangda & Yorsh (2017). On the other hand, deep neural networks have been trained to guide the trajectory search (Cai et al., 2018; Chen & Tian, 2018). Considering transformation rule discovery as limited-space superoptimization, the large action space and sparse reward are the main challenges for using neural networks. Special neural generator structures have been proposed for decoding valid symbolic programs, which leverage syntax constraints to reduce the search space as well as learn the reasoning of operations, and are gaining popularity in program synthesis (Parisotto et al., 2016; Zhong et al., 2017; Bunel et al., 2018), program translation (Chen et al., 2018; Drissi et al., 2018), and other code generation tasks (Ling et al., 2016; Alvarez-Melis & Jaakkola, 2016). Among symbolic expression decoders, the family of tree-structured RNNs (Parisotto et al., 2016; Drissi et al., 2018; Alvarez-Melis & Jaakkola, 2016; Chen et al., 2018) is more flexible than template-based predictors (Ling et al., 2016; Zhong et al., 2017).

3 THE HISS ARCHITECTURE. In this section, we detail our proposed HISS architecture. We first introduce a few notations: $T$ denotes a tree, $a$ denotes a vector, and $A$ denotes a matrix. We introduce an $\mathrm{LSTM}(\cdot)$ function that summarizes the standard one-step LSTM operation as
$$[h_t, c_t] = \mathrm{LSTM}(x_t, h_{t-1}, c_{t-1}), \quad (1)$$
where $h_t$, $c_t$, and $x_t$ denote the output, cell, and input at time $t$ of a standard LSTM, respectively.

3.1 FRAMEWORK OVERVIEW. Our problem can be formulated as follows. Given a symbolic expression $T_I$, represented in expression tree form, our goal is to find a simplified expression $T_O$ such that 1) the two expressions are equivalent, and 2) $T_O$ contains a smaller number of nodes than $T_I$. It is important to write the symbolic expressions in their expression tree form, rather than as strings, because HISS operates on tree structures. An expression tree assigns a node to each operation or variable.
Each non-terminal node represents an operation, and each terminal node, or leaf node, represents a variable or a constant.

[Figure 1: The HISS architecture, illustrated on a three-node binary subtree, where i is the parent of j and k, and p is the parent of i. (a) The tree encoder (green) and subtree selector (orange). (b) The tree decoder.]

The arguments of an operation are represented as the descendant subtrees of the corresponding node. Compared to a string representation, the tree representation naturally ensures that any randomly generated expression in this form is syntactically correct. It also makes working with subexpressions easier: one simply works with subtrees. HISS approaches the problem using the reinforcement learning framework, where the action of generating a simplified expression is divided into two consecutive actions. The first action picks a subexpression (or subtree) that can be simplified, and the second action generates the simplified expression for the selected subexpression. Accordingly, HISS contains three modules. The first module is a tree encoder, which computes an embedding for each subtree (including the entire tree) of the input expression. The embeddings are useful both for picking a subtree for simplification and for simplifying a subtree. The second module is a subtree selector, which selects a subtree for simplification. The third module is a tree decoder with an attention mechanism, which generates a simplified expression based on the input subtree embedding. The subsequent subsections introduce each module in turn.

3.2 THE TREE ENCODER. The tree encoder generates an embedding for every subtree of the input expression. We apply the N-ary Tree LSTM as proposed in Tai et al. (2015), where N represents the maximum number of arguments that an operation has. Although different operations have different numbers of arguments, for structural uniformity we assume that all operations have N arguments, with the excess arguments being a NULL symbol. The tree encoder consists of two layers.
The first layer is called the embedding layer, a fully-connected layer that converts the one-hot representation of each input symbol into an embedding. The second layer is the tree LSTM layer, which is almost the same as a regular LSTM, except that the cell information now flows from the children nodes to their parent node. Formally, denote $c_i$, $h_i$, and $x_i$ as the cell, output, and input of node $i$ respectively. Then the tree LSTM encoder performs the following computation:
$$[h_i, c_i] = \mathrm{LSTM}\!\left(x_i, \bigcup_{j \in D(i)} h_j, \bigcup_{j \in D(i)} c_j\right), \quad (2)$$
where $D(i)$ denotes the set of children of node $i$. Fig. 1(a) plots the architecture of the tree LSTM encoder (in green). Since each node fuses the information from its children, which again fuse the information from their own children, it is easy to see that the output $h_i$ summarizes the information of the entire subtree led by node $i$, and thus can be regarded as an embedding for this subtree.
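To make the expression-tree representation and the bottom-up encoding of Eq. (2) concrete, the sketch below gives a minimal N-ary Tree LSTM encoder in PyTorch. It is a hypothetical illustration rather than the released implementation: the class names, the zero-padding used for missing (NULL) children, and the per-child forget gates (following Tai et al., 2015) are assumptions made for this sketch.

```python
import torch
import torch.nn as nn

class Node:
    """Expression tree node: an operator symbol with up to N children,
    or a terminal (variable/constant) with no children."""
    def __init__(self, symbol, children=()):
        self.symbol = symbol            # index into the symbol vocabulary
        self.children = list(children)
        self.h = None                   # filled in by the encoder
        self.c = None

class NaryTreeLSTMEncoder(nn.Module):
    """Bottom-up N-ary Tree LSTM encoder in the spirit of Eq. (2)."""
    def __init__(self, vocab_size, embed_dim, hidden_dim, n_ary=2):
        super().__init__()
        self.n_ary = n_ary
        self.hidden_dim = hidden_dim
        self.embed = nn.Embedding(vocab_size, embed_dim)  # embedding layer
        # input, output, and cell-update gates see all children concatenated
        self.iou = nn.Linear(embed_dim + n_ary * hidden_dim, 3 * hidden_dim)
        # one forget gate per child position
        self.f = nn.Linear(embed_dim + n_ary * hidden_dim, n_ary * hidden_dim)

    def forward(self, node):
        # Encode children first (post-order traversal); missing (NULL)
        # children are padded with zero states.
        child_h, child_c = [], []
        for k in range(self.n_ary):
            if k < len(node.children):
                h_k, c_k = self.forward(node.children[k])
            else:
                h_k = torch.zeros(self.hidden_dim)
                c_k = torch.zeros(self.hidden_dim)
            child_h.append(h_k)
            child_c.append(c_k)
        x = self.embed(torch.tensor(node.symbol))
        inp = torch.cat([x] + child_h)
        i, o, u = torch.chunk(self.iou(inp), 3)
        i, o, u = torch.sigmoid(i), torch.sigmoid(o), torch.tanh(u)
        f = torch.sigmoid(self.f(inp)).chunk(self.n_ary)
        c = i * u + sum(f_k * c_k for f_k, c_k in zip(f, child_c))
        h = o * torch.tanh(c)
        node.h, node.c = h, c           # h is the embedding of this subtree
        return h, c
```

Calling `forward(root)` leaves an embedding `h` on every node, so each subtree embedding needed by the subtree selector and the decoder is available after a single bottom-up pass.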
This paper presents a method for symbolic superoptimization — the task of simplifying equations into equivalent expressions. The main goal is to design a method that does not rely on human input in defining equivalence classes, which should improve scalability of the simplification method to a larger set of expressions. The solution uses a reinforcement learning method for training a neural model that transforms an equation tree into a simpler but equivalent one. The model consists of (i) a tree encoder, a recursive LSTM that operates over the input equation tree, (ii) a sub-tree selector, a probability distribution over the nodes in the input equation tree, and (iii) a tree decoder, a two-layer LSTM that includes a tree layer and a symbol generation layer. The RL reward uses an existing method for determining soft equivalence between the output tree and the input tree, along with a positive score for compression.
SP:c163bfc3f289c0c63ec25bbd21a63a921518ed22
Deep Symbolic Superoptimization Without Human Knowledge
This paper provides a novel approach to the problem of simplifying symbolic expressions without relying on human input and information. To achieve this, they apply a REINFORCE framework with a reward function involving the number of symbols in the final output together with a probabilistic testing scheme to determine equivalence. The model itself consists of a tree-LSTM-based encoder-decoder module with attention, together with a subtree selector. Their main contribution is that this framework works entirely independently of human-labelled data. In their experiments, they show that this deep learning approach outperforms their provided human-independent baselines, while sharing similar performance with human-dependent ones.
SP:c163bfc3f289c0c63ec25bbd21a63a921518ed22
Budgeted Training: Rethinking Deep Neural Network Training Under Resource Constraints
In most practical settings and theoretical analyses, one assumes that a model can be trained until convergence. However, the growing complexity of machine learning datasets and models may violate such assumptions. Indeed, current approaches for hyper-parameter tuning and neural architecture search tend to be limited by practical resource constraints. Therefore, we introduce a formal setting for studying training under the non-asymptotic, resource-constrained regime, i.e., budgeted training. We analyze the following problem: "given a dataset, algorithm, and fixed resource budget, what is the best achievable performance?" We focus on the number of optimization iterations as the representative resource. Under such a setting, we show that it is critical to adjust the learning rate schedule according to the given budget. Among budget-aware learning schedules, we find simple linear decay to be both robust and high-performing. We support our claim through extensive experiments with state-of-the-art models on ImageNet (image classification), Kinetics (video classification), MS COCO (object detection and instance segmentation), and Cityscapes (semantic segmentation). We also analyze our results and find that the key to a good schedule is budgeted convergence, a phenomenon whereby the gradient vanishes at the end of each allowed budget. We also revisit existing approaches for fast convergence and show that budget-aware learning schedules readily outperform such approaches under the (practical but under-explored) budgeted training setting.

1 INTRODUCTION. Deep neural networks have made an undeniable impact in advancing the state-of-the-art for many machine learning tasks. Improvements have been particularly transformative in computer vision (Huang et al., 2017b; He et al., 2017). Many of these performance improvements were enabled by an ever-increasing amount of labeled visual data (Russakovsky et al., 2015; Kuznetsova et al., 2018) and innovations in training architectures (Krizhevsky et al., 2012; He et al., 2016). However, as training datasets continue to grow in size, we argue that an additional limiting factor is that of resource constraints for training. Conservative prognostications of dataset sizes – particularly for practical endeavors such as self-driving cars (Bojarski et al., 2016), assistive medical robots (Taylor et al., 2008), and medical analysis (Fatima & Pasha, 2017) – suggest one will train on datasets orders of magnitude larger than those that are publicly available today. Such planning efforts will become more and more crucial, because in the limit, it might not even be practical to visit every training example before running out of resources (Bottou, 1998; Rai et al., 2009). We note that resource-constrained training is already implicitly widespread, as the vast majority of practitioners have access to limited compute. This is particularly true for those pursuing research directions that require a massive number of training runs, such as hyper-parameter tuning (Li et al., 2017) and neural architecture search (Zoph & Le, 2017; Cao et al., 2019; Liu et al., 2019). †Work done while at Argo AI. Instead of asking "what is the best performance one can achieve given this data and algorithm?
", which has been the primary focus in the field so far, we decorate this question with budgeted training constraints as follows: "what is the best performance one can achieve given this data and algorithm within the allowed budget?". Here, the allowed budget refers to a limitation on the total time, compute, or cost spent on training. More specifically, we focus on limiting the number of iterations. This allows us to abstract out the specific constraint without loss of generality, since any one of the aforementioned constraints can be converted to a finite iteration limit. We make the underlying assumption that the network architecture is constant throughout training, though it may be interesting to entertain changes in architecture during training (Rusu et al., 2016; Wang et al., 2017). Much of the theoretical analysis of optimization algorithms focuses on asymptotic convergence and optimality (Robbins & Monro, 1951; Nemirovski et al., 2009; Bottou et al., 2018), which implicitly makes use of an infinite compute budget. That said, there exists a wide body of work (Zinkevich, 2003; Kingma & Ba, 2015; Reddi et al., 2018; Luo et al., 2019) that provides performance bounds which depend on the iteration number and apply even in the non-asymptotic regime. Our work differs in its exploration of maximizing performance for a fixed number of iterations. Importantly, the globally optimal solution may not even be achievable in our budgeted setting. Given a limited budget, one obvious strategy might be data subsampling (Bachem et al., 2017; Sener & Savarese, 2018). However, we discover that a much more effective, simpler, and under-explored strategy is adopting budget-aware learning rate schedules: if we know that we are limited to a single epoch, we should tune the learning schedule accordingly. Such budget-aware schedules have been proposed in previous work (Feyzmahdavian et al., 2016; Lian et al., 2017), but often for a fixed learning rate that depends on dataset statistics. In this paper, we specifically point out that linearly decaying the learning rate to 0 at the end of the budget may be more robust than the more complicated strategies suggested in prior work. Though we are motivated by budget-aware training, we find that a linear schedule is quite competitive for general learning settings as well. We verify our findings with state-of-the-art models on ImageNet (image classification), Kinetics (video classification), MS COCO (object detection and instance segmentation), and Cityscapes (semantic segmentation). We conduct several diagnostic experiments that analyze learning rate decays under the budgeted setting. We first observe a statistical correlation between the learning rate and the full gradient magnitude (over the entire dataset). Decreasing the learning rate empirically results in a decrease in the full gradient magnitude. Eventually, as the former goes to zero, the latter vanishes as well, suggesting that the optimization has reached a critical point, if not a local minimum (whether such a solution is exactly a local minimum or not is debatable; see Sec. 2). We call this phenomenon budgeted convergence, and we find it generalizes across budgets. On one hand, it implies that one should decay the learning rate to zero at the end of training, even given a small budget. On the other hand, it implies one should not aggressively decay the learning rate early in the optimization (as is the case with an exponential schedule), since this may slow down later progress.
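The budgeted-convergence diagnostic described above depends on measuring the magnitude of the full gradient, i.e., the gradient of the training loss averaged over the entire dataset, at different points of the schedule. A minimal sketch of that measurement is given below; it assumes a PyTorch model and data loader, and the function name and the batch-accumulation strategy are choices made for illustration, not taken from the paper.

```python
import torch

def full_gradient_norm(model, loss_fn, data_loader, device="cpu"):
    """L2 norm of the gradient of the dataset-average training loss."""
    model.zero_grad()
    total_examples = 0
    for inputs, targets in data_loader:
        inputs, targets = inputs.to(device), targets.to(device)
        batch = inputs.size(0)
        # loss_fn is assumed mean-reduced per batch; rescale so the
        # accumulated gradients correspond to a sum over examples
        loss = loss_fn(model(inputs), targets) * batch
        loss.backward()
        total_examples += batch
    sq = 0.0
    for p in model.parameters():
        if p.grad is not None:
            sq += (p.grad / total_examples).pow(2).sum().item()
    model.zero_grad()
    return sq ** 0.5
```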
Finally, we show that linear budget-aware schedules outperform recently-proposed fast-converging methods that make use of adaptive learning rates and restarts. Our main contributions are as follows:
• We introduce a formal setting for budgeted training based on training iterations and provide an alternative perspective on existing learning rate schedules.
• We discover that budget-aware schedules are handy solutions to budgeted training. Specifically, our proposed linear schedule is simpler, more robust, and more effective than prior approaches, for both budgeted and general training.
• We provide an empirical justification of the effectiveness of learning rate decay based on the correlation between the learning rate and the full gradient norm.

2 RELATED WORK. Learning rates. Stochastic gradient descent dates back to Robbins & Monro (1951). The core is its update step $w_t = w_{t-1} - \alpha_t g_t$, where $t$ (from 1 to $T$) is the iteration, $w$ are the parameters to be learned, $g$ is the gradient estimator of the objective function² $F$, and $\alpha_t$ is the learning rate, also known as the step size. Given a base learning rate $\alpha_0$, we can define the ratio $\beta_t = \alpha_t / \alpha_0$. The set $\{\beta_t\}_{t=1}^{T}$ is called the learning rate schedule, which specifies how the learning rate should vary over the course of training. Our definition differs slightly from prior art in that it separates the base learning rate from the learning rate schedule. Learning rates are well studied for (strongly) convex cost surfaces, and we include a brief review in Appendix H. Learning rate schedule for deep learning. In deep learning, there is no consensus on the exact role of the learning rate. Most theoretical analysis assumes a small and constant learning rate (Du et al., 2018a;b; Hardt et al., 2016). For variable rates, one hypothesis is that large rates help move the optimization over large energy barriers while small rates help it converge to a local minimum (Loshchilov & Hutter, 2017; Huang et al., 2017a; Kleinberg et al., 2018). This hypothesis is questioned by recent analysis of mode connectivity, which has revealed that there does exist a descent path between solutions that were previously thought to be isolated local minima (Garipov et al., 2018; Dräxler et al., 2018; Gotmare et al., 2019). Despite the lack of a theoretical explanation, the community has adopted a variety of heuristic schedules for practical purposes, two of which are particularly common:
• step decay: drop the learning rate by a multiplicative factor $\gamma$ after every $d$ epochs. The default for $\gamma$ is 0.1, but $d$ varies significantly across tasks.
• exponential: $\beta_t = \gamma^t$. There is no default parameter for $\gamma$ and it requires manual tuning.
State-of-the-art codebases for standard vision benchmarks tend to employ step decay (Xie & Tu, 2015; Huang et al., 2017b; He et al., 2017; Carreira & Zisserman, 2017; Wang et al., 2018; Yin et al., 2019; Ma et al., 2019), whereas exponential decay has been successfully used to train Inception networks (Szegedy et al., 2015; 2016; 2017). In spite of their prevalence, these heuristics have not been well studied. Recent work proposes several new schedules (Loshchilov & Hutter, 2017; Smith, 2017; Hsueh et al., 2019), but much of this past work limits its evaluation to CIFAR and ImageNet.
For example, SGDR (Loshchilov & Hutter, 2017) advocates learning-rate restarts based on results on CIFAR; however, we find that the unexplained form of cosine decay in SGDR is more effective than the restart technique. Notably, Mishkin et al. (2017) demonstrate the effectiveness of linear rate decay with CaffeNet on downsized ImageNet. In our work, we rigorously evaluate on 5 standard vision benchmarks with state-of-the-art networks and under various budgets. Gotmare et al. (2019) also analyze learning rate restarts and, in addition, the warm-up technique, but do not analyze the specific form of learning rate decay. Adaptive learning rates. Adaptive learning rate methods (Tieleman & Hinton, 2012; Kingma & Ba, 2015; Reddi et al., 2018; Luo et al., 2019) adjust the learning rate according to local statistics of the cost surface. Despite having better theoretical bounds under certain conditions, they do not generalize as well as momentum SGD for benchmark tasks that are much larger than CIFAR (Wilson et al., 2017). We offer new insights by evaluating them under the budgeted setting. We show that fast descent can be trivially achieved through budget-aware schedules and that aggressive early descent is not desirable for achieving good performance in the end. ²Note that $g$ can be based on a single example, a mini-batch, the full training set, or the true data distribution. In most practical settings, momentum SGD is used, but we omit the momentum here for simplicity.
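The step-decay and exponential heuristics above, and the linear budget-aware decay advocated in this paper, can all be written as simple functions of the iteration t (and, for budget-aware schedules, of the budget T). The sketch below illustrates this; the function names, the default milestones, and the wiring into torch.optim.lr_scheduler.LambdaLR are illustrative choices, not the authors' code.

```python
def linear_schedule(t, T):
    """Budget-aware linear decay: beta_t reaches 0 exactly at the budget T."""
    return max(0.0, 1.0 - t / T)

def step_decay_schedule(t, T, gamma=0.1, milestones=(0.5, 0.75)):
    """Budget-aware step decay: drop by gamma at fixed fractions of the budget."""
    return gamma ** sum(t / T >= m for m in milestones)

def exponential_schedule(t, gamma):
    """Exponential decay beta_t = gamma ** t; gamma has no default and needs tuning."""
    return gamma ** t

# Example usage (assuming PyTorch): train for a budget of T iterations with
# the base learning rate scaled by the budget-aware linear schedule.
#
#   import torch
#   opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
#   sched = torch.optim.lr_scheduler.LambdaLR(opt, lambda t: linear_schedule(t, T))
#   for t in range(T):
#       ...  # one training iteration
#       opt.step()
#       sched.step()
```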
This work presents a simple technique for tuning the learning rate for neural network training under a "budget" -- the budget here is specified as a fixed number of epochs that is expected to be a small fraction of the total number of epochs required to achieve maximum accuracy. The main contribution of this paper is in showing that a simple linear decay schedule that goes to zero at the end of the proposed budget achieves good performance. The paper proposes a framework called budget-aware schedules, which covers any learning rate schedule where the ratio of the learning rate at time 't' to the base learning rate is only a function of the ratio of 't' to the total budget 'T'. In this family of schedules, the paper shows that a simple linear decay works best for all budgets. In the appendix, the authors compare their proposed schedule with adaptive techniques and show that under a given budget, it outperforms the latest adaptive techniques like AdaBound, AMSGrad, etc.
SP:2fa2a5ffa0193c0e5840bd18dc500739d2c369e0
Budgeted Training: Rethinking Deep Neural Network Training Under Resource Constraints
In most practical settings and theoretical analyses , one assumes that a model can be trained until convergence . However , the growing complexity of machine learning datasets and models may violate such assumptions . Indeed , current approaches for hyper-parameter tuning and neural architecture search tend to be limited by practical resource constraints . Therefore , we introduce a formal setting for studying training under the non-asymptotic , resource-constrained regime , i.e. , budgeted training . We analyze the following problem : “ given a dataset , algorithm , and fixed resource budget , what is the best achievable performance ? ” We focus on the number of optimization iterations as the representative resource . Under such a setting , we show that it is critical to adjust the learning rate schedule according to the given budget . Among budget-aware learning schedules , we find simple linear decay to be both robust and high-performing . We support our claim through extensive experiments with state-of-the-art models on ImageNet ( image classification ) , Kinetics ( video classification ) , MS COCO ( object detection and instance segmentation ) , and Cityscapes ( semantic segmentation ) . We also analyze our results and find that the key to a good schedule is budgeted convergence , a phenomenon whereby the gradient vanishes at the end of each allowed budget . We also revisit existing approaches for fast convergence and show that budgetaware learning schedules readily outperform such approaches under ( the practical but under-explored ) budgeted training setting . 1 INTRODUCTION . Deep neural networks have made an undeniable impact in advancing the state-of-the-art for many machine learning tasks . Improvements have been particularly transformative in computer vision ( Huang et al. , 2017b ; He et al. , 2017 ) . Much of these performance improvements were enabled by an ever-increasing amount of labeled visual data ( Russakovsky et al. , 2015 ; Kuznetsova et al. , 2018 ) and innovations in training architectures ( Krizhevsky et al. , 2012 ; He et al. , 2016 ) . However , as training datasets continue to grow in size , we argue that an additional limiting factor is that of resource constraints for training . Conservative prognostications of dataset sizes – particularly for practical endeavors such as self-driving cars ( Bojarski et al. , 2016 ) , assistive medical robots ( Taylor et al. , 2008 ) , and medical analysis ( Fatima & Pasha , 2017 ) – suggest one will train on datasets orders of magnitude larger than those that are publicly available today . Such planning efforts will become more and more crucial , because in the limit , it might not even be practical to visit every training example before running out of resources ( Bottou , 1998 ; Rai et al. , 2009 ) . We note that resource-constrained training already is implicitly widespread , as the vast majority of practitioners have access to limited compute . This is particularly true for those pursuing research directions that require a massive number of training runs , such as hyper-parameter tuning ( Li et al. , 2017 ) and neural architecture search ( Zoph & Le , 2017 ; Cao et al. , 2019 ; Liu et al. , 2019 ) . †Work done while at Argo AI . Instead of asking “ what is the best performance one can achieve given this data and algorithm ? 
” , which has been the primary focus in the field so far , we decorate this question with budgeted training constraints as follows : “ what is the best performance one can achieve given this data and algorithm within the allowed budget ? ” . Here , the allowed budget refers to a limitation on the total time , compute , or cost spent on training . More specifically , we focus on limiting the number of iterations . This allows us to abstract out the specific constraint without loss of generality since any one of the aforementioned constraints could be converted to a finite iteration limit . We make the underlying assumption that the network architecture is constant throughout training , though it may be interesting to entertain changes in architecture during training ( Rusu et al. , 2016 ; Wang et al. , 2017 ) . Much of the theoretical analysis of optimization algorithms focuses on asymptotic convergence and optimality ( Robbins & Monro , 1951 ; Nemirovski et al. , 2009 ; Bottou et al. , 2018 ) , which implicitly makes use of an infinite compute budget . That said , there exists a wide body of work ( Zinkevich , 2003 ; Kingma & Ba , 2015 ; Reddi et al. , 2018 ; Luo et al. , 2019 ) that provide performance bounds which depend on the iteration number , which apply even in the non-asymptotic regime . Our work differs in its exploration of maximizing performance for a fixed number of iterations . Importantly , the globally optimal solution may not even be achievable in our budgeted setting . Given a limited budget , one obvious strategy might be data subsampling ( Bachem et al. , 2017 ; Sener & Savarese , 2018 ) . However , we discover that a much more effective , simpler , and under-explored strategy is adopting budget-aware learning rate schedules — if we know that we are limited to a single epoch , one should tune the learning schedule accordingly . Such budget-aware schedules have been proposed in previous work ( Feyzmahdavian et al. , 2016 ; Lian et al. , 2017 ) , but often for a fixed learning rate that depends on dataset statistics . In this paper , we specifically point out linearly decaying the learning rate to 0 at the end of the budget , may be more robust than more complicated strategies suggested in prior work . Though we are motivated by budget-aware training , we find that a linear schedule is quite competitive for general learning settings as well . We verify our findings with state-of-the-art models on ImageNet ( image classification ) , Kinetics ( video classification ) , MS COCO ( object detection and instance segmentation ) , and Cityscapes ( semantic segmentation ) . We conduct several diagnostic experiments that analyze learning rate decays under the budgeted setting . We first observe a statistical correlation between the learning rate and the full gradient magnitude ( over the entire dataset ) . Decreasing the learning rate empirically results in a decrease in the full gradient magnitude . Eventually , as the former goes to zero , the latter vanishes as well , suggesting that the optimization has reached a critical point , if not a local minimum1 . We call this phenomenon budgeted convergence and we find it generalizes across budgets . On one hand , it implies that one should decay the learning rate to zero at the end of the training , even given a small budget . On the other hand , it implies one should not aggressively decay the learning rate early in the optimization ( such as the case with an exponential schedule ) since this may slow down later progress . 
Finally , we show that linear budget-aware schedules outperform recently-proposed fast-converging methods that make use of adaptive learning rates and restarts . Our main contributions are as follows : 1Whether such a solution is exactly a local minimum or not is debatable ( see Sec 2 ) . • We introduce a formal setting for budgeted training based on training iterations and provide an alternative perspective on existing learning rate schedules . • We discover that budget-aware schedules are readily applicable solutions to budgeted training . Specifically , our proposed linear schedule is simpler , more robust , and more effective than prior approaches , for both budgeted and general training . • We provide an empirical justification of the effectiveness of learning rate decay based on the correlation between the learning rate and the full gradient norm . 2 RELATED WORK . Learning rates . Stochastic gradient descent dates back to Robbins & Monro ( 1951 ) . The core is its update step : w_t = w_{t-1} − α_t g_t , where t ( from 1 to T ) is the iteration , w are the parameters to be learned , g is the gradient estimator for the objective function2 F , and α_t is the learning rate , also known as the step size . Given a base learning rate α_0 , we can define the ratio β_t = α_t / α_0 . The set of ratios { β_t , t = 1 , ... , T } is then called the learning rate schedule , which specifies how the learning rate should vary over the course of training . Our definition differs slightly from prior art in that it separates the base learning rate from the learning rate schedule . Learning rates are well studied for ( strongly ) convex cost surfaces and we include a brief review in Appendix H. Learning rate schedule for deep learning . In deep learning , there is no consensus on the exact role of the learning rate . Most theoretical analysis assumes a small and constant learning rate ( Du et al. , 2018a ; b ; Hardt et al. , 2016 ) . For variable rates , one hypothesis is that large rates help move the optimization over large energy barriers while small rates help it converge to a local minimum ( Loshchilov & Hutter , 2017 ; Huang et al. , 2017a ; Kleinberg et al. , 2018 ) . This hypothesis is questioned by recent analysis of mode connectivity , which has revealed that there does exist a descent path between solutions that were previously thought to be isolated local minima ( Garipov et al. , 2018 ; Dräxler et al. , 2018 ; Gotmare et al. , 2019 ) . Despite the lack of a theoretical explanation , the community has adopted a variety of heuristic schedules for practical purposes , two of which are particularly common : • step decay : drop the learning rate by a multiplicative factor γ after every d epochs . The default for γ is 0.1 , but d varies significantly across tasks . • exponential : β_t = γ^t . There is no default parameter for γ and it requires manual tuning . State-of-the-art codebases for standard vision benchmarks tend to employ step decay ( Xie & Tu , 2015 ; Huang et al. , 2017b ; He et al. , 2017 ; Carreira & Zisserman , 2017 ; Wang et al. , 2018 ; Yin et al. , 2019 ; Ma et al. , 2019 ) , whereas exponential decay has been successfully used to train Inception networks ( Szegedy et al. , 2015 ; 2016 ; 2017 ) . In spite of their prevalence , these heuristics have not been well studied . Recent work proposes several new schedules ( Loshchilov & Hutter , 2017 ; Smith , 2017 ; Hsueh et al. , 2019 ) , but much of this past work limits its evaluation to CIFAR and ImageNet .
For example , SGDR ( Loshchilov & Hutter , 2017 ) advocates for learning-rate restarts based on results on CIFAR ; however , we find that the unexplained form of cosine decay in SGDR is more effective than the restart technique itself . Notably , Mishkin et al . ( 2017 ) demonstrate the effectiveness of linear rate decay with CaffeNet on downsized ImageNet . In our work , we rigorously evaluate on 5 standard vision benchmarks with state-of-the-art networks and under various budgets . Gotmare et al . ( 2019 ) also analyze learning rate restarts and , in addition , the warm-up technique , but do not analyze the specific form of learning rate decay . Adaptive learning rates . Adaptive learning rate methods ( Tieleman & Hinton , 2012 ; Kingma & Ba , 2015 ; Reddi et al. , 2018 ; Luo et al. , 2019 ) adjust the learning rate according to local statistics of the cost surface . Despite having better theoretical bounds under certain conditions , they do not generalize as well as momentum SGD for benchmark tasks that are much larger than CIFAR ( Wilson et al. , 2017 ) . We offer new insights by evaluating them under the budgeted setting . We show that fast descent can be trivially achieved through budget-aware schedules and that aggressive early descent is not desirable for achieving good performance in the end . 2Note that g can be based on a single example , a mini-batch , the full training set , or the true data distribution . In most practical settings , momentum SGD is used , but we omit the momentum here for simplicity .
This paper analyzes which learning rate schedule (LRS) should be used when the budget (number of iterations) is limited. First, the authors introduce the concept of BAS (Budget-Aware Schedule). Various LRSs are classified, and it is experimentally shown that the LRSs based on BAS perform better. Among them, the linear decay method is shown to be both simple and robust.
SP:2fa2a5ffa0193c0e5840bd18dc500739d2c369e0
Translation Between Waves, wave2wave
1 INTRODUCTION . The problem in which training data are in short supply but can be supplemented by other sensor data is called an opportunistic sensor problem ( Roggen et al. , 2013 ) . For example , in human activity logs , video data may be missing in bathrooms for ethical reasons but can be supplied by environmental sensors , which raise fewer ethical concerns . For this purpose we propose to extend the sequence-to-sequence ( seq2seq ) model ( Cho et al. , 2014 ; Sutskever et al. , 2014 ; Dzmitry Bahdanau , 2014 ; Luong et al. , 2015 ) to translate a signal wave x ( a continuous time-series signal ) into another signal wave y . The straightforward extension does not apply for two reasons : ( 1 ) the lengths of x and y are radically different , and ( 2 ) both x and y are high dimensional . First , while most conventional seq2seq models handle input and output signals whose lengths are of the same order , we need to handle output signals whose lengths are sometimes considerably different from those of the input signals . For example , the sampling rate of a ground motion sensor is 100Hz and the duration of an earthquake is about 10sec . That is , the length of the output signal wave is 10000 times longer in this case . Therefore , segmentation along the temporal axis and discarding of uninformative signal waves are required . Second , signal waves can be high dimensional ; motion capture data has 129 dimensions and accelerometer data has 18 dimensions . While most conventional seq2seq models do not consider high-dimensional settings ( it is unusual to translate multiple languages simultaneously ) , we need to translate signal waves in high dimensions into other signal waves in high dimensions simultaneously . To overcome these two problems we propose 1 ) the window-based representation function and 2 ) the wave2wave iterative back-translation model in this paper . Our contributions are the following : • We propose a sliding window-based seq2seq model , wave2wave ( Section 4.1 ) , • We propose the wave2wave iterative back-translation model ( Section 4.2 ) , which is the key to outperforming on high-dimensional data . 2 RELATED WORKS . Related works include various encoder-decoder architectures and generative adversarial networks ( GANs ) . First , the encoder-decoder architecture has several variations : ( 1 ) CNNs on both sides ( Badrinarayanan et al. , 2015 ) , ( 2 ) RNNs on both sides ( Cho et al. , 2014 ; Sutskever et al. , 2014 ; Dzmitry Bahdanau , 2014 ; Luong et al. , 2015 ) , or ( 3 ) one side is a CNN and the other is an RNN ( Xu et al. , 2015 ) . When one side is related to an autoregressive model ( van den Oord et al. , 2016 ) , further variations appear . These architectures have distinct characteristics . The advantage of CNNs is efficient feature extraction and overall execution , while the advantage of RNNs is their excellent handling of time-series or sequential data . CNNs are relatively weak at handling time-series data . For this reason , the time domain is often handled by RNNs . The encoder-decoder architecture using CNNs on both sides is used for semantic segmentation ( Badrinarayanan et al. , 2015 ) , image denoising ( Mao et al. , 2016 ) , and super-resolution ( Chen et al. , 2018 ) , which are often not related to time series . In the context of time series , GluonTS ( Alexandrov et al. , 2019 ) uses the encoder-decoder approach for the time-series prediction task , where the parameters of the encoder and decoder are shared .
Apart from the difference of tasks , our approach does not share the parameters in encoder and those in decoder . All the more our model assumes that the time-series multi-modal data are related to the multi-view of the same targeted object which results in multiple modalities . Second , among various GAN architectures , several GANs aims at handling time-series aspect . Vid2vid ( Wang et al. , 2018 ) is an extension of pix2pix ( Isola et al. , 2016 ) which aims at handling video signals . ForGAN ( Koochali et al. , 2019 ) aims at time-series prediction task . 3 SEQ2SEQ . Architecture with context vector Let x1 : S denotes a source sentence consisting of time-series S words , i.e. , x1 : S = ( x1 , x2 , . . . , xS ) . Meanwhile , y1 : T = ( y1 , . . . , yT ) denotes a target sentence corresponding to x1 : S . With the assumption of a Markov property , the conditional probability p ( y1 : T |x1 : S ) , translation from a source sentence to a target sentence , is decomposed into a time-step translation p ( y|x ) as in ( 1 ) : log p ( y1 : T |x1 : S ) = T∑ t=1 log p ( yt|y < t , ct ) ( 1 ) where y < s = ( y1 , y2 , . . . , ys−1 ) and cs is a context vector representing the information of source sentence x1 : S to generate an output word yt . To realize such time-step translation , the seq2seq architecture consists of ( a ) a RNN ( Reccurent Neural Network ) encoder and ( b ) a RNN decoder . The RNN encoder computes the current hidden state hencs given the previous hidden state h enc s−1 and the current input xs , as in ( 2 ) : hencs = RNNenc ( xs , h enc s−1 ) ( 2 ) where RNNenc denotes a multi-layered RNN unit . The RNN decoder computes a current hidden state hdect given the previous hidden state and then compute an output yt . hdect = RNNdec ( h dec t−1 ) ( 3 ) pθ ( yt|y < t , ct ) = softmax ( gθ ( h dec t , ct ) ) ( 4 ) where RNNdec denotes a conditional RNN unit , gθ ( · ) is the output function to convert h dec t and ct to the logit of yt , and θ denotes parameters in RNN units . With training data D = { yn1 : T , xn1 : S } Nn=1 , the parameters θ are optimized so as to minimize the loss function L ( θ ) of log-likelihood : L ( θ ) = − 1 N N∑ n=1 T∑ t=1 log pθ ( y n t |yn < t , ct ) ( 5 ) or squared error : L ( θ ) = 1 N N∑ n=1 T∑ t=1 ( ynt − gθ ( hdec n t , c n t ) ) 2 ( 6 ) Global Attention To obtain the context vector cs , we use global attention mechanism ( Luong et al. , 2015 ) . The global attention considers an alignment mapping in a global manner , between encoder hidden states hencs and a decoder hidden step h dec t . at ( s ) = align ( hdect , h enc s ) ( 7 ) = exp ( score ( hdect , h enc s ) ∑T s exp ( score ( h dec t , h enc s ) ) ( 8 ) where the score is computed by weighted inner product as follows score ( hdect , h enc s ) = h dec t > Wah enc s ( 9 ) where the weight parameter Wa is obtained so as to minimize the loss function L ( θ ) . Then , the context vector ct is obtained as a weighted average of encoder hidden states as ct = S∑ s=1 at ( s ) h enc s ( 10 ) 4 PROPOSED METHOD : WAVE2WAVE . The problems of global attention model are that ( 1 ) the lengths of input and output are radically different , and that ( 2 ) both input and output sequences are high dimensionals . For example in activity translation , there are 48 motion sensors and 3 accelerometer sensors . Their frequency rates are as high as 50Hz and 30Hz respectively . 
Therefore , the number of steps S , T in both encoder and decoders are prohibitively large so that the capturing information of source sentence x1 : S is precluded in the context vector c . 4.1 WINDOW-BASED REPRESENTATION . Let us consider the case that source and target sentences are multi-dimensional continuous timeseries , signal waves , as shown in Figure 1 1 . That is , each signal at time-step x1 : S is expressed as dx-dimensional vector xs—there are dx sensors in the source side . Then a source signal wave x1 : S consists of S-step dx-dimensional signal vectors , i.e. , x1 : S = ( x1 , x2 , . . . , xS ) . 1We note that signal waves in Figure 1 are depicted as one-dimensional waves for clear visualization . To capture an important shape informaion from complex signal waves ( see Figure 1 ) , we introduce trainable window-based representation function R ( · ) as rencs′ = R ( W enc s′ ) ( 11 ) where W encs′ is a s ′-th window with fixed window-width wenc , expressed as dx × wenc-matrix as W encs′ = [ xwenc ( s′−1 ) +1 , xwenc ( s′−1 ) +2 , . . . , xwenc ( s′−1 ) +wenc ] , ( 12 ) and rencs′ is extracted representation vector inputted to the seq2seq encoder as shown in Figure 1 —the dimension of renc is the same as the one of the hidden vector h enc . Similarly , to approximate the complex target waves well , we introduce inverse representation function , R−1 ( · ) which is separately trained from R−1 ( · ) as W dect′ = R −1 ( rdect′ ) ( 13 ) where rdect′ is the t ′-th output vector from seq2seq decoder as shown in Figure 1 , and W dect′ is a window matrix which is corresponding to a partial wave of target waves y1 : T = ( y1 , . . . , yT ) . The advantage of window-based architecture are three-fold : firstly , the number of steps in both encoder and decoder could be largely reduced and make the seq2seq with context vector work stably . Secondly , the complexity and variation in the shape inside windows are also largely reduced in comparison with the entire waves . Thus , important information could be extracted from source waves and the output sequence could be accurately approximated by relatively simple representation R ( · ) and inverse-representation R−1 ( · ) functions respectively . Thirdly , both representation R ( · ) and inverse-representation R−1 ( · ) functions are trained end-to-end manner by minimizing the loss L ( θ ) where both functions are modeled by fully-connected ( FC ) networks . Figure 1 depicts the overall architecture of our wave2wave with an example of toy-data . The wave2wave consists of encoder and decoder with long-short term memory ( LSTM ) nodes in their inside , representation function R ( W encs′ ) and inverse-representation function R −1 ( W dect′ ) . In this figure , one-dimensional 10000-time-step continuous time-series are considered as an input and an output and the width of window is set to 2000— there are 5 window steps for both encoder and decoder , i.e. , wenc = wdec = 2000 and S′ = T ′ = 5 . Then , 1 × 2000 encoder-window-matrix W encs′ is converted to dr dimensional encoder-representation vector rencs′ by the representation function R ( W encs′ ) . Meanwhile , the output decoder , dr dimensional decoder-representation r dec t′ , is converted to 1× 2000 decoder-window-matrix W dect′ by the inverse representation function R−1 ( rdect′ ) .
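To make the window-based representation concrete, the following PyTorch sketch (our own illustration with the hypothetical sizes of the toy example, not the authors' implementation) slices a source wave into window matrices, applies a fully-connected representation function R, and maps decoder outputs back to windows with the inverse representation R^{-1}.

import torch
import torch.nn as nn

d_x, S, w_enc, d_r = 1, 10000, 2000, 64          # one-dimensional wave, 5 windows of width 2000
x = torch.randn(d_x, S)                          # source signal wave x_{1:S}

# Slice the wave into S' = S / w_enc window matrices W_{s'} of shape (d_x, w_enc).
windows = x.unfold(dimension=1, size=w_enc, step=w_enc).permute(1, 0, 2)   # (S', d_x, w_enc)

# Representation function R: flatten each window and map it to a d_r-dimensional vector.
R = nn.Sequential(nn.Flatten(), nn.Linear(d_x * w_enc, d_r), nn.ReLU())
r_enc = R(windows)                               # (S', d_r), fed to the seq2seq (LSTM) encoder

# Inverse representation R^{-1}: map each decoder output vector back to a window matrix.
R_inv = nn.Linear(d_r, d_x * w_enc)
r_dec = torch.randn(5, d_r)                      # placeholder for the decoder outputs r^{dec}_{t'}
W_dec = R_inv(r_dec).view(-1, d_x, w_enc)        # (T', d_x, w_dec), concatenated to form y_{1:T}

In the paper, both R and R^{-1} are modeled by fully-connected networks and trained end-to-end together with the encoder-decoder by minimizing the loss L(θ).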
In this paper, the authors propose modifications to baseline seq-to-seq systems for wave-to-wave translation. To handle possibly long inputs and outputs, as well as significant length differences, they propose to use sliding windows. For high-dimensional outputs, they use an iterative approach predicting each dimension independently. They evaluate their models on earthquake data and on activity translation (video to motion capture).
SP:4fdd94362be6718ab249cdb8da4e75b9eade64bd
Translation Between Waves, wave2wave
1 INTRODUCTION . The problem in which training data are in short supply but can be supplemented by other sensor data is called an opportunistic sensor problem ( Roggen et al. , 2013 ) . For example , in human activity logs , video data may be missing in bathrooms for ethical reasons but can be supplied by environmental sensors , which raise fewer ethical concerns . For this purpose we propose to extend the sequence-to-sequence ( seq2seq ) model ( Cho et al. , 2014 ; Sutskever et al. , 2014 ; Dzmitry Bahdanau , 2014 ; Luong et al. , 2015 ) to translate a signal wave x ( a continuous time-series signal ) into another signal wave y . The straightforward extension does not apply for two reasons : ( 1 ) the lengths of x and y are radically different , and ( 2 ) both x and y are high dimensional . First , while most conventional seq2seq models handle input and output signals whose lengths are of the same order , we need to handle output signals whose lengths are sometimes considerably different from those of the input signals . For example , the sampling rate of a ground motion sensor is 100Hz and the duration of an earthquake is about 10sec . That is , the length of the output signal wave is 10000 times longer in this case . Therefore , segmentation along the temporal axis and discarding of uninformative signal waves are required . Second , signal waves can be high dimensional ; motion capture data has 129 dimensions and accelerometer data has 18 dimensions . While most conventional seq2seq models do not consider high-dimensional settings ( it is unusual to translate multiple languages simultaneously ) , we need to translate signal waves in high dimensions into other signal waves in high dimensions simultaneously . To overcome these two problems we propose 1 ) the window-based representation function and 2 ) the wave2wave iterative back-translation model in this paper . Our contributions are the following : • We propose a sliding window-based seq2seq model , wave2wave ( Section 4.1 ) , • We propose the wave2wave iterative back-translation model ( Section 4.2 ) , which is the key to outperforming on high-dimensional data . 2 RELATED WORKS . Related works include various encoder-decoder architectures and generative adversarial networks ( GANs ) . First , the encoder-decoder architecture has several variations : ( 1 ) CNNs on both sides ( Badrinarayanan et al. , 2015 ) , ( 2 ) RNNs on both sides ( Cho et al. , 2014 ; Sutskever et al. , 2014 ; Dzmitry Bahdanau , 2014 ; Luong et al. , 2015 ) , or ( 3 ) one side is a CNN and the other is an RNN ( Xu et al. , 2015 ) . When one side is related to an autoregressive model ( van den Oord et al. , 2016 ) , further variations appear . These architectures have distinct characteristics . The advantage of CNNs is efficient feature extraction and overall execution , while the advantage of RNNs is their excellent handling of time-series or sequential data . CNNs are relatively weak at handling time-series data . For this reason , the time domain is often handled by RNNs . The encoder-decoder architecture using CNNs on both sides is used for semantic segmentation ( Badrinarayanan et al. , 2015 ) , image denoising ( Mao et al. , 2016 ) , and super-resolution ( Chen et al. , 2018 ) , which are often not related to time series . In the context of time series , GluonTS ( Alexandrov et al. , 2019 ) uses the encoder-decoder approach for the time-series prediction task , where the parameters of the encoder and decoder are shared .
Apart from the difference of tasks , our approach does not share the parameters in encoder and those in decoder . All the more our model assumes that the time-series multi-modal data are related to the multi-view of the same targeted object which results in multiple modalities . Second , among various GAN architectures , several GANs aims at handling time-series aspect . Vid2vid ( Wang et al. , 2018 ) is an extension of pix2pix ( Isola et al. , 2016 ) which aims at handling video signals . ForGAN ( Koochali et al. , 2019 ) aims at time-series prediction task . 3 SEQ2SEQ . Architecture with context vector Let x1 : S denotes a source sentence consisting of time-series S words , i.e. , x1 : S = ( x1 , x2 , . . . , xS ) . Meanwhile , y1 : T = ( y1 , . . . , yT ) denotes a target sentence corresponding to x1 : S . With the assumption of a Markov property , the conditional probability p ( y1 : T |x1 : S ) , translation from a source sentence to a target sentence , is decomposed into a time-step translation p ( y|x ) as in ( 1 ) : log p ( y1 : T |x1 : S ) = T∑ t=1 log p ( yt|y < t , ct ) ( 1 ) where y < s = ( y1 , y2 , . . . , ys−1 ) and cs is a context vector representing the information of source sentence x1 : S to generate an output word yt . To realize such time-step translation , the seq2seq architecture consists of ( a ) a RNN ( Reccurent Neural Network ) encoder and ( b ) a RNN decoder . The RNN encoder computes the current hidden state hencs given the previous hidden state h enc s−1 and the current input xs , as in ( 2 ) : hencs = RNNenc ( xs , h enc s−1 ) ( 2 ) where RNNenc denotes a multi-layered RNN unit . The RNN decoder computes a current hidden state hdect given the previous hidden state and then compute an output yt . hdect = RNNdec ( h dec t−1 ) ( 3 ) pθ ( yt|y < t , ct ) = softmax ( gθ ( h dec t , ct ) ) ( 4 ) where RNNdec denotes a conditional RNN unit , gθ ( · ) is the output function to convert h dec t and ct to the logit of yt , and θ denotes parameters in RNN units . With training data D = { yn1 : T , xn1 : S } Nn=1 , the parameters θ are optimized so as to minimize the loss function L ( θ ) of log-likelihood : L ( θ ) = − 1 N N∑ n=1 T∑ t=1 log pθ ( y n t |yn < t , ct ) ( 5 ) or squared error : L ( θ ) = 1 N N∑ n=1 T∑ t=1 ( ynt − gθ ( hdec n t , c n t ) ) 2 ( 6 ) Global Attention To obtain the context vector cs , we use global attention mechanism ( Luong et al. , 2015 ) . The global attention considers an alignment mapping in a global manner , between encoder hidden states hencs and a decoder hidden step h dec t . at ( s ) = align ( hdect , h enc s ) ( 7 ) = exp ( score ( hdect , h enc s ) ∑T s exp ( score ( h dec t , h enc s ) ) ( 8 ) where the score is computed by weighted inner product as follows score ( hdect , h enc s ) = h dec t > Wah enc s ( 9 ) where the weight parameter Wa is obtained so as to minimize the loss function L ( θ ) . Then , the context vector ct is obtained as a weighted average of encoder hidden states as ct = S∑ s=1 at ( s ) h enc s ( 10 ) 4 PROPOSED METHOD : WAVE2WAVE . The problems of global attention model are that ( 1 ) the lengths of input and output are radically different , and that ( 2 ) both input and output sequences are high dimensionals . For example in activity translation , there are 48 motion sensors and 3 accelerometer sensors . Their frequency rates are as high as 50Hz and 30Hz respectively . 
Therefore , the number of steps S , T in both encoder and decoders are prohibitively large so that the capturing information of source sentence x1 : S is precluded in the context vector c . 4.1 WINDOW-BASED REPRESENTATION . Let us consider the case that source and target sentences are multi-dimensional continuous timeseries , signal waves , as shown in Figure 1 1 . That is , each signal at time-step x1 : S is expressed as dx-dimensional vector xs—there are dx sensors in the source side . Then a source signal wave x1 : S consists of S-step dx-dimensional signal vectors , i.e. , x1 : S = ( x1 , x2 , . . . , xS ) . 1We note that signal waves in Figure 1 are depicted as one-dimensional waves for clear visualization . To capture an important shape informaion from complex signal waves ( see Figure 1 ) , we introduce trainable window-based representation function R ( · ) as rencs′ = R ( W enc s′ ) ( 11 ) where W encs′ is a s ′-th window with fixed window-width wenc , expressed as dx × wenc-matrix as W encs′ = [ xwenc ( s′−1 ) +1 , xwenc ( s′−1 ) +2 , . . . , xwenc ( s′−1 ) +wenc ] , ( 12 ) and rencs′ is extracted representation vector inputted to the seq2seq encoder as shown in Figure 1 —the dimension of renc is the same as the one of the hidden vector h enc . Similarly , to approximate the complex target waves well , we introduce inverse representation function , R−1 ( · ) which is separately trained from R−1 ( · ) as W dect′ = R −1 ( rdect′ ) ( 13 ) where rdect′ is the t ′-th output vector from seq2seq decoder as shown in Figure 1 , and W dect′ is a window matrix which is corresponding to a partial wave of target waves y1 : T = ( y1 , . . . , yT ) . The advantage of window-based architecture are three-fold : firstly , the number of steps in both encoder and decoder could be largely reduced and make the seq2seq with context vector work stably . Secondly , the complexity and variation in the shape inside windows are also largely reduced in comparison with the entire waves . Thus , important information could be extracted from source waves and the output sequence could be accurately approximated by relatively simple representation R ( · ) and inverse-representation R−1 ( · ) functions respectively . Thirdly , both representation R ( · ) and inverse-representation R−1 ( · ) functions are trained end-to-end manner by minimizing the loss L ( θ ) where both functions are modeled by fully-connected ( FC ) networks . Figure 1 depicts the overall architecture of our wave2wave with an example of toy-data . The wave2wave consists of encoder and decoder with long-short term memory ( LSTM ) nodes in their inside , representation function R ( W encs′ ) and inverse-representation function R −1 ( W dect′ ) . In this figure , one-dimensional 10000-time-step continuous time-series are considered as an input and an output and the width of window is set to 2000— there are 5 window steps for both encoder and decoder , i.e. , wenc = wdec = 2000 and S′ = T ′ = 5 . Then , 1 × 2000 encoder-window-matrix W encs′ is converted to dr dimensional encoder-representation vector rencs′ by the representation function R ( W encs′ ) . Meanwhile , the output decoder , dr dimensional decoder-representation r dec t′ , is converted to 1× 2000 decoder-window-matrix W dect′ by the inverse representation function R−1 ( rdect′ ) .
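For reference, the Luong-style global attention of Section 3 (the score in (9), the alignment weights in (7)-(8), and the context vector in (10)) can be written compactly; the sketch below is our own PyTorch illustration, not the authors' code, and the class and variable names are ours.

import torch
import torch.nn as nn

class GlobalAttention(nn.Module):
    # score(h_dec, h_enc) = h_dec^T W_a h_enc, followed by a softmax over encoder steps.
    def __init__(self, hidden_dim):
        super().__init__()
        self.W_a = nn.Linear(hidden_dim, hidden_dim, bias=False)

    def forward(self, h_dec, h_enc):
        # h_dec: (batch, hidden)      current decoder hidden state h^{dec}_t
        # h_enc: (batch, S, hidden)   all encoder hidden states h^{enc}_s
        scores = torch.bmm(self.W_a(h_enc), h_dec.unsqueeze(2)).squeeze(2)   # (batch, S)
        a = torch.softmax(scores, dim=1)                                     # alignment a_t(s)
        context = torch.bmm(a.unsqueeze(1), h_enc).squeeze(1)                # c_t = sum_s a_t(s) h^{enc}_s
        return context, a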
This work explains how to use a seq2seq encoder-decoder neural network in the case of multivariate time series. The authors name this particular application of seq2seq the wave2wave network. Given a multivariate time series covering a time interval, it is split into subintervals of equal length, such that each block is a matrix. This matrix becomes an input to a recurrent encoder. On the decoder side, a similar matrix is produced at the output. The proposed neural network is tested on two data sets: an earthquake data set and an activity translation data set.
SP:4fdd94362be6718ab249cdb8da4e75b9eade64bd
Policy Optimization with Stochastic Mirror Descent
1 INTRODUCTION . Reinforcement learning ( RL ) is one of the most wonderful fields of artificial intelligence , and it has achieved great progress recently ( Mnih et al. , 2015 ; Silver et al. , 2017 ) . To learn the optimal policy from the delayed reward decision system is the fundamental goal of RL . Policy gradient methods ( Williams , 1992 ; Sutton et al. , 2000 ) are powerful algorithms to learn the optimal policy . Despite the successes of policy gradient method , suffering from high sample complexity is still a critical challenge for RL . Many existing popular methods require more samples to be collected for each step to update the parameters ( Silver et al. , 2014 ; Lillicrap et al. , 2016 ; Schulman et al. , 2015 ; Mnih et al. , 2016 ; Haarnoja et al. , 2018 ) , which partially reduces the effectiveness of the sample . Although all the above existing methods claim it improves sample efficiency , they are all empirical results which lack a strong theory analysis of sample complexity . To improve sample efficiency , in this paper , we explore how to design an efficient and stable algorithm with stochastic mirror descent ( SMD ) . Due to its advantage of the simplicity of implementation , low memory requirement , and low computational complexity ( Nemirovsky & Yudin , 1983 ; Beck & Teboulle , 2003 ; Lei & Tang , 2018 ) , SMD is one of the most widely used methods in machine learning . However , it is not sound to apply SMD to policy optimization directly , and the challenges are two-fold : ( I ) The objective of policy-based RL is a typical non-convex function , but Ghadimi et al . ( 2016 ) show that it may cause instability and even divergence when updating the parameter of a non-convex objective function by SMD via a single batch sample . ( II ) Besides , the large variance of gradient estimator is the other bottleneck of applying SMD to policy optimization for improving sample efficiency . In fact , in reinforcement learning , the non-stationary sampling process with the environment leads to the large variance of existing methods on the estimate of policy gradient , which results in poor sample efficiency ( Papini et al. , 2018 ; Liu et al. , 2018 ) . Contributions To address the above two problems correspondingly , in this paper ( I ) We analyze the theoretical dilemma of applying SMD to policy optimization . Our analysis shows that under the common Assumption 1 , for policy-based RL , designing the algorithm via SMD directly can not guarantee the convergence . Hence , we propose the MPO algorithm with a provable convergence guarantee . Designing an efficiently computable , and unbiased gradient estimator by averaging its historical policy gradient is the key to MPO . ( II ) We propose the VRMPO : a sample efficient policy optimization algorithm via constructing a variance reduced policy gradient estimator . Specifically , we propose an efficiently computable policy gradient estimator , utilizing fresh information and yielding a more accurate estimation of the gradient w.r.t the objective , which is the key to improve sample efficiency . We prove VRMPO needs O ( −3 ) sample trajectories to achieve an -approximate firstorder stationary point ( -FOSP ) ( Nesterov , 2004 ) . To our best knowledge , our VRMPO matches the best-known sample complexity among the existing literature . Besides , we conduct extensive experiments , which further show that our algorithm outperforms state-of-the-art bandit algorithms in various settings . 2 BACKGROUND AND NOTATIONS . 
2.1 POLICY-BASED REINFORCEMENT LEARNING . We consider the Markov decision processes M = ( S , A , P , R , ρ0 , γ ) , where S is state space , A is action space ; At time t , the agent is in a state St ∈ S and takes an action At ∈ A , then it receives a feedback Rt+1 ; P ass′ = P ( s ′ |s , a ) ∈ P is the probability of the state transition from s to s′ under taking a ∈ A ; The bounded reward function R : S × A → [ −R , R ] , Ras 7→ E [ Rt+1|St = s , At = a ] ; ρ0 : S → [ 0 , 1 ] is the initial state distribution and γ ∈ ( 0 , 1 ) is discounted factor . Policy πθ ( a|s ) is a probability distribution on S × A with the parameter θ ∈ Rp . Let τ = { st , at , rt+1 } Hτt=0 be a trajectory , where s0 ∼ ρ0 ( s0 ) , at ∼ πθ ( ·|st ) , rt+1 = R ( st , at ) , st+1 ∼ P ( ·|st , at ) , and Hτ is the finite horizon of τ . The expected return J ( πθ ) is defined as : J ( θ ) def = J ( πθ ) = ∫ τ P ( τ |θ ) R ( τ ) dτ = Eτ∼πθ [ R ( τ ) ] , ( 1 ) where P ( τ |θ ) = ρ0 ( s0 ) ∏Hτ t=0 P ( st+1|st , at ) πθ ( at|st ) is the probability of generating τ , R ( τ ) =∑Hτ t=0 γ trt+1 is the accumulated discounted return . Let J ( θ ) = −J ( θ ) , the central problem of policy-based RL is to solve the problem : θ∗ = arg max θ J ( θ ) ⇐⇒ θ∗ = arg min θ J ( θ ) . ( 2 ) Computing the∇J ( θ ) analytically , we have ∇J ( θ ) = Eτ∼πθ [ Hτ∑ t=0 ∇θ log πθ ( at|st ) R ( τ ) ] . ( 3 ) For any trajectory τ , let g ( τ |θ ) = ∑Hτ t=0∇θ log πθ ( at|st ) R ( τ ) , which is an unbiased estimator of ∇J ( θ ) . Vanilla policy gradient ( VPG ) is a straightforward way to solve problem ( 2 ) : θ ← θ + αg ( τ |θ ) , where α is step size . Assumption 1 . ( Sutton et al. , 2000 ; Papini et al. , 2018 ) For each pair ( s , a ) , any θ ∈ Rp , and all components i , j , there exists positive constants G , F s.t. , |∇θi log πθ ( a|s ) | ≤ G , | ∂2 ∂θi∂θj log πθ ( a|s ) | ≤ F. ( 4 ) According to the Lemma B.2 of ( Papini et al. , 2018 ) , Assumption 1 implies ∇J ( θ ) is L-Lipschiz , i.e. , ‖∇J ( θ1 ) −∇J ( θ2 ) ‖ ≤ L‖θ1 − θ2‖ , where L = RH ( HG2 + F ) / ( 1− γ ) , ( 5 ) Besides , Assumption 1 implies the following property of the policy gradient estimator . Lemma 1 ( Properties of stochastic differential estimators ( Shen et al. , 2019 ) ) . Under Assumption 1 , for any policy πθ and τ ∼ πθ , we have ‖g ( τ |θ ) −∇J ( θ ) ‖2 ≤ G 2R2 ( 1− γ ) 4 def = σ2 . ( 6 ) 2.2 STOCHASTIC MIRROR DESCENT . Now , we review some basic concepts of SMD ; in this section , the notation follows ( Nemirovski et al. , 2009 ) . Let ’ s consider the stochastic optimization problem , min θ∈Dθ { f ( θ ) = E [ F ( θ ; ξ ) ] } , ( 7 ) where Dθ ∈ Rn is a nonempty convex compact set , ξ is a random vector whose probability distribution , µ is supported on Ξ ∈ Rd and F : Dθ × Ξ → R. We assume that the expectation E [ F ( θ ; ξ ) ] = ∫ Ξ F ( θ ; ξ ) dµ ( ξ ) is well defined and finite-valued for every θ ∈ Dθ . Definition 1 ( Proximal Operator ( Moreau , 1965 ) ) . T is a function defined on a closed convex X , and α > 0.Mψα , T ( z ) is the proximal operator of T , which is defined as : Mψα , T ( z ) = arg min x∈X { T ( x ) + 1 α Dψ ( x , z ) } , ( 8 ) where ψ ( x ) is a continuously-differentiable , ζ-strictly convex function : 〈x− y , ∇ψ ( x ) −∇ψ ( y ) 〉 ≥ ζ‖x−y‖2 , ζ > 0 , Dψ is Bregman distance : Dψ ( x , y ) = ψ ( x ) −ψ ( y ) −〈∇ψ ( y ) , x−y〉 , ∀ x , y ∈ X . 
Stochastic Mirror Descent The SMD solves ( 7 ) by generating an iterative solution as follows , θt+1 =Mψαt , ` ( θ ) ( θt ) = arg minθ∈Dθ { 〈gt , θ〉+ 1 αt Dψ ( θ , θt ) } , ( 9 ) where αt > 0 is step-size , ` ( θ ) = 〈gt , θ〉 is the first-order approximation of f ( θ ) at θt , gt = g ( θt , ξt ) is stochastic subgradient such that g ( θt ) = E [ g ( θt , ξt ) ] ∈ ∂f ( θ ) |θ=θt , { ξt } t≥0 represents a draw form distribution µ , and ∂f ( θ ) = { g|f ( θ ) − f ( ω ) ≤ gT ( θ − ω ) , ∀ω ∈ dom ( f ) } . If we choose ψ ( x ) = 1 2 ‖x‖22 , which implies Dψ ( x , y ) = 1 2 ‖x − y‖22 , since then iteration ( 9 ) is the proximal gradient ( Rockafellar , 1976 ) view of SGD . Thus , SMD is a generalization of SGD . Convergence Criteria : Bregman Gradient Bregman gradient is a generation of projected gradient ( Ghadimi et al. , 2016 ) . Recently , Zhang & He ( 2018 ) ; Davis & Grimmer ( 2019 ) develop it to measure the convergence of an algorithm for the non-convex optimization problem . Evaluating the difference between each candidate solution x and its proximity is the critical idea of Bregman gradient to measure the stationarity of x . Specifically , let X be a closed convex set on Rn , α > 0 , T ( x ) is defined on X . The Bregman gradient of T at x ∈ X is : Gψα , T ( x ) = α −1 ( x−Mψα , T ( x ) ) , ( 10 ) whereMψα , T ( · ) is defined in Eq. ( 8 ) . If ψ ( x ) = 1 2 ‖x‖22 , then x∗ is a critical point of T if and only if Gψα , T ( x∗ ) = ∇T ( x∗ ) = 0 ( Bauschke et al . ( 2011 ) ; Theorem 27.1 ) . Thus , Bregman gradient ( 10 ) is a generalization of gradient . The following Remark 1 is helpful for us to understand the significance of Bregman gradient , and it gives us some insights to understand this convergence criterion . Remark 1 . Let T be a convex function , by the Proposition 5.4.7 of Bertsekas ( 2009 ) : x∗ is a stationarity point of T if and only if 0 ∈ ∂ ( T + δX ) ( x∗ ) , ( 11 ) where δX ( · ) is the indicator function on X . Furthermore , suppose ψ ( x ) is twice continuously differentiable , let x̃ =Mψα , T ( x ) , by the definition of proximal operatorM ψ α , T ( · ) , we have 0 ∈ ∂ ( T + δX ) ( x̃ ) + ( ∇ψ ( x̃ ) −∇ψ ( x ) ) ( ? ) ≈ ∂ ( T + δX ) ( x̃ ) + αGψα , T ( x ) ∇ 2ψ ( x ) , ( 12 ) Eq. ( ? ) holds due to the first-order Taylor expansion of∇ψ ( x ) . By the criteria of ( 11 ) , if Gψα , T ( x ) ≈ 0 , Eq . ( 12 ) implies the origin point 0 is near the set ∂ ( T + δX ) ( x̃ ) , i.e. , x̃ is close to a stationary point . In practice , we choose T ( θ ) = 〈−∇J ( θt ) , θ〉 , since then discriminant criterion ( 12 ) is suitable to RL problem ( 2 ) . For the non-convex problem ( 2 ) , we are satisfied with finding an -approximate First-Order Stationary Point ( -FOSP ) ( Nesterov , 2004 ) , denoted by θ , such that ‖Gψα , T ( θ ) ( θ ) ‖ ≤ . ( 13 )
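To illustrate the SMD update (9) and the Bregman gradient (10) used as the convergence measure, here is a small numpy sketch (our own illustration, not code from the paper). With ψ(x) = ½‖x‖², the proximal step has the closed form θ − αg and the Bregman gradient reduces to the stochastic gradient itself; with the negative-entropy mirror map on the simplex it becomes the exponentiated-gradient update.

import numpy as np

def smd_step_euclidean(theta, g, alpha):
    # psi(x) = 0.5 * ||x||^2  =>  D_psi(x, y) = 0.5 * ||x - y||^2 and
    # argmin_x { <g, x> + (1/alpha) D_psi(x, theta) } = theta - alpha * g   (plain SGD).
    return theta - alpha * g

def smd_step_entropy(theta, g, alpha):
    # psi(x) = sum_i x_i log x_i on the probability simplex: exponentiated gradient.
    w = theta * np.exp(-alpha * g)
    return w / w.sum()

def bregman_gradient(theta, g, alpha, prox_step):
    # G^psi_{alpha,T}(theta) = (theta - M^psi_{alpha,T}(theta)) / alpha, as in (10).
    return (theta - prox_step(theta, g, alpha)) / alpha

theta = np.array([0.2, 0.3, 0.5])
g = np.array([0.1, -0.2, 0.05])                               # a stochastic (policy) gradient estimate
print(bregman_gradient(theta, g, 0.1, smd_step_euclidean))    # equals g in the Euclidean case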
This paper proposed a variant of policy gradient algorithm with mirror descent update, which is a natural generalization of projected policy gradient descent. The authors also proposed a variance reduced policy gradient algorithm following the variance reduction techniques in optimization. The authors further proved the convergence of the proposed algorithms and some experiments are conducted to show the effectiveness of their algorithms. The paper is not written in a very good way due to many typos and misleading notations. Moreover, there seem to be some technical issues in the proofs of both main theorems.
SP:13d8b88675da21a709b023c18bf47d6fc2c12924
Policy Optimization with Stochastic Mirror Descent
1 INTRODUCTION . Reinforcement learning ( RL ) is one of the most wonderful fields of artificial intelligence , and it has achieved great progress recently ( Mnih et al. , 2015 ; Silver et al. , 2017 ) . To learn the optimal policy from the delayed reward decision system is the fundamental goal of RL . Policy gradient methods ( Williams , 1992 ; Sutton et al. , 2000 ) are powerful algorithms to learn the optimal policy . Despite the successes of policy gradient method , suffering from high sample complexity is still a critical challenge for RL . Many existing popular methods require more samples to be collected for each step to update the parameters ( Silver et al. , 2014 ; Lillicrap et al. , 2016 ; Schulman et al. , 2015 ; Mnih et al. , 2016 ; Haarnoja et al. , 2018 ) , which partially reduces the effectiveness of the sample . Although all the above existing methods claim it improves sample efficiency , they are all empirical results which lack a strong theory analysis of sample complexity . To improve sample efficiency , in this paper , we explore how to design an efficient and stable algorithm with stochastic mirror descent ( SMD ) . Due to its advantage of the simplicity of implementation , low memory requirement , and low computational complexity ( Nemirovsky & Yudin , 1983 ; Beck & Teboulle , 2003 ; Lei & Tang , 2018 ) , SMD is one of the most widely used methods in machine learning . However , it is not sound to apply SMD to policy optimization directly , and the challenges are two-fold : ( I ) The objective of policy-based RL is a typical non-convex function , but Ghadimi et al . ( 2016 ) show that it may cause instability and even divergence when updating the parameter of a non-convex objective function by SMD via a single batch sample . ( II ) Besides , the large variance of gradient estimator is the other bottleneck of applying SMD to policy optimization for improving sample efficiency . In fact , in reinforcement learning , the non-stationary sampling process with the environment leads to the large variance of existing methods on the estimate of policy gradient , which results in poor sample efficiency ( Papini et al. , 2018 ; Liu et al. , 2018 ) . Contributions To address the above two problems correspondingly , in this paper ( I ) We analyze the theoretical dilemma of applying SMD to policy optimization . Our analysis shows that under the common Assumption 1 , for policy-based RL , designing the algorithm via SMD directly can not guarantee the convergence . Hence , we propose the MPO algorithm with a provable convergence guarantee . Designing an efficiently computable , and unbiased gradient estimator by averaging its historical policy gradient is the key to MPO . ( II ) We propose the VRMPO : a sample efficient policy optimization algorithm via constructing a variance reduced policy gradient estimator . Specifically , we propose an efficiently computable policy gradient estimator , utilizing fresh information and yielding a more accurate estimation of the gradient w.r.t the objective , which is the key to improve sample efficiency . We prove VRMPO needs O ( −3 ) sample trajectories to achieve an -approximate firstorder stationary point ( -FOSP ) ( Nesterov , 2004 ) . To our best knowledge , our VRMPO matches the best-known sample complexity among the existing literature . Besides , we conduct extensive experiments , which further show that our algorithm outperforms state-of-the-art bandit algorithms in various settings . 2 BACKGROUND AND NOTATIONS . 
2.1 POLICY-BASED REINFORCEMENT LEARNING . We consider the Markov decision processes M = ( S , A , P , R , ρ0 , γ ) , where S is state space , A is action space ; At time t , the agent is in a state St ∈ S and takes an action At ∈ A , then it receives a feedback Rt+1 ; P ass′ = P ( s ′ |s , a ) ∈ P is the probability of the state transition from s to s′ under taking a ∈ A ; The bounded reward function R : S × A → [ −R , R ] , Ras 7→ E [ Rt+1|St = s , At = a ] ; ρ0 : S → [ 0 , 1 ] is the initial state distribution and γ ∈ ( 0 , 1 ) is discounted factor . Policy πθ ( a|s ) is a probability distribution on S × A with the parameter θ ∈ Rp . Let τ = { st , at , rt+1 } Hτt=0 be a trajectory , where s0 ∼ ρ0 ( s0 ) , at ∼ πθ ( ·|st ) , rt+1 = R ( st , at ) , st+1 ∼ P ( ·|st , at ) , and Hτ is the finite horizon of τ . The expected return J ( πθ ) is defined as : J ( θ ) def = J ( πθ ) = ∫ τ P ( τ |θ ) R ( τ ) dτ = Eτ∼πθ [ R ( τ ) ] , ( 1 ) where P ( τ |θ ) = ρ0 ( s0 ) ∏Hτ t=0 P ( st+1|st , at ) πθ ( at|st ) is the probability of generating τ , R ( τ ) =∑Hτ t=0 γ trt+1 is the accumulated discounted return . Let J ( θ ) = −J ( θ ) , the central problem of policy-based RL is to solve the problem : θ∗ = arg max θ J ( θ ) ⇐⇒ θ∗ = arg min θ J ( θ ) . ( 2 ) Computing the∇J ( θ ) analytically , we have ∇J ( θ ) = Eτ∼πθ [ Hτ∑ t=0 ∇θ log πθ ( at|st ) R ( τ ) ] . ( 3 ) For any trajectory τ , let g ( τ |θ ) = ∑Hτ t=0∇θ log πθ ( at|st ) R ( τ ) , which is an unbiased estimator of ∇J ( θ ) . Vanilla policy gradient ( VPG ) is a straightforward way to solve problem ( 2 ) : θ ← θ + αg ( τ |θ ) , where α is step size . Assumption 1 . ( Sutton et al. , 2000 ; Papini et al. , 2018 ) For each pair ( s , a ) , any θ ∈ Rp , and all components i , j , there exists positive constants G , F s.t. , |∇θi log πθ ( a|s ) | ≤ G , | ∂2 ∂θi∂θj log πθ ( a|s ) | ≤ F. ( 4 ) According to the Lemma B.2 of ( Papini et al. , 2018 ) , Assumption 1 implies ∇J ( θ ) is L-Lipschiz , i.e. , ‖∇J ( θ1 ) −∇J ( θ2 ) ‖ ≤ L‖θ1 − θ2‖ , where L = RH ( HG2 + F ) / ( 1− γ ) , ( 5 ) Besides , Assumption 1 implies the following property of the policy gradient estimator . Lemma 1 ( Properties of stochastic differential estimators ( Shen et al. , 2019 ) ) . Under Assumption 1 , for any policy πθ and τ ∼ πθ , we have ‖g ( τ |θ ) −∇J ( θ ) ‖2 ≤ G 2R2 ( 1− γ ) 4 def = σ2 . ( 6 ) 2.2 STOCHASTIC MIRROR DESCENT . Now , we review some basic concepts of SMD ; in this section , the notation follows ( Nemirovski et al. , 2009 ) . Let ’ s consider the stochastic optimization problem , min θ∈Dθ { f ( θ ) = E [ F ( θ ; ξ ) ] } , ( 7 ) where Dθ ∈ Rn is a nonempty convex compact set , ξ is a random vector whose probability distribution , µ is supported on Ξ ∈ Rd and F : Dθ × Ξ → R. We assume that the expectation E [ F ( θ ; ξ ) ] = ∫ Ξ F ( θ ; ξ ) dµ ( ξ ) is well defined and finite-valued for every θ ∈ Dθ . Definition 1 ( Proximal Operator ( Moreau , 1965 ) ) . T is a function defined on a closed convex X , and α > 0.Mψα , T ( z ) is the proximal operator of T , which is defined as : Mψα , T ( z ) = arg min x∈X { T ( x ) + 1 α Dψ ( x , z ) } , ( 8 ) where ψ ( x ) is a continuously-differentiable , ζ-strictly convex function : 〈x− y , ∇ψ ( x ) −∇ψ ( y ) 〉 ≥ ζ‖x−y‖2 , ζ > 0 , Dψ is Bregman distance : Dψ ( x , y ) = ψ ( x ) −ψ ( y ) −〈∇ψ ( y ) , x−y〉 , ∀ x , y ∈ X . 
Stochastic Mirror Descent The SMD solves ( 7 ) by generating an iterative solution as follows , θt+1 =Mψαt , ` ( θ ) ( θt ) = arg minθ∈Dθ { 〈gt , θ〉+ 1 αt Dψ ( θ , θt ) } , ( 9 ) where αt > 0 is step-size , ` ( θ ) = 〈gt , θ〉 is the first-order approximation of f ( θ ) at θt , gt = g ( θt , ξt ) is stochastic subgradient such that g ( θt ) = E [ g ( θt , ξt ) ] ∈ ∂f ( θ ) |θ=θt , { ξt } t≥0 represents a draw form distribution µ , and ∂f ( θ ) = { g|f ( θ ) − f ( ω ) ≤ gT ( θ − ω ) , ∀ω ∈ dom ( f ) } . If we choose ψ ( x ) = 1 2 ‖x‖22 , which implies Dψ ( x , y ) = 1 2 ‖x − y‖22 , since then iteration ( 9 ) is the proximal gradient ( Rockafellar , 1976 ) view of SGD . Thus , SMD is a generalization of SGD . Convergence Criteria : Bregman Gradient Bregman gradient is a generation of projected gradient ( Ghadimi et al. , 2016 ) . Recently , Zhang & He ( 2018 ) ; Davis & Grimmer ( 2019 ) develop it to measure the convergence of an algorithm for the non-convex optimization problem . Evaluating the difference between each candidate solution x and its proximity is the critical idea of Bregman gradient to measure the stationarity of x . Specifically , let X be a closed convex set on Rn , α > 0 , T ( x ) is defined on X . The Bregman gradient of T at x ∈ X is : Gψα , T ( x ) = α −1 ( x−Mψα , T ( x ) ) , ( 10 ) whereMψα , T ( · ) is defined in Eq. ( 8 ) . If ψ ( x ) = 1 2 ‖x‖22 , then x∗ is a critical point of T if and only if Gψα , T ( x∗ ) = ∇T ( x∗ ) = 0 ( Bauschke et al . ( 2011 ) ; Theorem 27.1 ) . Thus , Bregman gradient ( 10 ) is a generalization of gradient . The following Remark 1 is helpful for us to understand the significance of Bregman gradient , and it gives us some insights to understand this convergence criterion . Remark 1 . Let T be a convex function , by the Proposition 5.4.7 of Bertsekas ( 2009 ) : x∗ is a stationarity point of T if and only if 0 ∈ ∂ ( T + δX ) ( x∗ ) , ( 11 ) where δX ( · ) is the indicator function on X . Furthermore , suppose ψ ( x ) is twice continuously differentiable , let x̃ =Mψα , T ( x ) , by the definition of proximal operatorM ψ α , T ( · ) , we have 0 ∈ ∂ ( T + δX ) ( x̃ ) + ( ∇ψ ( x̃ ) −∇ψ ( x ) ) ( ? ) ≈ ∂ ( T + δX ) ( x̃ ) + αGψα , T ( x ) ∇ 2ψ ( x ) , ( 12 ) Eq. ( ? ) holds due to the first-order Taylor expansion of∇ψ ( x ) . By the criteria of ( 11 ) , if Gψα , T ( x ) ≈ 0 , Eq . ( 12 ) implies the origin point 0 is near the set ∂ ( T + δX ) ( x̃ ) , i.e. , x̃ is close to a stationary point . In practice , we choose T ( θ ) = 〈−∇J ( θt ) , θ〉 , since then discriminant criterion ( 12 ) is suitable to RL problem ( 2 ) . For the non-convex problem ( 2 ) , we are satisfied with finding an -approximate First-Order Stationary Point ( -FOSP ) ( Nesterov , 2004 ) , denoted by θ , such that ‖Gψα , T ( θ ) ( θ ) ‖ ≤ . ( 13 )
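For concreteness, the vanilla policy gradient estimator g(τ|θ) = Σ_t ∇_θ log π_θ(a_t|s_t) R(τ) from Section 2.1 can be computed with automatic differentiation; the following PyTorch-style sketch is our own illustration, where the policy network (assumed to return a torch.distributions object) and the rollout that produced the trajectory are placeholder assumptions.

import torch

def vpg_gradient(policy, trajectory, gamma=0.99):
    # trajectory: list of (state, action, reward) tuples obtained by rolling out pi_theta
    rewards = [r for (_, _, r) in trajectory]
    R_tau = sum((gamma ** t) * r for t, r in enumerate(rewards))   # R(tau) = sum_t gamma^t r_{t+1}

    log_probs = []
    for (s, a, _) in trajectory:
        dist = policy(torch.as_tensor(s, dtype=torch.float32))     # assumed to return a distribution
        log_probs.append(dist.log_prob(torch.as_tensor(a)))

    # g(tau|theta) = gradient of [ sum_t log pi_theta(a_t|s_t) ] * R(tau)
    surrogate = torch.stack(log_probs).sum() * R_tau
    return torch.autograd.grad(surrogate, list(policy.parameters()))

Vanilla policy gradient then ascends along this estimate, θ ← θ + α g(τ|θ), which is the baseline that MPO and VRMPO build upon.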
This paper proposes MPO, a policy optimization method with convergence guarantees based on stochastic mirror descent that uses the average of previous gradients to update the policy parameters. A lower-variance method, VRMPO, is then proposed that matches the best known convergence rate in the in the literature. Experiments show that (1) MPO converges faster than basic policy optimization methods on a small task, and (2) VRMPO achieves a performance comparable to, and often better than, popular policy optimization methods (TD3, DDPG, PPO, and TRPO) on MuJoCo.
SP:13d8b88675da21a709b023c18bf47d6fc2c12924
Variance Reduction With Sparse Gradients
1 INTRODUCTION . Optimization tools for machine learning applications seek to minimize the finite sum objective min x∈Rd f ( x ) , 1 n n∑ i=1 fi ( x ) , ( 1 ) where x is a vector of parameters , and fi : Rd → R is the loss associated with sample i. Batch SGD serves as the prototype for modern stochastic gradient methods . It updates the iterate x with x− η∇fI ( x ) , where η is the learning rate and∇fI ( x ) is the batch stochastic gradient , i.e . ∇fI ( x ) = 1 |I| ∑ i∈I ∇fi ( x ) . The batch size |I| in batch SGD directly impacts the stochastic variance and gradient query complexity of each iteration of the update rule . In recent years , variance reduction techniques have been proposed by carefully blending large and small batch gradients ( e.g . Roux et al. , 2012 ; Johnson & Zhang , 2013 ; Defazio et al. , 2014 ; Xiao & Zhang , 2014 ; Allen-Zhu & Yuan , 2016 ; Allen-Zhu & Hazan , 2016 ; Reddi et al. , 2016a ; b ; Allen-Zhu , 2017 ; Lei & Jordan , 2017 ; Lei et al. , 2017 ; Allen-Zhu , 2018b ; Fang et al. , 2018 ; Zhou et al. , 2018 ; Wang et al. , 2018 ; Pham et al. , 2019 ; Nguyen et al. , 2019 ; Lei & Jordan , 2019 ) . They are alternatives to batch SGD and are provably better than SGD in various settings . While these methods allow for greater learning rates than batch SGD and have appealing theoretical guarantees , they require a per-iteration query complexity which is more than double than that of batch SGD . Defazio ( 2019 ) questions the utility of variance reduction techniques in modern machine learning problems , empirically identifying query complexity as one issue . In this paper , we show that gradient sparsity ( Aji & Heafield , 2017 ) can be used to significantly reduce the query complexity of variance reduction methods . Our work is motivated by the observation that gradients tend to be ” sparse , ” having only a small fraction of large coordinates . Specifically , if the indices of large gradient coordinates ( measured in absolute value ) are known before updating model parameters , we compute the derivative of only those coordinates while setting the remaining gradient coordinates to zero . In principle , if sparsity is exhibited , using large gradient coordinates will not effect performance and will significantly reduce the number of operations required to update model parameters . Nevertheless , this heuristic alone has three issues : ( 1 ) bias is introduced by setting other entries to zero ; ( 2 ) the locations of large coordinates are typically unknown ; ( 3 ) accessing a subset of coordinates may not be easily implemented for some problems like deep neural networks . We provide solutions for all three issues . First , we introduce a new sparse gradient operator : The random-top-k operator . The random-top-k operator is a composition of the randomized coordinate descent operator and the top-k operator . In prior work , the top-k operator has been used to reduce the communication complexity of distributed optimization ( Stich et al. , 2018 ; Aji & Heafield , 2017 ) applications . The random-top-k operator has two phases : Given a stochastic gradient and a pair of integers ( k1 , k2 ) that sum to k , the operator retains k1 coordinates which are most ” promising ” in terms of their ” likelihood ” to be large on average , then randomly selects k2 of the remaining coordinates with appropriate rescaling . The first phase captures sparsity patterns while the second phase eliminates bias . 
Second, we make use of large batch gradients in variance reduction methods to estimate sparsity patterns. Inspired by the use of a memory vector in Aji & Heafield (2017), the algorithm maintains a memory vector that is initialized with the absolute value of the large batch gradient at the beginning of each outer loop and updated by taking an exponential moving average over subsequent stochastic gradients. Coordinates with large values in the memory vector are more "promising," and the random-top-k operator picks the top k1 coordinate indices based on the memory vector. Since larger batch gradients have lower variance, the initial estimate is quite accurate. Finally, for software that supports dynamic computation graphs, we provide a cost-effective way (sparse back-propagation) to implement the random-top-k operator. In this work we apply the random-top-k operator to SpiderBoost (Wang et al., 2018), a recent variance reduction method that achieves optimal query complexity, with a slight modification based on the "geometrization" technique introduced by Lei & Jordan (2019). Theoretically, we show that our algorithm is never worse than SpiderBoost and can strictly outperform it when the random-top-k operator captures gradient sparsity. Empirically, we demonstrate the improvements in computation for various tasks including image classification, natural language processing, and sparse matrix factorization. The rest of the paper is organized as follows. In Section 2, we define the random-top-k operator and our optimization algorithm, and describe sparse back-propagation. The theoretical analyses are presented in Section 3, followed by experimental results in Section 4. All technical proofs are relegated to Appendix A, and additional experimental details can be found in Appendix B.

2 STOCHASTIC VARIANCE REDUCTION WITH SPARSE GRADIENTS

Generally, variance reduction methods reduce the variance of stochastic gradients by taking a snapshot $\nabla f(y)$ of the gradient $\nabla f(x)$ every m steps of optimization, and use the gradient information in this snapshot to reduce the variance of subsequent smaller batch gradients $\nabla f_I(x)$ (Johnson & Zhang, 2013; Wang et al., 2018). Methods such as SCSG (Lei & Jordan, 2017) instead utilize a large batch gradient whose size is typically some multiple of the small batch size b; this is much more practical and is what we do in this paper. To reduce the cost of computing additional gradients, we exploit sparsity by computing only k of the d coordinates of the gradient, where $y \in \mathbb{R}^d$. For $d, k, k_1, k_2 \in \mathbb{Z}^+$, let $k = k_1 + k_2$, where $1 \le k \le d$ for a parametric model of dimension d. In what follows, we define an operator which takes vectors x, y and outputs y′, where y′ retains only k of the entries in y: $k_1$ of them are selected according to the coordinates of x with the $k_1$ largest absolute values, and the remaining $k_2$ entries are randomly selected from y. The $k_1$ coordinate indices and the $k_2$ coordinate indices are disjoint. Formally, the operator $\mathrm{rtop}_{k_1,k_2} : \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}^d$ is defined for $x, y \in \mathbb{R}^d$ as
$$(\mathrm{rtop}_{k_1,k_2}(x, y))_\ell = \begin{cases} y_\ell & \text{if } k_1 > 0 \text{ and } |x|_\ell \ge |x|_{(k_1)}, \\ \frac{d - k_1}{k_2}\, y_\ell & \text{if } \ell \in S, \\ 0 & \text{otherwise,} \end{cases}$$
where $|x|$ denotes the vector of absolute values, $|x|_{(1)} \ge |x|_{(2)} \ge \dots \ge |x|_{(d)}$ denote the order statistics of the coordinates of x in absolute value, and S denotes a random subset of size $k_2$ drawn uniformly from the set $\{\ell : |x|_\ell < |x|_{(k_1)}\}$.
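To make the operator concrete, below is a minimal NumPy sketch of $\mathrm{rtop}_{k_1,k_2}$ (an illustration, not the authors' implementation). It builds the full masked vector explicitly, whereas in practice only the selected coordinates would be evaluated; the function name and the `rng` argument are our own choices.

```python
import numpy as np

def rtop(x, y, k1, k2, rng=None):
    """Random-top-k operator rtop_{k1,k2}(x, y): keep the k1 coordinates of y
    whose indices carry the largest |x|, plus k2 uniformly sampled remaining
    coordinates rescaled by (d - k1) / k2 so the result is unbiased for y."""
    rng = rng if rng is not None else np.random.default_rng()
    d = y.shape[0]
    out = np.zeros(d, dtype=float)

    # Phase 1: indices of the k1 largest entries of |x| (ties broken arbitrarily).
    top = np.argsort(-np.abs(x))[:k1]
    out[top] = y[top]

    # Phase 2: k2 random indices among the remaining d - k1 coordinates,
    # rescaled to remove the bias introduced by subsampling.
    rest = np.setdiff1d(np.arange(d), top)
    if k2 > 0:
        sampled = rng.choice(rest, size=k2, replace=False)
        out[sampled] = (d - k1) / k2 * y[sampled]
    return out
```

With x = (11, 12, 13, −14, −15), y = (−25, −24, 13, 12, 11) and k1 = k2 = 1, the fifth coordinate is always kept and one of the remaining four is kept after scaling by (d − k1)/k2 = 4, which reproduces the worked example in the next paragraph.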
For instance, if $x = (11, 12, 13, -14, -15)$, $y = (-25, -24, 13, 12, 11)$ and $k_1 = k_2 = 1$, then S is a singleton drawn uniformly from $\{1, 2, 3, 4\}$. Suppose $S = \{2\}$; then $\mathrm{rtop}_{1,1}(x, y) = (0, 4y_2, 0, 0, y_5) = (0, -96, 0, 0, 11)$. If $k_1 + k_2 = d$, then $\mathrm{rtop}_{k_1,k_2}(x, y) = y$. On the other hand, if $k_1 = 0$, then $\mathrm{rtop}_{0,k_2}(x, y)$ does not depend on x and returns a rescaled random subset of y; this is the operator used in coordinate descent methods. Finally, $\mathrm{rtop}_{k_1,k_2}(x, y)$ is linear in y. The following lemma shows that $\mathrm{rtop}_{k_1,k_2}(x, y)$ is an unbiased estimator of y, which is a crucial property in our later analysis.

Lemma 1. Given any $x, y \in \mathbb{R}^d$,
$$\mathbb{E}\big(\mathrm{rtop}_{k_1,k_2}(x, y)\big) = y, \qquad \mathrm{Var}\big(\mathrm{rtop}_{k_1,k_2}(x, y)\big) = \frac{d - k_1 - k_2}{k_2}\, \big\|\mathrm{top}_{-k_1}(x, y)\big\|^2,$$
where the expectation is taken over the random subset S involved in the $\mathrm{rtop}_{k_1,k_2}$ operator and
$$(\mathrm{top}_{-k_1}(x, y))_\ell = \begin{cases} y_\ell & \text{if } k_1 > 0 \text{ and } |x|_\ell < |x|_{(k_1)}, \\ 0 & \text{otherwise.} \end{cases}$$

Our algorithm is detailed below.

Algorithm 1: SpiderBoost with Sparse Gradients
Input: learning rate η, inner loop size m, outer loop size T, large batch size B, small batch size b, initial iterate x0, memory decay factor α, sparsity parameters k1, k2.
1: Sample $I_0 \sim \mathrm{Unif}(\{1, \dots, n\})$ with $|I_0| = B$
2: $M_0 := |\nabla f_{I_0}(x_0)|$
3: for j = 1, ..., T do
4:   $x^{(j)}_0 := x_{j-1}$, $M^{(j)}_0 := M_{j-1}$
5:   Sample $I_j \sim \mathrm{Unif}(\{1, \dots, n\})$ with $|I_j| = B$
6:   $\nu^{(j)}_0 := \nabla f_{I_j}(x^{(j)}_0)$
7:   $N_j := m$ (for implementation) or $N_j \sim$ geometric distribution with mean m (for theory)
8:   for t = 0, ..., $N_j - 1$ do
9:     $x^{(j)}_{t+1} := x^{(j)}_t - \eta\, \nu^{(j)}_t$
10:    Sample $I^{(j)}_t \sim \mathrm{Unif}([n])$ with $|I^{(j)}_t| = b$
11:    $\nu^{(j)}_{t+1} := \nu^{(j)}_t + \mathrm{rtop}_{k_1,k_2}\big(M^{(j)}_t,\ \nabla f_{I^{(j)}_t}(x^{(j)}_{t+1}) - \nabla f_{I^{(j)}_t}(x^{(j)}_t)\big)$
12:    $M^{(j)}_{t+1} := \alpha\, |\nu^{(j)}_{t+1}| + (1 - \alpha)\, M^{(j)}_t$
13:  $x_j := x^{(j)}_{N_j}$, $M_j := M^{(j)}_{N_j}$
Output: $x_{\mathrm{out}} = x_T$ (for implementation) or $x_{\mathrm{out}} = x_{T'}$ where $T' \sim \mathrm{Unif}([T])$ (for theory)

The algorithm includes an outer loop and an inner loop. In the theoretical analysis, we generate $N_j$ as geometric random variables. This trick is called "geometrization"; it was proposed by Lei & Jordan (2017), so dubbed by Lei & Jordan (2019), and it greatly simplifies the analysis (e.g., Lei et al., 2017; Allen-Zhu, 2018a). In practice, as observed by Lei et al. (2017), setting $N_j$ to m does not impact performance in any significant way. We only use "geometrization" in our theoretical analysis, for clarity. Similarly, for our theoretical analysis, the output of our algorithm is selected uniformly at random from the set of outer loop iterates. Like the use of average iterates in convex optimization, this is a common technique for nonconvex optimization, proposed by Nemirovski et al. (2009). In practice, we simply use the last iterate. Similar to Aji & Heafield (2017), we maintain a memory vector $M^{(j)}_t$ at each iteration of our algorithm. The memory vector is initialized with the large batch gradient computed before every pass through the inner loop, which provides a relatively accurate gradient sparsity estimate at $x^{(j)}_0$. The exponential moving average gradually incorporates information from subsequent small batch gradients to account for changes in gradient sparsity. We then use $M^{(j)}_t$ as an approximation to the variance of each gradient coordinate in our $\mathrm{rtop}_{k_1,k_2}$ operator.
With $M^{(j)}_t$ as input, the $\mathrm{rtop}_{k_1,k_2}$ operator targets $k_1$ high-variance gradient coordinates in addition to $k_2$ randomly selected coordinates. The cost of invoking $\mathrm{rtop}_{k_1,k_2}$ is dominated by the algorithm for selecting the top k coordinates, which has linear worst-case complexity when using the introselect algorithm (Musser, 1997).
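As a concrete illustration of Algorithm 1's inner loop, here is a short Python sketch under simplifying assumptions: `grad(batch, x)` and `sample_batch(size)` are caller-supplied placeholders for the mini-batch gradient oracle and batch sampler, and `rtop` is the NumPy sketch given earlier. Unlike the paper's sparse back-propagation, the full gradient difference is computed and then masked, so the sketch illustrates the update rule rather than the query savings.

```python
import numpy as np

def sparse_spiderboost_epoch(x, M, grad, sample_batch, eta, m, b, B, alpha, k1, k2):
    """One outer-loop pass (index j) of Algorithm 1, in the practical variant N_j = m.

    grad(batch, x) returns the mini-batch gradient of f at x and sample_batch(size)
    draws a batch of sample indices; both are assumed to be supplied by the caller.
    """
    nu = grad(sample_batch(B), x)                    # large-batch gradient, line 6
    for _ in range(m):                               # inner loop, lines 8-12
        x_next = x - eta * nu                        # parameter update, line 9
        batch = sample_batch(b)                      # small batch, line 10
        delta = grad(batch, x_next) - grad(batch, x)
        nu = nu + rtop(M, delta, k1, k2)             # sparse correction, line 11
        M = alpha * np.abs(nu) + (1.0 - alpha) * M   # memory vector EMA, line 12
        x = x_next
    return x, M                                      # line 13
```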
This paper aims to reduce the computational cost of variance reduction methods while preserving their benefit of provably fast convergence. Existing variance-reduction-based methods suffer from a higher per-iteration gradient query complexity than vanilla mini-batch SGD, which limits their utility in many practical settings. The paper observes that, for many models, as training progresses the gradient vectors start exhibiting structure, in the sense that only a small number of coordinates have large magnitude. Based on this observation, the paper proposes a modified variance reduction method (obtained by modifying SpiderBoost), where a 'memory vector' keeps track of the coordinates of the gradient vectors with large variance. Let $d$ be the dimension of the model parameters. During each iteration, one computes the gradient only for the $k_1$ coordinates with the highest variance (according to the memory vector) and an additional $k_2$ random coordinates.
SP:24fb2650085abd5599f3dcd187a62a514608423a
The authors provide a method that combines properties of the SCSG method and SpiderBoost. Theoretical results are provided and achieve state-of-the-art complexity, matching that of SpiderBoost. Numerical experiments show some advantage over SpiderBoost on deep neural network architectures for the standard datasets MNIST, SVHN, and CIFAR-10.
SP:24fb2650085abd5599f3dcd187a62a514608423a
Recurrent Event Network : Global Structure Inference Over Temporal Knowledge Graph
1 INTRODUCTION

Representation learning on dynamically-evolving, graph-structured data has emerged as an important problem in a wide range of applications, including social network analysis (Zhou et al., 2018a; Trivedi et al., 2019), knowledge graph reasoning (Trivedi et al., 2017; Nguyen et al., 2018; Kazemi et al., 2019), event forecasting (Du et al., 2016), and recommender systems (Kumar et al., 2019; You et al., 2019). Previous methods over dynamic graphs mainly focus on learning time-sensitive structure representations for node classification and link prediction in single-relational graphs. However, the rapid growth of heterogeneous event data (Mahdisoltani et al., 2014; Boschee et al., 2015) has created new challenges in modeling temporal, complex interactions between entities (i.e., viewed as a temporal knowledge graph, or TKG), and calls for approaches that can predict new events at different future time stamps based on the history, i.e., structure inference of a TKG over time. Recent attempts at learning over temporal knowledge graphs have focused on either predicting missing events (facts) for the observed time stamps (García-Durán et al., 2018; Dasgupta et al., 2018; Leblay & Chekol, 2018), or estimating the conditional probability of observing a future event using a temporal point process (Trivedi et al., 2017; 2019). However, the former group of methods adopts an interpolation problem formulation over TKGs and thus cannot predict future events, as representations of unseen time stamps are unavailable. The latter group of methods, including Know-Evolve and its extension DyRep, computes the probability of future events using ground truths of the preceding events at inference time, and cannot model concurrent events occurring within the same time window, which often happens when event time stamps are discrete. It is thus desirable to have a principled method that can infer graph structure sequentially over time and can incorporate local structural information (e.g., concurrent events) during temporal modeling. To this end, we propose a sequential structure inference architecture, called Recurrent Event Network (RE-NET), for modeling heterogeneous event data in the form of temporal knowledge graphs. Key ideas of RE-NET are based on the following observations: (1) predicting future events can be viewed as a sequential (multi-step) inference of multi-relational interactions between entities over time; (2) temporally adjacent events may carry related semantics and informative patterns, which can further help inform future events (i.e., temporal information); and (3) multiple events may co-occur within the same time window and exhibit structural dependencies as they share entities (i.e., local structural information). To incorporate these ideas, RE-NET defines the joint probability distribution of all the events in a TKG in an autoregressive fashion, where it models the probability distribution of the concurrent events at the current time step conditioned on all the preceding events (see Fig. 1b for an illustration). Specifically, a recurrent event encoder, parametrized by RNNs, is used to summarize information of the past event sequences, and a neighborhood aggregator is employed to aggregate the information of concurrent events for the related entity within each time stamp.

1 Code and data have been uploaded and will be published upon acceptance of the paper.
With the summarized information of the past event sequences, our decoder defines the joint probability of a current event. Such an autoregressive model can be effectively trained by using teacher forcing. Global structure inference for predicting future events can be achieved by performing sampling in a sequential manner. We evaluate our proposed method on the temporal link prediction task, by testing the performance of multi-step inference over time on five public temporal knowledge graph datasets. Experimental results demonstrate that RE-NET outperforms state-of-the-art models of both static and temporal graph reasoning, showing its better capacity to model temporal, multi-relational graph data with concurrent events. We further show that RE-NET can perform effective multi-step inference to predict unseen entity relationships in the distant future.

2 RELATED WORK

Our work is related to previous studies on temporal knowledge graph reasoning, temporal modeling on homogeneous graphs, recurrent graph neural networks, and deep autoregressive models.

Temporal KG Reasoning. There are some recent attempts at incorporating temporal information in modeling dynamic knowledge graphs. Trivedi et al. (2017) presented Know-Evolve, which models the occurrence of a fact as a temporal point process. However, this method is built on a problematic formulation when dealing with concurrent events, as shown in Section F. Several embedding-based methods have been proposed (García-Durán et al., 2018; Leblay & Chekol, 2018; Dasgupta et al., 2018) to model time information. They embed the associated time information into a low-dimensional space, for example as relation embeddings obtained by applying an RNN to the textual form of the time stamp (García-Durán et al., 2018), as time embeddings (Leblay & Chekol, 2018), or as temporal hyperplanes (Dasgupta et al., 2018). However, these models do not capture temporal dependency and cannot generalize to unobserved time stamps.

Temporal Modeling on Homogeneous Graphs. There are attempts at predicting future links on homogeneous graphs (Pareja et al., 2019; Goyal et al., 2018; 2019; Zhou et al., 2018b; Singer et al., 2019). Some of these methods try to incorporate and learn graphical structures to predict future links (Pareja et al., 2019; Zhou et al., 2018b; Singer et al., 2019), while other methods predict by reconstructing an adjacency matrix using an autoencoder (Goyal et al., 2018; 2019). These methods seek to predict on single-relational graphs, and are designed to predict future edges in one future step (i.e., for t + 1). However, our work focuses on "multi-relational" knowledge graphs and aims for multi-step prediction (i.e., for t + 1, ..., t + k).

Recurrent Graph Neural Models. There have been some studies on recurrent graph neural models for sequential or temporal graph-structured data (Sanchez-Gonzalez et al., 2018; Battaglia et al., 2018; Palm et al., 2018; Seo et al., 2017; Pareja et al., 2019). These methods adopt a message-passing framework for aggregating nodes' neighborhood information (e.g., via graph convolutional operations). GN (Sanchez-Gonzalez et al., 2018; Battaglia et al., 2018) and RRN (Palm et al., 2018) update node representations by a message-passing scheme between time stamps. Some prior methods adopt an RNN to memorize and update the states of node embeddings that are dynamically evolving (Seo et al., 2017), or to memorize and update the model parameters for different time stamps (Pareja et al., 2019).
In contrast, our proposed method, RE-NET, aims to leverage autoregressive modeling to parameterize the joint probability distribution of events with RNNs.

Deep Autoregressive Models. Deep autoregressive models define joint probability distributions as a product of conditionals. DeepGMG (Li et al., 2018) and GraphRNN (You et al., 2018) are deep generative models of graphs and focus on generating homogeneous graphs in which there is only a single type of edge. In contrast to these studies, our work focuses on generating heterogeneous graphs, in which multiple types of edges exist, and thus our problem is more challenging. To the best of our knowledge, this is the first paper to formulate the structure inference (prediction) problem for temporal, multi-relational (knowledge) graphs in an autoregressive fashion.

3 PROPOSED METHOD: RE-NET

We consider a temporal knowledge graph (TKG) as a multi-relational, directed graph with time-stamped edges (relationships) between nodes (entities). An event is defined as a time-stamped edge, i.e., (subject entity, relation, object entity, time), and is denoted by a quadruple $(s, r, o, t)$ or $(s_t, r_t, o_t)$. We denote the set of events at time t as $G_t$. A TKG is built upon a sequence of event quadruples ordered in ascending order of their time stamps, i.e., $\{G_t\}_t = \{(s_i, r_i, o_i, t_i)\}_i$ (with $t_i < t_j$ for all $i < j$), where each time-stamped edge has a direction pointing from the subject entity to the object entity.2 The goal of learning generative models of events is to learn a distribution $P(G)$ over temporal knowledge graphs, based on a set of observed event sets $\{G_1, \dots, G_T\}$. To model lasting events which span a time range, i.e., $(s, r, o, [t_1, t_2])$, we simply partition such an event into a sequence of time-stamped events $\{G_{t_1}, \dots, G_{t_2}\}$. We leave more sophisticated modeling of lasting events as future work.

3.1 RECURRENT EVENT NETWORK

Sequential Structure Inference in a TKG. The key idea in RE-NET is to define the joint distribution of all the events $G = \{G_1, \dots, G_T\}$ in an autoregressive manner, i.e., $P(G) = \prod_{t=1}^{T} P(G_t \mid G_{t-m:t-1})$. Basically, we decompose the joint distribution into a sequence of conditional distributions (e.g., $P(G_t \mid G_{t-m:t-1})$), where we assume that the probability of the events at a time step, e.g., $G_t$, depends only on the events at the previous m steps, e.g., $G_{t-m:t-1}$. For each conditional distribution $P(G_t \mid G_{t-m:t-1})$, we further assume that the events in $G_t$ are mutually independent given the previous events $G_{t-m:t-1}$. In this way, the joint distribution can be rewritten as follows:
$$P(G) = \prod_t \prod_{(s_t, r_t, o_t) \in G_t} P(s_t, r_t, o_t \mid G_{t-m:t-1}) = \prod_t \prod_{(s_t, r_t, o_t) \in G_t} P(o_t \mid s_t, r_t, G_{t-m:t-1}) \cdot P(r_t \mid s_t, G_{t-m:t-1}) \cdot P(s_t \mid G_{t-m:t-1}). \quad (1)$$
Intuitively, the generation process of each triplet $(s_t, r_t, o_t)$ is defined as follows. Given all the past events $G_{t-m:t-1}$, we first generate a subject entity $s_t$ through the distribution $P(s_t \mid G_{t-m:t-1})$. Then we further generate a relation $r_t$ with $P(r_t \mid s_t, G_{t-m:t-1})$, and finally the object entity $o_t$ is generated according to $P(o_t \mid s_t, r_t, G_{t-m:t-1})$.
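As a minimal illustration of this factorization (not the paper's code), the sketch below accumulates the log-likelihood of an observed event sequence. It assumes a model-provided function `event_log_prob(s, r, o, history)` returning $\log P(s, r, o \mid G_{t-m:t-1})$, e.g., computed by the recurrent encoder introduced next.

```python
def sequence_log_prob(event_sets, event_log_prob, m):
    """Log-likelihood of a TKG under the autoregressive factorization in Eq. (1).

    event_sets is a list G_1, ..., G_T, where each G_t is a list of (s, r, o)
    triples observed at time t; event_log_prob is assumed to be provided by the
    model and to return log P(s, r, o | G_{t-m:t-1})."""
    total = 0.0
    for t, events in enumerate(event_sets):
        history = event_sets[max(0, t - m):t]   # the previous m event sets
        # Events at time t are conditionally independent given the history.
        for (s, r, o) in events:
            total += event_log_prob(s, r, o, history)
    return total
```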
In this work, we assume that $P(o_t \mid s_t, r_t, G_{t-m:t-1})$ and $P(r_t \mid s_t, G_{t-m:t-1})$ depend only on events that are related to s, and focus on modeling the following joint probability:
$$P(s_t, r_t, o_t \mid G_{t-m:t-1}) = P(o_t \mid s, r, N^{(s)}_{t-m:t-1}) \cdot P(r_t \mid s, N^{(s)}_{t-m:t-1}) \cdot P(s_t \mid G_{t-m:t-1}), \quad (2)$$
where $G_t$ is replaced by $N^{(s)}_t$, the set of neighboring entities that interact with the subject entity s under all relations at time stamp t. For the third probability, the full event sets have to be considered, since the subject is not given. Next, we introduce how we parameterize these distributions.

2 The same triple (s, r, o) may occur multiple times at different time stamps, yielding different event quadruples.

Recurrent Event Encoder. RE-NET parameterizes $P(o_t \mid s, r, G_{t-m:t-1})$ in the following way:
$$P(o_t \mid s, r, N^{(s)}_{t-m:t-1}) \propto \exp\big([\mathbf{e}_s : \mathbf{e}_r : \mathbf{h}_{t-1}(s, r)]^\top \cdot \mathbf{w}_{o_t}\big), \quad (3)$$
where $\mathbf{e}_s, \mathbf{e}_r \in \mathbb{R}^d$ are learnable embedding vectors specified for subject entity s and relation r, and $\mathbf{h}_{t-1}(s, r) \in \mathbb{R}^d$ is a history vector which encodes the information from the neighbor sets that interacted with s in the past, as well as the global information from the graph structures $G_{t-m:t-1}$. Basically, $[\mathbf{e}_s : \mathbf{e}_r : \mathbf{h}_{t-1}(s, r)]$ is an encoding that summarizes all the past information. Based on it, we further compute the probability of different object entities $o_t$ by passing the encoding into a linear softmax classifier parameterized by $\{\mathbf{w}_{o_t}\}$. Similarly, we define the probabilities for relations and subjects as follows:
$$P(r_t \mid s, N^{(s)}_{t-m:t-1}) \propto \exp\big([\mathbf{e}_s : \mathbf{h}_{t-1}(s)]^\top \cdot \mathbf{w}_{r_t}\big), \quad (4)$$
$$P(s_t \mid G_{t-m:t-1}) \propto \exp\big(\mathbf{H}_{t-1}^\top \cdot \mathbf{w}_{s_t}\big), \quad (5)$$
where $\mathbf{h}_{t-1}(s)$ captures all the local information about s in the past, and $\mathbf{H}_{t-1} \in \mathbb{R}^d$ is a vector representation that encodes the global graph structures $G_{t-m:t-1}$. For each time step t, the hidden vectors $\mathbf{h}_{t-1}(s)$, $\mathbf{h}_{t-1}(s, r)$ and $\mathbf{H}_{t-1}$ preserve the information from the past events, and we update them in the following recurrent way:
$$\mathbf{h}_t(s, r) = \mathrm{RNN}_1\big(g(N^{(s)}_t), \mathbf{H}_t, \mathbf{h}_{t-1}(s, r)\big), \quad (6)$$
$$\mathbf{h}_t(s) = \mathrm{RNN}_2\big(g(N^{(s)}_t), \mathbf{H}_t, \mathbf{h}_{t-1}(s)\big), \quad (7)$$
$$\mathbf{H}_t = \mathrm{RNN}_3\big(g(G_t), \mathbf{H}_{t-1}\big), \quad (8)$$
where g is an aggregation function and $N^{(s)}_t$ stands for all the events related to s at the current time step t. Intuitively, we obtain the current information related to s by aggregating all the related events at time t, i.e., $g(N^{(s)}_t)$. Then we update the hidden vector $\mathbf{h}_t(s, r)$ by using the aggregated information $g(N^{(s)}_t)$ at the current step, the past value $\mathbf{h}_{t-1}(s, r)$, and the global hidden vector $\mathbf{H}_t$. The hidden vector $\mathbf{h}_t(s)$ is updated in a similar way. For the aggregation of all events, $g(G_t)$, we define $g(G_t) = \max(\{g(N^{(s)}_t)\}_s)$, i.e., the element-wise max-pooling operation over all $g(N^{(s)}_t)$. We use Gated Recurrent Units (Cho et al., 2014) as the RNNs; details are described in Section A. Each subject entity s can interact with multiple relations and object entities at each time step t. In other words, the set $N^{(s)}_t$ can contain multiple events. Designing effective aggregation functions g to aggregate information from $N^{(s)}_t$ for s is therefore a nontrivial problem. Next, we introduce how we design $g(\cdot)$ in RE-NET.
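A PyTorch-style sketch of one way Eqs. (3)-(8) could be wired together is given below. This is an assumption-laden illustration rather than the paper's implementation: the three-argument recurrences (6)-(7) are realized as GRU cells whose input concatenates the aggregated neighborhood vector with the global vector, all tensors carry a leading batch dimension, and the aggregated vectors $g(N^{(s)}_t)$ and $g(G_t)$ are assumed to be produced by the neighborhood aggregator $g(\cdot)$ described next (in the paper, $g(G_t)$ is the element-wise max over the per-subject aggregates).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RecurrentEventEncoder(nn.Module):
    """Sketch of the recurrent encoder behind Eqs. (3)-(8)."""

    def __init__(self, num_entities, num_relations, d):
        super().__init__()
        self.ent = nn.Embedding(num_entities, d)    # e_s
        self.rel = nn.Embedding(num_relations, d)   # e_r
        self.rnn_sr = nn.GRUCell(2 * d, d)          # h_t(s, r), Eq. (6)
        self.rnn_s = nn.GRUCell(2 * d, d)           # h_t(s),    Eq. (7)
        self.rnn_g = nn.GRUCell(d, d)               # H_t,       Eq. (8)
        self.w_o = nn.Linear(3 * d, num_entities)   # object classifier,   Eq. (3)
        self.w_r = nn.Linear(2 * d, num_relations)  # relation classifier, Eq. (4)
        self.w_s = nn.Linear(d, num_entities)       # subject classifier,  Eq. (5)

    def step(self, g_local, g_global, h_sr, h_s, H):
        """One recurrent update at time t; g_local = g(N_t^(s)), g_global = g(G_t)."""
        H = self.rnn_g(g_global, H)                 # Eq. (8)
        inp = torch.cat([g_local, H], dim=-1)
        h_sr = self.rnn_sr(inp, h_sr)               # Eq. (6)
        h_s = self.rnn_s(inp, h_s)                  # Eq. (7)
        return h_sr, h_s, H

    def log_probs(self, s, r, h_sr, h_s, H):
        """Log-probabilities over objects, relations and subjects, Eqs. (3)-(5)."""
        e_s, e_r = self.ent(s), self.rel(r)
        logp_o = F.log_softmax(self.w_o(torch.cat([e_s, e_r, h_sr], dim=-1)), dim=-1)
        logp_r = F.log_softmax(self.w_r(torch.cat([e_s, h_s], dim=-1)), dim=-1)
        logp_s = F.log_softmax(self.w_s(H), dim=-1)
        return logp_o, logp_r, logp_s
```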
This paper applies several techniques from RNNs and graph neural networks to model dynamically-evolving, multi-relational graph data. There are two key components: an RNN that encodes temporal information from the past event sequences, and a neighborhood aggregator that collects information from neighboring nodes. The contribution on the RNN side is the design of the loss and the parameterization of the graph's event tuples. The contribution of the second part is the adaptation of a multi-relational aggregator to this network. The paper is well written. Although I am not familiar with the datasets, the analysis and comparisons seem thorough.
SP:d94b0a398257e68c8888f0fdb9e6765881f798af
The paper proposes a recurrent, autoregressive architecture to model temporal knowledge graphs and perform multi-time-step inference in the form of future link prediction. Specifically, given a historical sequence of graphs at discrete time points, the authors build a sequential probabilistic approach to infer the next graph, using a joint distribution over all previous graphs factorized into conditional distributions over subject, relation, and object. The model is parameterized by a recurrent architecture that employs a multi-step aggregation to capture information within the graph at a particular time step. The authors also propose a sequential approach to perform multi-step inference. The proposed method is evaluated on the task of future link prediction against several baselines, both static and dynamic, and an ablation analysis is provided to measure the effect of each component of the architecture.
SP:d94b0a398257e68c8888f0fdb9e6765881f798af
Identifying Weights and Architectures of Unknown ReLU Networks
1 INTRODUCTION

The behavior of deep neural networks is as complex as it is powerful. The relation of individual parameters to the network's output is highly nonlinear and is generally unclear to an external observer. Consequently, it has been widely supposed in the field that it is impossible to recover the parameters of a network merely by observing its output on different inputs. Beyond informing our understanding of deep learning, going from function to parameters could have serious implications for security and privacy. In many deployed deep learning systems, the output is freely available, but the network used to generate that output is not disclosed. The ability to uncover a confidential network not only would make it available for public use but could even expose data used to train the network, if such data could be reconstructed from the network's weights. This topic also has implications for the study of biological neural networks. Experimental neuroscientists can record some variables within the brain (e.g., the output of a complex cell in primary visual cortex) but not others (e.g., the pre-synaptic simple cells), and many biological neurons appear to be well modeled as the ReLU of a linear combination of their inputs (Chance et al., 2002). It would be highly useful if we could reverse-engineer the internal components of a neural circuit based on recordings of the output and our choice of input stimuli. In this work, we show that it is, in fact, possible in many cases to recover the structure and weights of an unknown ReLU network by querying it. Our method leverages the fact that a ReLU network is piecewise linear and transitions between linear pieces exactly when one of the ReLUs of the network transitions from its inactive to its active state. We attempt to identify the piecewise linear surfaces in input space where individual neurons transition from inactive to active. For neurons in the first layer, such boundaries are hyperplanes, and the equations of these hyperplanes determine the weights and biases of the first layer (up to sign and scaling). For neurons in subsequent layers, the boundaries are "bent hyperplanes" that bend where they intersect boundaries associated with earlier layers. Measuring these intersections allows us to recover the weights between the corresponding neurons. Our major contributions are:
• We identify how the architecture, weights, and biases of a network can be recovered from the arrangement of boundaries between linear regions in the network.
• We implement this procedure and demonstrate its success in recovering trained and untrained ReLU networks.
• We show that this algorithm "degrades gracefully," providing partial weights even when full recovery is not possible.

2 RELATED WORK

Various works within the deep learning literature have considered the problem of learning a network given its output on inputs drawn (non-adaptively) from a given distribution. It is known that this problem is in general hard (Goel et al., 2017), though positive results have been found for certain specific choices of distribution in the case that the network has only one or two layers (Ge et al., 2019; Goel & Klivans, 2017). By contrast, we consider the problem of learning about a network of arbitrary depth, given the ability to issue queries at specified input points. In this work, we leverage the theory of linear regions within a ReLU network, an area that has been studied, e.g., by Telgarsky (2015); Raghu et al.
( 2017 ) ; Hanin & Rolnick ( 2019a ) . Most recently Hanin & Rolnick ( 2019b ) considered the boundaries between linear regions as arrangements of “ bent hyperplanes ” . Milli et al . ( 2019 ) ; Jagielski et al . ( 2019 ) show the effectiveness of this strategy for networks with one hidden layer . For inference of other properties of unknown networks , see e.g . Oh et al . ( 2019 ) . Neuroscientists have long considered similar problems with biological neural networks , albeit armed with prior knowledge about network structure . For example , it is believed that complex cells in the primary visual cortex , which are often seen as translation-invariant edge detectors , obtain their invariance through what is effectively a two-layer neural network ( Kording et al. , 2004 ) . A first layer is believed to extract edges , while a second layer essentially implements maxpooling . Heggelund ( 1981 ) perform physical experiments akin to our approach of identifying one ReLU at a time , by applying inputs that move individual neurons above their critical threshold one by one . Being able to solve such problems more generically would be useful for a range of neuroscience applications . 3 PRELIMINARIES . 3.1 DEFINITIONS . In general , we will consider fully connected , feed-forward neural networks ( multilayer perceptrons ) with ReLU activations . Each such network N defines a function N ( x ) from input space Rnin to output space Rnout . We denote the layer widths of the network by nin ( input layer ) , n1 , n2 , . . . , nd , nout ( output layer ) . We use Wk to denote the weight matrix from layer ( k − 1 ) to layer k , where layer 0 is the input ; and bk denotes the bias vector for layer k. Given a neuron z in the network , we use z ( x ) to denote its preactivation for input x ∈ Rnin . Thus , for the jth neuron in layer k , we have zkj ( x ) = nk−1∑ i=1 Wkij ReLU ( z k−1 i ( x ) + b k i ) . For each neuron z , we will use Bz to denote the set of x for which z ( x ) = 0 . In general1 , Bz will be an ( nin−1 ) -dimensional piecewise linear surface in Rnin ( see Figure 1 , in which input dimension is 2 and the Bz are simply lines ) . We call Bz the boundary associated with neuron z , and we say that B = ⋃ Bz is the boundary of the overall network . We refer to the connected components of Rnin \B as regions . Throughout this paper , we will make the Linear Regions Assumption : The set of regions is the set of linear pieces of the piecewise linear function N ( x ) . While this assumption has tacitly been made in the prior literature , it is noted in Hanin & Rolnick ( 2019b ) that there are cases where it does not hold – for example , if an entire layer of the network is zeroed out for some inputs . 3.2 ISOMORPHISMS OF NETWORKS . Before showing how to infer the parameters of a neural network , we must consider to what extent these parameters can be inferred unambiguously . Given a network N , there are a number of other networks N ′ that define exactly the same function from input space to output space . We say that such networks are isomorphic to N . For multilayer perceptrons with ReLU activation , we consider the following network isomorphisms : Permutation . The order of neurons in each layer of a network N does not affect the underlying function . Formally , let pk , σ ( N ) be the network obtained fromN by permuting layer k according to σ ( along with the corresponding weight vectors and biases ) . Then , pk , σ ( N ) is isomorphic to N for every layer k and permutation σ . Scaling . 
Due to the ReLU's equivariance under multiplication, it is possible to scale the incoming weights and biases of any neuron, while inversely scaling the outgoing weights, leaving the overall function unchanged. Formally, for z the ith neuron in layer k and c any positive constant, let $s_{z,c}(N)$ be the network obtained from N by replacing $W^k_{\cdot i}$, $b^k_i$, and $W^{k+1}_{i\cdot}$ by $cW^k_{\cdot i}$, $cb^k_i$, and $(1/c)W^{k+1}_{i\cdot}$, respectively. It is simple to prove that $s_{z,c}(N)$ is isomorphic to N (see Appendix A). Thus, we can hope to recover a network only up to layer-wise permutation and neuron-wise scaling. Formally, $p_{k,\sigma}(N)$ and $s_{z,c}(N)$ are generators for a group of isomorphisms of N. (As we shall see in §5, some networks also possess additional isomorphisms.)

4 THE ALGORITHM

4.1 INTUITION

Consider a network N and a neuron z ∈ N, so that $B_z$ is the boundary associated with neuron z. Recall that $B_z$ is piecewise linear. We say that $B_z$ bends at a point if $B_z$ is nonlinear at that point (that is, if the point lies on the boundary of several regions). As observed in Hanin & Rolnick (2019b), $B_z$ can bend only at points where it intersects boundaries $B_{z'}$ for z′ in an earlier layer of the network. In general, the converse also holds: $B_z$ bends wherever it intersects such a boundary $B_{z'}$ (see Appendix A). Then, for any two boundaries $B_z$ and $B_{z'}$, one of the following must hold: $B_z$ bends at their intersection (in which case z occurs in a deeper layer of the network), $B_{z'}$ bends (in which case z′ occurs in a deeper layer), or neither bends (in which case z and z′ occur in the same layer). It is not possible for both $B_z$ and $B_{z'}$ to bend at their intersection, unless that intersection is also contained in another boundary, which is vanishingly unlikely in general. Thus, the architecture of the network can be determined by evaluating the boundaries $B_z$ and where they bend in relation to one another. Moving beyond architecture, the weights and biases of the network can also be determined from the boundaries, one layer at a time. Boundaries for neurons in the first layer do not bend and are simply hyperplanes; the equations of these hyperplanes expose the weights from the input to the first layer (up to permutation, scaling, and sign). For each subsequent layer, the weight between neurons z and z′ can be determined by calculating how $B_{z'}$ bends when it crosses $B_z$. The details of our algorithm below are intended to make these intuitions concrete and perform efficiently even when the input space is high-dimensional.

Algorithm 1 The first layer
Initialize P1 = P2 = S1 = {}
for t = 1, ..., L do
  Sample line segment ℓ
  P1 ← P1 ∪ PointsOnLine(ℓ)
end for
for p ∈ P1 do
  H = InferHyperplane(p)
  if TestHyperplane(H) then
    S1 ← S1 ∪ GetParams(H)
  else
    P2 ← P2 ∪ {p}
  end if
end for
return Parameters S1, unused sample points P2

Algorithm 2 Additional layers
Input: Pk and S1, ..., Sk−1
Initialize Sk = {}
for p1 ∈ Pk−1 on boundary Bz do
  Initialize Az = {p1}, Lz = Hz = {}
  while Lz ⊉ Layer k−1 do
    Pick pi ∈ Az and a direction v
    p′, Bz′ = ClosestBoundary(pi, v)
    if p′ is on a boundary then
      Az ← Az ∪ {p′ + ε}
      Lz ← Lz ∪ {z′}
      Hz ← Hz ∪ {InferHyperplane(pi)}
    else
      Pk ← Pk ∪ {p1}; break
    end if
  end while
  if Lz ⊇ Layer k−1 then
    Sk ← GetParams(Tz)
  end if
end for
return Parameters Sk, unused sample points Pk+1
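The subroutines above (PointsOnLine, InferHyperplane, GetParams) are not spelled out in this excerpt, but the first-layer intuition can be illustrated with a short sketch. The Python code below is an illustration under stated assumptions, not the authors' algorithm: `net` is a black-box, scalar-output ReLU network, the segment from a to b is assumed to cross at least one boundary, and the finite-difference steps are assumed small enough to stay inside a single linear region on each side of the boundary.

```python
import numpy as np

def find_boundary_point(net, a, b, tol=1e-8):
    """Locate a point between a and b where t -> net(a + t*(b - a)) changes
    linear piece, by bisecting on the deviation from linearity."""
    def deviates(lo, hi):
        mid = 0.5 * (lo + hi)
        # On a single linear piece, the midpoint value equals the chord value.
        return abs(net(mid) - 0.5 * (net(lo) + net(hi))) > tol
    lo, hi = a, b
    while np.linalg.norm(hi - lo) > 1e-6:
        mid = 0.5 * (lo + hi)
        # Keep a half-interval that still contains a bend.
        if deviates(lo, mid):
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def boundary_normal(net, p, offset=1e-3, fd=1e-6):
    """Estimate, up to sign and scaling, the weight vector of a first-layer
    neuron whose boundary passes through p: the jump in the input gradient
    across the boundary is parallel to that neuron's weight vector."""
    d = p.shape[0]
    v = np.random.randn(d)
    v /= np.linalg.norm(v)

    def grad(x):
        # Forward-difference estimate of the input gradient at x.
        base = net(x)
        return np.array([(net(x + fd * e) - base) / fd for e in np.eye(d)])

    return grad(p + offset * v) - grad(p - offset * v)
```

Once a boundary point p of a first-layer neuron is located, the returned direction is parallel to that neuron's weight vector w, and the bias follows (up to the same scaling and sign) from requiring the hyperplane to pass through p, i.e., $w \cdot p + b = 0$.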
This paper introduces an approach to recovering the weights of ReLU neural networks by querying the network with specifically constructed inputs. The authors observe that such networks are piecewise linear, with the boundaries between linear regions corresponding to activations of individual neurons. This makes it possible to identify the hyperplanes that constitute the decision boundary and to find intersection points of the boundaries corresponding to neurons at different layers of the network. However, weights can be recovered only up to permutations of neurons in each layer and up to a constant scaling factor for each layer.
SP:6ecf7180d11e9eaf100d489c1c20123cde7a258d
Identifying Weights and Architectures of Unknown ReLU Networks
1 INTRODUCTION . The behavior of deep neural networks is as complex as it is powerful . The relation of individual parameters to the network ’ s output is highly nonlinear and is generally unclear to an external observer . Consequently , it has been widely supposed in the field that it is impossible to recover the parameters of a network merely by observing its output on different inputs . Beyond informing our understanding of deep learning , going from function to parameters could have serious implications for security and privacy . In many deployed deep learning systems , the output is freely available , but the network used to generate that output is not disclosed . The ability to uncover a confidential network not only would make it available for public use but could even expose data used to train the network if such data could be reconstructed from the network ’ s weights . This topic also has implications for the study of biological neural networks . Experimental neuroscientists can record some variables within the brain ( e.g . the output of a complex cell in primary visual cortex ) but not others ( e.g . the pre-synaptic simple cells ) , and many biological neurons appear to be well modeled as the ReLU of a linear combination of their inputs ( Chance et al. , 2002 ) . It would be highly useful if we could reverse engineer the internal components of a neural circuit based on recordings of the output and our choice of input stimuli . In this work , we show that it is , in fact , possible in many cases to recover the structure and weights of an unknown ReLU network by querying it . Our method leverages the fact that a ReLU network is piecewise linear and transitions between linear pieces exactly when one of the ReLUs of the network transitions from its inactive to its active state . We attempt to identify the piecewise linear surfaces in input space where individual neurons transition from inactive to active . For neurons in the first layer , such boundaries are hyperplanes , for which the equations determine the weights and biases of the first layer ( up to sign and scaling ) . For neurons in subsequent layers , the boundaries are “ bent hyperplanes ” that bend where they intersect boundaries associated with earlier layers . Measuring these intersections allows us to recover the weights between the corresponding neurons . Our major contributions are : • We identify how the architecture , weights , and biases of a network can be recovered from the arrangement of boundaries between linear regions in the network . • We implement this procedure and demonstrate its success in recovering trained and untrained ReLU networks . • We show that this algorithm “ degrades gracefully , ” providing partial weights even when full 2 RELATED WORK . Various works within the deep learning literature have considered the problem of learning a network given its output on inputs drawn ( non-adaptively ) from a given distribution . It is known that this problem is in general hard ( Goel et al. , 2017 ) , though positive results have been found for certain specific choices of distribution in the case that the network has only one or two layers ( Ge et al. , 2019 ; Goel & Klivans , 2017 ) . By contrast , we consider the problem of learning about a network of arbitrary depth , given the ability to issue queries at specified input points . In this work , we leverage the theory of linear regions within a ReLU network , an area that has been studied e.g . by Telgarsky ( 2015 ) ; Raghu et al . 
This paper introduces a procedure for reconstructing the architecture and weights of a deep ReLU network, given only the ability to query the network (observe network outputs for a sequence of inputs). The algorithm takes advantage of the piecewise linearity of ReLU networks and an analysis by [Hanin and Rolnick, 2019b] of the boundaries between linear regions as bent hyperplanes. The observation that a boundary bends only at boundaries corresponding to neurons in earlier network layers leads to a recursive layer-by-layer procedure for recovering network parameters. Experiments show the ability to recover both random networks and networks trained for a memorization task. The method is currently limited to ReLU networks and does not account for any parameter-sharing structure, such as that found in convolutional networks.
SP:6ecf7180d11e9eaf100d489c1c20123cde7a258d
ISBNet: Instance-aware Selective Branching Networks
Recent years have witnessed growing interests in designing efficient neural networks and neural architecture search ( NAS ) . Although remarkable efficiency and accuracy have been achieved , existing expert designed and NAS models neglect the fact that input instances are of varying complexity and thus different amounts of computation are required . Inference with a fixed model that processes all instances through the same transformations would incur computational resources unnecessarily . Customizing the model capacity in an instance-aware manner is required to alleviate such a problem . In this paper , we propose a novel Instanceaware Selective Branching Network-ISBNet to support efficient instance-level inference by selectively bypassing transformation branches of insignificant importance weight . These weights are dynamically determined by a lightweight hypernetwork SelectionNet and recalibrated by gumbel-softmax for sparse branch selection . Extensive experiments show that ISBNet achieves extremely efficient inference in terms of parameter size and FLOPs comparing to existing networks . For example , ISBNet takes only 8.70 % parameters and 31.01 % FLOPs of the efficient network MobileNetV2 with comparable accuracy on CIFAR-10 . 1 INTRODUCTION . Deep convolutional neural networks ( CNNs ) ( He et al. , 2016 ; Zoph et al. , 2018 ) have revolutionized computer vision with increasingly larger and more sophisticated architectures . These model architectures have been designed and calibrated by domain experts with rich engineering experience . To achieve good inference results , these models typically comprise hundreds of layers and contain tens of millions of parameters and consequently , consume substantial amounts of computational resources for both training and inference . Recently , there has been a growing interest in efficient network design ( Howard et al. , 2017 ; Iandola et al. , 2016 ; Zhang et al. , 2018 ; Sandler et al. , 2018 ) and neural architecture search ( NAS ) ( Zoph et al. , 2018 ; Real et al. , 2018 ; Liu et al. , 2018b ) , respectively with the objective of devising network architectures that are efficient during inference and automating the architecture design process . Many efficient architectures have indeed been designed in recent years . SqueezeNet ( Iandola et al. , 2016 ) and MobileNet ( Howard et al. , 2017 ) substantially reduce parameter size and computation cost in terms of FLOPs on mobile devices . More recent works such as MobileNetV2 ( Sandler et al. , 2018 ) and ShuffleNetV2 ( Ma et al. , 2018 ) further reduce the FLOPs . It is well recognized that devising these architectures is non-trivial and requires engineering expertise . Automating the architecture design process via neural architecture search ( NAS ) has attracted increasing attention in recent years . Mainstream NAS algorithms ( Zoph & Le , 2016 ; Zoph et al. , 2018 ; Real et al. , 2018 ) search for the network architecture iteratively . In each iteration , an architecture is proposed by a controller , and then trained and evaluated . The evaluation performance is in turn exploited to update the controller . This process is incredibly slow because both the controller and each derived architecture require training . For instance , the reinforcement learning ( RL ) based controller NASNet ( Zoph et al. , 2018 ) takes 1800 GPU days and the evolution algorithm based controller AmoebaNet ( Real et al. , 2018 ) incurs 3150 GPU days to obtain the best architecture . 
Many acceleration methods ( Baker et al. , 2017 ; Liu et al. , 2018a ; Bender et al. , 2018 ; Pham et al. , 2018 ) have been proposed to accelerate the search process , and more recent works ( Liu et al. , 2018b ; Xie et al. , 2018 ; Wu et al. , 2018 ; Cai et al. , 2018 ) remove the controller and instead optimize the architecture selection and parameters together with gradient-based optimization algorithms . While both expert designed and NAS searched models have produced remarkable efficiency and prediction performance , they have neglected one critical issue that would affect inference efficiency . The architectures of these models are fixed during inference time and thus not adaptive to the varying complexity of input instances . However , in real-world applications , there are only a small fraction of input instances requiring deep representations ( Wang et al. , 2018 ; Huang et al. , 2017a ) . Consequently , expensive computational resources would be wasted if all instances are treated equally . Designing a model with sufficient representational power to cover the hard instances , and meanwhile a finer-grained control to provide just necessary computation dynamically for instance of varying difficulty is therefore essential . In this paper , we propose ISBNet to address the aforementioned issue with its building block Cell as illustrated in Figure 1 . Following the widely adopted strategy in NAS ( Zoph et al. , 2018 ; Pham et al. , 2018 ; Liu et al. , 2018b ; Xie et al. , 2018 ) , the backbone network is a stack of L structurally identical cells , receiving inputs from their two previous cells and each cell contains N inter-connected computational Nodes . The architecture of ISBNet deviates from the conventional wisdom of NAS which painstakingly search for the connection topology and the corresponding transformation operation of each connection . In ISBNet , each node is instead simply connected to its prescribed preceding node ( s ) and each connection transforms via a candidate set of B operations ( branches ) . To allow for instance-aware inference control in the branch level , we integrate L lightweight hypernetworks SelectionNets , one for each cell to determine the importance weight of each branch . Gumbel-softmax ( Jang et al. , 2016 ; Maddison et al. , 2016 ) is further introduced to recalibrate these weights , which enables efficient gradient-based optimization during training , and more importantly , leads to sparse branch selection during inference for efficiency . The contributions of ISBNet can be summarized as follows : • ISBNet is a general architecture framework combining advantages from both efficient network design and NAS , whose components are readily customizable . • ISBNet is a novel architecture supporting the instance-level selective branching mechanism by introducing lightweight SelectionNets , which improves inference efficiency significantly by reducing redundant computation . • ISBNet successfully integrates gumbel-softmax to the branch selection process , which enables direct gradient descent optimization and is more tractable than RL-based method . • ISBNet achieves state-of-the-art inference efficiency in terms of parameter size and FLOPs and inherently supports applications requiring fine-grained instance-level control . Our experiments show that ISBNet is extremely efficient during inference and successfully selects only vital branches on a per-input basis . 
In particular , with a minor 1.07 % accuracy decrease , ISBNet reduces the parameter size and FLOPs by 10x and 11.31x respectively comparing to the NAS searched high-performance architecture DARTS ( Liu et al. , 2018b ) . Furthermore , with a tiny model of 0.57M parameters , ISBNet achieves much better accuracy while with only 8.03 % and 30.60 % inference time parameter size and FLOPs comparing to the expert-designed efficient network ShuffleNetV2 1.5x ( Ma et al. , 2018 ) . We also conduct ablation studies and visualize the branch selection process to understand the proposed architecture better . The main results and findings are summarized in Sec 4.2 and Sec 4.3 . 2 RELATED WORK . Efficient Network Design . Designing resource-aware networks ( Iandola et al. , 2016 ; Gholami et al. , 2018 ; Ma et al. , 2018 ; Sandler et al. , 2018 ; Gholami et al. , 2018 ) has attracted a great deal of attention in recent years . However , these works mainly focus on reducing parameter size and inference FLOPs in many works . For instances , SqueezeNet ( Hu et al. , 2018 ) reduces parameters and computation with the fire module ; MobileNetV2 ( Sandler et al. , 2018 ) utilize depth-wise and point-wise convolution for more parameter-efficient convolutional neural networks ; ShuffleNetV2 ( Ma et al. , 2018 ) proposes lightweight group convolution with channel shuffle to facilitate the information flowing across the channels . To make inference efficient , many of these transformations are introduced to the candidate operation set in ISBNet . Many recent works explore conditional ( Wang et al. , 2018 ) and resource-constrained prediction ( Huang et al. , 2017a ) for efficiency . SkipNet ( Wang et al. , 2018 ) introduces a gating hypernetwork to determine whether to bypass each residual layer ( He et al. , 2016 ) conditional on the current input instance . Compared with SkipNet , ISBNet provides more efficient and diversified branch selections for the backbone network and the hypernetworks in ISBNet are optimized in an end-to-end training manner instead of generally less tractable policy gradient ( Williams , 1992 ) . MSDNet ( Huang et al. , 2017a ) supports budgeted prediction within prescribed computational resource constraint during inference by inserting multiple classifiers into a 2D multi-scale version of DenseNet ( Huang et al. , 2017b ) . By early-exit into a classifier , MSDNet can provide approximate predictions with minor accuracy decrease . Functionally , ISBNet also supports budgeted prediction by dynamically controlling the number of branches selected , therefore per-input inference cost . Neural Architecture Search . Mainstream NAS ( Zoph et al. , 2018 ; Real et al. , 2018 ) treats architecture search as a stand-alone process whose optimization is severed from candidate architecture optimization . Search algorithms such as RL-based NAS ( Zoph et al. , 2018 ) and evolutionary-based NAS ( Real et al. , 2018 ) obtain state-of-the-art architectures at an unprecedented amount of the GPUtime searching cost . Recently , many works have been proposed to accelerate the search pipeline , e.g. , via performance prediction ( Baker et al. , 2017 ; Liu et al. , 2018a ) , hypernetworks generating initialization weights ( Brock et al. , 2017 ) , weight sharing ( Bender et al. , 2018 ; Pham et al. , 2018 ) . These approaches greatly alleviate the search inefficiency while the scalability issue remains unsolved . A number of proposals ( Liu et al. , 2018b ; Wu et al. , 2018 ; Cai et al. 
, 2018) instead integrate the architecture search process and architecture optimization into the same gradient-based optimization framework. In particular, DARTS (Liu et al., 2018b) relaxes the discrete search space to be continuous by introducing operation mixing weights on each connection and optimizes these weights directly with gradients back-propagated from the validation loss. Similarly, the discrete search space in SNAS (Xie et al., 2018) is modeled with sets of one-hot random variables for each connection, which is made differentiable by relaxing the discrete distribution with the continuous concrete distribution (Jang et al., 2016; Maddison et al., 2016). In terms of architecture optimization, ISBNet also relaxes the discrete branch selection to continuous importance weights optimized by gradient descent; however, instead of optimizing the weights directly, SelectionNets are introduced to dynamically generate these weights, which is more effective and meanwhile brings about larger model capacity. Further, SelectionNets enable instance-level architecture customization rather than finding a fixed model.

3 INSTANCE-AWARE SELECTIVE BRANCHING NETWORK. 3.1 THE BACKBONE NETWORK. The backbone network is constructed with a stack of L cells, each of which is a directed acyclic graph consisting of an ordered sequence of N intermediate nodes. As illustrated in Figure 1, $x_0^l$ and $x_1^l$ are the cell input nodes from the two preceding cells; each intermediate node $x_i^l$ ($i \geq 2$) of the $l$th cell forms a latent representation and receives $n$ input nodes¹ from its preceding nodes:

$$x_i^l = \sum_{j \in S_i^l} F_{j,i}(x_j^l), \qquad S_i^l \subset \{0, 1, \cdots, i-1\} \ \wedge\ |S_i^l| = n \qquad (1)$$

(¹$n$ can be larger than 2 for a deeper and wider local representation; e.g., $n = i-1$ for each $x_i^l$ leads to dense connections, i.e., DenseNet (Huang et al., 2017b).)

Thereby, each cell contains $C = n \cdot N$ connections in total. The connection passes information from node $x_j^l$ to $x_i^l$ after the aggregation of a candidate set of B branches of transformations, inspired by widely adopted transformations in NAS (Pham et al., 2018; Liu et al., 2018b; Xie et al., 2018) and efficient network design (Iandola et al., 2016; Sandler et al., 2018; Zhang et al., 2018):

$$F_{j,i}(x_j^l) = \sum_{b=1}^{B} w_b \cdot F_b(x_j^l) \qquad (2)$$

where $w_b$ here represents the importance of the $b$th branch (operation) of the connection and is dynamically generated by the cell hypernetwork rather than being a fixed learned parameter as in existing NAS methods (Liu et al., 2018b; Xie et al., 2018). We shall introduce the hypernetwork in Section 3.2. Finally, the output of the cell $x_{out}^l$ is aggregated by concatenating the outputs of all the intermediate nodes. We shall use the superscript $l$ and the subscripts $c$ and $b$ to index the cell, connection, and branch respectively. Recent work (Xie et al., 2019) reveals that architectures with randomly generated connections achieve surprisingly competitive results compared to the best NAS models, which is confirmed empirically in our experiments on smaller datasets. In this paper, we thus mainly focus on the branch transformation and selection and their impact on inference efficiency instead of specifying a detailed connection topology.
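To make the cell formulation concrete, here is a small PyTorch sketch (my own illustration, not the authors' code; the convolutional branch set, channel sizes, and fixed two-predecessor topology are assumptions rather than the paper's exact operation set) of a single connection implementing the weighted branch aggregation of Equation 2, and of one node summing its incoming connections as in Equation 1.

```python
import torch
import torch.nn as nn

class Connection(nn.Module):
    """One connection x_j -> x_i of a cell: a weighted sum over B candidate branches (Eq. 2).
    The branch set here (1x1 / 3x3 / 5x5 convolutions) is only a stand-in for the paper's operations."""
    def __init__(self, channels, num_branches=3):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=2 * k + 1, padding=k)
            for k in range(num_branches)
        ])

    def forward(self, x, w):
        # w: tensor of shape (B,) holding the importance weights for this connection
        return sum(w[b] * op(x) for b, op in enumerate(self.branches))

# Eq. 1 for a single intermediate node with n = 2 predecessors:
conn_a, conn_b = Connection(16), Connection(16)
x0 = torch.randn(1, 16, 8, 8)                   # outputs of the two predecessor nodes
x1 = torch.randn(1, 16, 8, 8)
w = torch.softmax(torch.randn(2, 3), dim=-1)    # per-connection branch weights (placeholder for SelectionNet output)
x2 = conn_a(x0, w[0]) + conn_b(x1, w[1])
print(x2.shape)                                 # torch.Size([1, 16, 8, 8])
```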
Under this architecture formulation framework, we can readily adjust the number of candidate branches B and also the specific transformations before training, customizing model capacity and efficiency respectively depending on the difficulty of the task and resource constraints in deployment.

3.2 SelectionNet FOR WEIGHT RECALIBRATION. To support instance-level inference control, we introduce L lightweight hypernetworks SelectionNet, one for each cell. Each SelectionNet $\mathrm{SNet}^l$ receives the same input as the $l$th cell, specifically the two output nodes $x_{out}^{l-2}, x_{out}^{l-1}$ (i.e., $x_0^l, x_1^l$) from the preceding cells, and concurrently produces C sets of recalibration weights, one for each connection of the cell:

$$W^l = \mathrm{SNet}^l(x_0^l, x_1^l) \qquad (3)$$

where $W^l \in \mathbb{R}^{C \times B}$ is the recalibration weight matrix for the $l$th cell. The SelectionNet $\mathrm{SNet}^l$ dynamically generates these weights with a pipeline of $m = 2$ convolutional blocks, a global average pooling and finally an affine transformation. For the $m$ convolutional blocks, we adopt separable convolution (Sandler et al., 2018), which contains a point-wise convolution and a depth-wise convolution of stride 2 and kernel size 5 × 5. The stride reduces the parameter size and computation of $\mathrm{SNet}^l$, and the larger kernel size for the depth-wise convolution incurs negligible overhead while extracting features for the immediate weight generation with a larger local receptive field. The recalibration weights given by the SelectionNet are reminiscent of convolutional attention mechanisms (Hu et al., 2018; Woo et al., 2018; Newell et al., 2016), where attention weights are determined dynamically by summarizing information of the immediate input and then exploited to recalibrate the relative importance of different input dimensions, e.g., channels in SENet (Hu et al., 2018). In ISBNet, the recalibration weights are introduced at the branch level. Particularly, each candidate operation of the connection is coupled with a rescaling weight. The gumbel-softmax technique (Jang et al., 2016; Maddison et al., 2016) and the reparameterization trick (Kingma & Welling, 2013) are introduced to further recalibrate these weights generated by the SelectionNet, to enable efficient gradient-based optimization for the whole network during training, and more importantly, to ensure a sparse selection of important branches during inference. More specifically, each set of importance weights $W_c^l \in \mathbb{R}^B$ for the $c$th connection of the $l$th cell ($C_c^l$), after the following gumbel-softmax recalibration, follows a concrete distribution (Maddison et al., 2016) controlled by a temperature parameter $\tau$:

$$\tilde{w}_{c,b}^l = \frac{\exp\big((w_{c,b}^l + G_{c,b}^l)/\tau\big)}{\sum_{b'=1}^{B} \exp\big((w_{c,b'}^l + G_{c,b'}^l)/\tau\big)}, \qquad \tau > 0 \qquad (4)$$

where $\tilde{w}_{c,b}^l$ is then directly used for branch recalibration as in Equation 2, and $G_{c,b}^l = -\log(-\log(U_{c,b}^l))$ here is a gumbel random variable coupled with the $b$th branch, obtained by sampling $U_{c,b}^l$ from Uniform(0, 1) (Jang et al., 2016). The concrete distribution (Maddison et al., 2016) suggests that (1) $\tilde{w}_{c,b}^l = \frac{1}{B}$ as $\tau \to +\infty$, and more importantly (2):

$$p\Big(\lim_{\tau \to 0} \tilde{w}_{c,b}^l = 1\Big) = \exp(w_{c,b}^l) \Big/ \sum_{b'=1}^{B} \exp(w_{c,b'}^l) \qquad (5)$$

Therefore, a high temperature leads to uniform, dense branch selection, while a lower temperature tends to sparsely sample branches following a categorical distribution parameterized by $\mathrm{softmax}(W_c^l)$.
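The recalibration in Equation 4 is straightforward to implement. Below is a minimal NumPy sketch (sizes and the random seed are arbitrary, not taken from the paper) that shows how the temperature τ trades off dense, near-uniform branch weights against near one-hot, sparse selections.

```python
import numpy as np

def gumbel_softmax_recalibrate(w, tau, rng):
    """Recalibrate raw branch weights w (shape: C x B) with Gumbel noise and a softmax
    at temperature tau (Eq. 4). Each row of the result sums to 1 over the branch axis."""
    u = rng.uniform(low=1e-8, high=1.0, size=w.shape)   # U ~ Uniform(0, 1), kept away from 0
    g = -np.log(-np.log(u))                             # Gumbel(0, 1) noise
    logits = (w + g) / tau
    logits -= logits.max(axis=-1, keepdims=True)        # numerical stability
    e = np.exp(logits)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 5))        # 4 connections, 5 candidate branches each

for tau in (5.0, 1.0, 0.1):
    w_tilde = gumbel_softmax_recalibrate(w, tau, rng)
    print(f"tau={tau}: max weight per connection = {w_tilde.max(axis=-1).round(2)}")
# High tau -> near-uniform weights over branches; low tau -> near one-hot (sparse) selection.
```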
3.3 OPTIMIZATION AND INFERENCE FOR ISBNet. With the continuous relaxation of the gumbel-softmax (Jang et al., 2016; Maddison et al., 2016) and the reparameterization trick (Kingma & Welling, 2013), the branch selection process of the SelectionNets is made directly differentiable with respect to the weight $w_{c,b}^l$. In particular, the gradient $\partial \mathcal{L} / \partial \tilde{w}_{c,b}^l$ backpropagated from the loss function $\mathcal{L}$ to $\tilde{w}_{c,b}^l$ through the backbone network can be directly backpropagated to $w_{c,b}^l$ with low variance (Maddison et al., 2016), and further to the $l$th SelectionNet unimpeded. Therefore, the parameters of the whole network can be optimized in an end-to-end manner by gradient descent. The temperature $\tau$ of Equation 4 regulates the sparsity of the branch selection. A relatively higher temperature forces the weights to distribute more uniformly so that all the branches of each connection are efficiently trained. A low temperature instead tends to sparsely sample one branch from the categorical distribution parameterized by the importance weights dynamically determined by the SelectionNets, thus supporting finer-grained instance-level inference control by bypassing unimportant branches. To leverage both characteristics, we propose a two-stage training scheme for ISBNet: (1) the first stage pretrains the whole network with a fixed, relatively high temperature until convergence; (2) the second stage fine-tunes the parameters with $\tau$ steadily annealing to a relatively low temperature. The first stage ensures that branches are sufficiently optimized before the instance-aware selection, and the fine-tuning in the second stage helps maintain the performance of ISBNet under sparse branch selection during inference. To further promote inference efficiency and reduce redundancy, a regularization term is explicitly introduced in the fine-tuning stage which takes into account the expectation of the resource consumption $R$ in the final loss function $\mathcal{L}$ for correctly classified instances:

$$\mathcal{L} = \mathcal{L}_{CE} + \lambda_1 \|w\|_2^2 + \lambda_2 \mathbb{1}_{\hat{y}=y} \log \mathbb{E}[R] \approx \mathcal{L}_{CE} + \lambda_1 \|w\|_2^2 + \lambda_2 \mathbb{1}_{\hat{y}=y} \log \sum_{l=1}^{L} \sum_{c=1}^{C} \sum_{b=1}^{B} \tilde{w}_{c,b}^l \cdot R\big(F_{c,b}^l(\cdot)\big) \qquad (6)$$

where $\mathcal{L}_{CE}$ and $\lambda_1 \|w\|_2^2$ denote the cross-entropy loss and the weight decay term, $y$ is the ground-truth class label, $\hat{y}$ the prediction, $\lambda_2$ controls the regularization strength, and $R(\cdot)$ calculates the resource consumption of each operation $F_{c,b}^l(\cdot)$. The operation importance weight $\tilde{w}_{c,b}^l$ represents the probability of the corresponding branch $F_{c,b}^l$ being selected during inference, and therefore the regularization term $\mathbb{E}[R]$ corresponds to the expectation of the aggregated resources required for each input instance. The resource regularizer is readily adjustable depending on deployment constraints, which may include parameter size, FLOPs, and memory access cost (MAC). In this work, we mainly focus on inference time, specifically FLOPs, which can be calculated beforehand for each branch. $R(F_{c,b}^l(\cdot))$ is thus a constant here, which means that the regularizer $R$ is also directly differentiable with respect to $\tilde{w}_{c,b}^l$. We denote ISBNet trained with regularization strength $\lambda_2$ as ISBNet-R-$\lambda_2$. During inference, instance-level selective branching is achieved for each connection $C_c^l$ by selecting the branches with the top-$k$ largest recalibration weights whose aggregated weight $s_c^l$ just exceeds a threshold $T$.
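The resource-regularized objective in Equation 6 can be sketched as below. This is an illustrative, simplified implementation rather than the authors' code: the tensor shapes, the constant per-branch FLOP table, averaging the penalty over the batch, and treating the correctness indicator as a non-differentiable mask are all my assumptions.

```python
import torch
import torch.nn.functional as F

def isbnet_loss(logits, labels, w_tilde, branch_flops, lam1, lam2, params_l2):
    """Sketch of Eq. 6: cross-entropy + weight decay + expected-FLOPs penalty,
    with the penalty applied only to correctly classified instances.
    w_tilde: (N, L, C, B) recalibrated branch weights per instance
    branch_flops: (L, C, B) constant per-branch FLOP counts"""
    ce = F.cross_entropy(logits, labels)
    correct = (logits.argmax(dim=-1) == labels).float()            # indicator 1_{y_hat = y}, per instance
    expected_flops = (w_tilde * branch_flops).sum(dim=(1, 2, 3))   # approximate E[R] per instance
    resource = (correct * expected_flops.clamp_min(1e-8).log()).mean()
    return ce + lam1 * params_l2 + lam2 * resource

# toy shapes: batch of 8, 10 classes, L=3 cells, C=4 connections, B=5 branches
N, K, L, C, B = 8, 10, 3, 4, 5
logits = torch.randn(N, K)
labels = torch.randint(0, K, (N,))
w_tilde = torch.softmax(torch.randn(N, L, C, B), dim=-1)
branch_flops = torch.rand(L, C, B) * 1e6
params_l2 = torch.tensor(0.0)          # placeholder for the ||w||^2 term over model parameters
loss = isbnet_loss(logits, labels, w_tilde, branch_flops, lam1=1e-4, lam2=1e-3, params_l2=params_l2)
print(loss.item())
```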
Denoting $\tilde{W}_c^l$ sorted in descending order as $\hat{W}_c^l$, then:

$$s_c^l = \min\Big\{ s_k : s_k = \sum_{b=1}^{k} \hat{w}_{c,b}^l \ \wedge\ s_k \geq T \Big\} \qquad (7)$$

After the selection, the recalibration weight $\tilde{w}_{c,b}^l$ of each selected branch is rescaled by $1/s_c^l$ to stabilize the scale of the representation. Consequently, the SelectionNet will select only the necessary branches² for each instance depending on the input difficulty and meanwhile the FLOPs of each branch, i.e., trading off between $\mathcal{L}_{CE}$ and $R$ in Equation 6. Furthermore, the resource consumption of each instance can be precisely regulated in a finer-grained manner by scheduling the threshold dynamically for each connection. In this paper, the same threshold is shared among all connections for simplicity, and ISBNet inference with threshold $t$ is denoted as ISBNet-T-$t$. Under such an inference scheme, the backbone network comprises up to $(2^B - 1)^{L \cdot C}$ possible candidate subnets, corresponding to each unique branch selection over all $L \cdot C$ connections. For a small ISBNet of 10 cells, with 5 candidate operations and 8 connections per cell, there are $(2^5 - 1)^{8 \cdot 10} \approx 2 \cdot 10^{119}$ possible candidate architectures with different branch combinations, which is orders of magnitude larger than the search space of conventional NAS (Pham et al., 2018; Liu et al., 2018b; Xie et al., 2018; Cai et al., 2018; Stamoulis et al., 2019).
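A minimal NumPy sketch of the threshold rule in Equation 7 follows (the weights and thresholds are made up for illustration): sort one connection's recalibrated weights, keep the smallest prefix whose cumulative weight reaches T, and rescale the kept weights by 1/s so they sum to one.

```python
import numpy as np

def select_branches(w_tilde, threshold):
    """Eq. 7: keep the top-k branches whose cumulative recalibrated weight just reaches
    the threshold T, then rescale the kept weights by 1/s.
    w_tilde: (B,) recalibrated weights of one connection."""
    order = np.argsort(w_tilde)[::-1]                 # branches sorted by weight, descending
    cumsum = np.cumsum(w_tilde[order])
    k = int(np.searchsorted(cumsum, threshold) + 1)   # smallest k with cumulative weight >= T
    k = min(k, len(w_tilde))
    s = cumsum[k - 1]
    kept = order[:k]
    selected = np.zeros_like(w_tilde)
    selected[kept] = w_tilde[kept] / s                # rescale by 1/s_c^l
    return kept, selected

w = np.array([0.55, 0.05, 0.25, 0.10, 0.05])          # recalibrated weights of one connection (B = 5)
for T in (0.5, 0.8, 0.95):
    kept, sel = select_branches(w, T)
    print(f"T={T}: keep branches {kept.tolist()}, rescaled weights {sel.round(2)}")
# Larger thresholds keep more branches (more compute); smaller thresholds keep fewer.
```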
Neural architecture search usually aims to find a single fixed architecture for the task of interest. The paper proposes to condition the architecture on the input instances by introducing a "selection network" that learns to retain a subset of branches in the architecture during each inference pass. The intuition is that easier instances require less compute (hence a shallower/sparser architecture) as compared to the more difficult ones. The authors show improved results on CIFAR-10 and ImageNet in terms of accuracy-latency trade-off over some handcrafted architectures and NAS baselines. The method resembles sparsely gated mixture of experts [1] at a high-level, but has been implemented in a way that better fits the context of architecture search (which is still technically interesting).
SP:cd75cf49f7e773f69c08c6489ec9f63f9a2de4ad
ISBNet: Instance-aware Selective Branching Networks
This paper proposes an instance-aware dynamic network, ISBNet, for efficient image classification. The network consists of layers of cell structures with multiple branches within each cell. During inference, the network uses SelectionNets to compute a "recalibration weight matrix", which essentially controls which branches within each cell are used to compute the output. Similar to previous work in NAS, this paper uses the Gumbel-Softmax to compute the branch selection probabilities. The network is trained to minimize a loss function that accounts for both accuracy and inference cost. Training is divided into two stages: first, a high temperature is used to ensure all branches are sufficiently optimized; in the second stage, the authors anneal the temperature. During inference, the branches with the largest weights are selected until their aggregated probability exceeds a certain threshold.
SP:cd75cf49f7e773f69c08c6489ec9f63f9a2de4ad
SPECTRA: Sparse Entity-centric Transitions
Learning an agent that interacts with objects is ubiquituous in many RL tasks . In most of them the agent ’ s actions have sparse effects : only a small subset of objects in the visual scene will be affected by the action taken . We introduce SPECTRA , a model for learning slot-structured transitions from raw visual observations that embodies this sparsity assumption . Our model is composed of a perception module that decomposes the visual scene into a set of latent objects representations ( i.e . slot-structured ) and a transition module that predicts the next latent set slot-wise and in a sparse way . We show that learning a perception module jointly with a sparse slot-structured transition model not only biases the model towards more entity-centric perceptual groupings but also enables intrinsic exploration strategy that aims at maximizing the number of objects changed in the agents trajectory . 1 INTRODUCTION . Recent model-free deep reinforcement learning ( DRL ) approaches have achieved human-level performance in a wide range of tasks such as games ( Mnih et al. , 2015 ) . A critical known drawback of these approaches is the vast amount of experience required to achieve good performance . The promise of model-based DRL is to improve sample-efficiency and generalization capacity across tasks . However model-based algorithms pose strong requirements about the models used . They have to make accurate predictions about the future states which can be very hard when dealing with high dimensional inputs such as images . Thus one of the core challenge in model-based DRL is learning accurate and computationally efficient transition models through interacting with the environment . Buesing et al . ( 2018 ) developed state-space models techniques to reduce computational complexity by making predictions at a higher level of abstraction , rather than at the level of raw pixel observations . However these methods focused on learning a state-space model that doesn ’ t capture the compositional nature of observations : the visual scene is represented by a single latent vector and thus can not be expected to generalize well to different objects layouts . Extensive work in cognitive science ( Baillargeon et al. , 1985 ; Spelke , 2013 ) indeed show that human perception is structured around objects . Object-oriented MDPs ( Diuk et al. , 2008 ) show the benefit of using object-oriented representations for structured exploration although the framework as it is presented requires hand-crafted symbolic representations . Bengio ( 2017 ) proposed as a prior ( the consciousness prior ) that the dependency between high-level variables ( such as those describing actions , states and their changes ) be represented by a sparse factor graph , i.e. , with few high-level variables at a time interacting closely , and inference performed sequentially using attention mechanisms to select a few relevant variables at each step . Besides , a recent line of work ( Greff et al. , 2017 ; van Steenkiste et al. , 2018 ; Eslami et al. , 2016 ; Kosiorek et al. , 2018 ; Greff et al. , 2019 ; Burgess et al. , 2019 ) has focused on unsupervised ways to decompose a raw visual scene in terms of objects . They rely on a slot-structured representation ( see Figure 1 ) of the scene where the latent space is a set of vectors and each vector of the set is supposed to represent an “ object ” ( which we refer to as “ entity ” ) of the scene . However , to the best of our knowledge , Watters et al . 
( 2019 ) is the only work that investigates the usefulness of slot-structured representations for RL . They introduced a method to learn a transition model that is applied to all the slots of their latent scene representation . Extending their work , we go further and posit that slot-wise transformations should be sparse and that the perception module should be learned jointly with the transition model . We introduce Sparse Entity-Centric Transitions ( SPECTRA ) , an entity-centric action-conditioned transition model that embodies the fact that the agents actions have sparse effects : that means that each action will change only a few slots in the latent set and let the remaining ones unchanged . This is motivated by the physical consideration that agent interventions are localized in time and space . Our contribution is motivated by three advantages : − Sparse transitions enable transferable model learning . The intuition here is that the sparsity of the transitions will bias the model towards learning primitive transformations ( e.g . how pushing a box affects the state of a box being pushed etc ) rather than configurationdependent transformations , the former being more directly transferable to environments with increased combinatorial complexity . − Sparse transitions enable a perception module ( when trained jointly ) to be biased towards more meaningful perceptual groupings , thus giving potentially better representations that can be used for downstream tasks , compared to representations learned from static data . − Sparse transitions enable an exploration strategy that learns to predict actions that will change the state of as many entities as possible in the environment without relying on pixels error loss . 2 RELATED WORK . Unsupervised visual scene decomposition . Learning good representations of complex visual scenes is a challenging problem for AI models that is far from solved . Recent work ( Greff et al. , 2017 ; van Steenkiste et al. , 2018 ; Eslami et al. , 2016 ; Kosiorek et al. , 2018 ; Greff et al. , 2019 ; Burgess et al. , 2019 ) has focused on learning models that discover objects in the visual scene . Greff et al . ( 2019 ) further advocates for the importance of learning to segment and represent objects jointly . Like us they approach the problem from a spatial mixture perspective . van Steenkiste et al . ( 2018 ) and Kosiorek et al . ( 2018 ) build upon Greff et al . ( 2017 ) and Eslami et al . ( 2016 ) respectively by incorporating next-step prediction as part of the training objective in order to guide the network to learn about essential properties of objects . As specified in van Steenkiste et al . ( 2019 ) we also believe that objects are task-dependent and that learning a slot-based representations along with sparse transitions bias the perception module towards entity-centric perceptual groupings and that those structured representations could be better suited for RL downstream tasks . Slot-based representation for RL . Recent advances in deep reinforcement learning are in part driven by a capacity to learn good representations that can be used by an agent to update its policy . Zambaldi et al . ( 2018 ) showed the importance of having structured representations and computation when it comes to tasks that explicitly targets relational reasoning . Watters et al . ( 2019 ) also show the importance of learning representations of the world in terms of objects in a simple model-based setting . Zambaldi et al . 
( 2018 ) focuses on task-dependent structured computation . They use a selfattention mechanism ( Vaswani et al. , 2017 ) to model an actor-critic based agent where vectors in the set are supposed to represent entities in the current observation . Like Watters et al . ( 2019 ) we take a model-based approach : our aim is to learn task-independent slot-based representations that can be further used in downstream tasks . We leave the RL part for future work and focus on how learning those representations jointly with a sparse transition model may help learn a better transition model . 3 SPECTRA . Our model is composed of two main components : a perception module and a transition module ( section 3.1 ) . The way we formulated the transition implicitly defines an exploration policy ( section 3.3 ) that aims at changing the states of as many entities as possible . Choice of Environment . Here we are interested in environments containing entities an agent can interact with and where actions only affect a few of them . Sokoban is thus a good testbed for our model . It consists of a difficult puzzle domain requiring an agent to push a set of boxes onto goal locations . Irreversible wrong moves can make the puzzle unsolvable . Each room is composed of walls , boxes , targets , floor and the agent avatar . The agent can take 9 different actions ( no-op , 4 types of push and 4 types of move ) . Fully Observed vs Learned Entities . The whole point is to work with slot-based representations learned from a raw pixels input . There is no guarantee that those learned slots will effectively correspond to entities in the image . We thus distinguish two versions of the environment ( that correspond to two different levels of abstraction ) : − Fully observed entities : the input is structured . Each entity corresponds to a spatial location in the grid . Entities are thus represented by their one-hot label and indexed by their x-y coordinate . This will be referred to as the fully observed setting . There is no need for a perception module in this setting . − Raw pixels input : the input is unstructured . We need to infer the latent entities representations . This will be referred to as the latent setting . 3.1 MODEL OVERVIEW . The idea is to learn an action-conditioned model of the world where at each time step the following take place : − Pairwise Interactions : Each slot in the set gathers relevant information about the slots conditioned on the action taken − Active entity selection : Select slots that will be modified by the action taken − Update : Update the selected slots and let the other ones remain unchanged . Ideally , slots would correspond to unsupervisedly learned entity-centric representations of a raw visual input like it is done by Burgess et al . ( 2019 ) ; Greff et al . ( 2019 ) . We show that learning such perception modules jointly with the sparse transition biases the perceptual groupings to be entity-centric . Perception module . The perception module is composed of an encoder fenc and a decoder fdec . The encoder maps the input image x to a set of K latent entities such that at time-step t we have fenc ( x t ) = st ∈ RK×p . It thus outputs a slot-based representation of the scene where each slot is represented in the same way and is supposed to capture properties of one entity of the scene . Like ( Burgess et al. , 2019 ; Greff et al. , 2019 ) we model the input image xt with a spatial Gaussian Mixture Model . 
Each slot s^t_k is decoded by the same decoder f_dec into a pixel-wise mean µ^t_{ik} and a pixel-wise assignment m^t_{ik} (non-negative and summing to 1 over k). Assuming that the pixels i are independent conditioned on s^t, the conditional likelihood becomes:
$$p_\theta(x^t \mid s^t) = \prod_{i=1}^{D} \sum_{k} m^t_{ik}\, \mathcal{N}\big(x^t_i;\ \mu^t_{ik}, \sigma^2\big) \quad \text{with} \quad \mu^t_{ik},\ m^t_{ik} = f_{dec}(s^t_k)_i .$$
As our main goal is to investigate how sparse transitions bias the grouping of entities, in our experiments we use a very simple perception module, represented in Figure 1. We leave it for future work to incorporate more sophisticated perception modules. Pairwise interactions. In order to estimate the transition dynamics, we want to select the relevant entities (represented at time t by the set s^t ∈ R^{K×p}) that will be affected by the action taken, so we model the fact that each entity needs to gather useful information from entities interacting with the agent (i.e., is the agent close? is the agent blocked by a wall or a box? etc.). To that end we propose to use a self-attention mechanism (Vaswani et al., 2017). From the k-th entity representation s^t_k at time t, we extract a row-vector key K^t_k, a row-vector query Q^t_k and a row-vector value V^t_k conditioned on the action taken, such that (aggregating the rows into corresponding matrices and ignoring the temporal indices):
$$\tilde{s} = \mathrm{softmax}\Big(\frac{K Q^{T}}{\sqrt{d}}\Big) V$$
where the softmax is applied separately on each row. In practice we concatenate the results of several attention heads and use them as input to the entity selection phase. Entity selection. Once the entities are informed w.r.t. possible pairwise interactions, the model needs to select which of these entities will be affected by the action taken a^t. The selection of entities is regulated by a selection gate (Hochreiter & Schmidhuber, 1997; Cho et al., 2014) computed slot-wise as:
$$f^t_k = \sigma\big(\mathrm{MLP}([\tilde{s}^t_k;\ a^t])\big) \qquad (1)$$
where f^t_k can be interpreted as the probability of an entity being selected. Update. Finally, each selected entity is updated conditioned on its state s^t_k at time-step t and the action taken a^t. We thus simply have:
$$s^{t+1}_k = f^t_k\, f_\theta([s^t_k, a^t]) + (1 - f^t_k)\, s^t_k$$
where f_θ is a learned action-conditioned transformation that is applied slot-wise. We posit that enforcing the transitions to be slot-wise and implicitly sparse biases the model towards learning more primitive transformations. We verify this assumption in the next subsection in the simpler case where the entities are fully observed (and not inferred with a perception module).
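To make the transition module concrete, here is a minimal PyTorch-style sketch of the three steps above — pairwise interactions via self-attention, gated entity selection, and the sparse slot-wise update. The layer sizes, the single attention head, and the module names are illustrative assumptions rather than the authors' exact architecture.

```python
import torch
import torch.nn as nn

class SparseTransition(nn.Module):
    """One SPECTRA-style transition step over a set of K slots (sketch)."""

    def __init__(self, slot_dim, action_dim, hidden=128):
        super().__init__()
        self.key = nn.Linear(slot_dim, hidden)
        self.query = nn.Linear(slot_dim, hidden)
        self.value = nn.Linear(slot_dim + action_dim, hidden)   # value conditioned on the action
        self.gate = nn.Sequential(nn.Linear(hidden + action_dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 1), nn.Sigmoid())
        self.f_theta = nn.Sequential(nn.Linear(slot_dim + action_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, slot_dim))

    def forward(self, slots, action):
        # slots: (K, slot_dim); action: (action_dim,) one-hot encoding of a^t
        a = action.expand(slots.size(0), -1)                    # broadcast the action to every slot
        K_mat, Q_mat = self.key(slots), self.query(slots)
        V_mat = self.value(torch.cat([slots, a], dim=-1))
        # Pairwise interactions: s~ = softmax(K Q^T / sqrt(d)) V with a row-wise softmax
        attn = torch.softmax(K_mat @ Q_mat.t() / K_mat.size(-1) ** 0.5, dim=-1)
        s_tilde = attn @ V_mat
        # Entity selection gate (Equation 1): probability that each slot is affected
        f = self.gate(torch.cat([s_tilde, a], dim=-1))
        # Sparse slot-wise update: selected slots are transformed, the rest stay unchanged
        update = self.f_theta(torch.cat([slots, a], dim=-1))
        return f * update + (1.0 - f) * slots
```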
This paper introduces a model that learns a slot-based representation, along with a transition model that predicts the evolution of these representations in a sparse fashion, all in a fully unsupervised way. This is done by leveraging a self-attention mechanism to decide which slots should be updated in a given transition, leaving the others untouched. The model encodes the scene slot-wise and is trained on single-step transitions.
SP:b2a151ab2ee385b50881be2865f6503902f2fcc9
SPECTRA: Sparse Entity-centric Transitions
This paper proposes to use a ‘slot-based’ (factored) representation of a ‘scene’ such that a forward model learned over observed transitions only requires sparse updates to the current representation. The results show that jointly learning the forward model and the scene representation encourages meaningful ‘entities’ to emerge in each slot. Additionally, the paper argues that this representation allows for better generalization and can also guide exploration by rewarding actions that change multiple entities.
SP:b2a151ab2ee385b50881be2865f6503902f2fcc9
Model-based reinforcement learning for biological sequence design
The ability to design biological structures such as DNA or proteins would have considerable medical and industrial impact . Doing so presents a challenging black-box optimization problem characterized by the large-batch , low round setting due to the need for labor-intensive wet lab evaluations . In response , we propose using reinforcement learning ( RL ) based on proximal-policy optimization ( PPO ) for biological sequence design . RL provides a flexible framework for optimization generative sequence models to achieve specific criteria , such as diversity among the high-quality sequences discovered . We propose a model-based variant of PPO , DyNA PPO , to improve sample efficiency , where the policy for a new round is trained offline using a simulator fit on functional measurements from prior rounds . To accommodate the growing number of observations across rounds , the simulator model is automatically selected at each round from a pool of diverse models of varying capacity . On the tasks of designing DNA transcription factor binding sites , designing antimicrobial proteins , and optimizing the energy of Ising models based on protein structure , we find that DyNA PPO performs significantly better than existing methods in settings in which modeling is feasible , while still not performing worse in situations in which a reliable model can not be learned . 1 INTRODUCTION . Driven by real-world obstacles in health and disease requiring new drugs , treatments , and assays , the goal of biological sequence design is to identify new discrete sequences x which optimize some oracle , typically an experimentally-measured functional property f ( x ) . This is a difficult black-box optimization problem over a combinatorially large search space in which function evaluation relies on slow and expensive wet-lab experiments . The setting induces unusual constraints in black-box optimization and reinforcement learning : large synchronous batches with few rounds total . The current gold standard for biomolecular design is directed evolution , which was recently recognized with a Nobel prize ( Arnold , 1998 ) and is a form of randomized local search . Despite its impact , directed evolution is sample inefficient and relies on greedy hillclimbing to the optimal sequences . Recent work has demonstrated that machine-learning-guided optimization ( Section 3 ) can find better sequences faster . ∗Work done as an intern at Google . Reinforcement learning ( RL ) provides a flexible framework for black-box optimization that can harness modern deep generative sequence models . This paper proposes a simple method for improving the sample efficiency of policy gradient methods such as PPO ( Schulman et al. , 2017 ) for black-box optimization by using surrogate models that are trained online to approximate f ( x ) . Our method updates the policy ’ s parameters using sequences x generated by the current policy πθ ( x ) , but evaluated using a learned surrogate f ′ ( x ) , instead of the true , but unknown , oracle reward function f ( x ) . We learn the parameters of the reward model , w , simultaneously with the parameters of the policy . This is similar to other model-based RL methods , but simpler , since in the context of sequence optimization , the state-transition model is deterministic and known . Initially the learned reward model , f ′ ( x ) , is unreliable , so we rely entirely on f ( x ) to assess sequences and update the policy . This allows a graceful fallback to PPO when the model is not effective . 
Over time , the reward model becomes more reliable and can be used as a cheap surrogate , similar to Bayesian optimization methods ( Shahriari et al. , 2015 ) . We show empirically that cross-validation is an effective heuristic for assessing the model quality , which is simpler than the inference required by Bayesian optimization . We rigorously evaluate our method on three in-silico sequence design tasks that draw on experimental data to construct functions f ( x ) characteristic of real-world design problems : optimizing binding affinity of DNA sequences of length 8 ( search space size 48 ) ; optimizing anti-microbial peptide sequences ( search space size 2050 ) , and optimizing binary sequences where f ( x ) is defined by the energy of an Ising model for protein structure ( search space size 2050 ) . These do not rely on wet lab experiments , and thus allow for large-scale benchmarking across a range of methods . We show that our DyNA PPO method achieves higher cumulative reward for a given budget ( measured in terms of number of calls to f ( x ) ) than existing methods , such as standard PPO , various forms of the cross-entropy method , Bayesian optimization , and evolutionary search . In summary , our contributions are as follows : • We provide a model-based RL algorthm , DyNA PPO , and demonstrate its effectiveness in performing sample efficient batched black-box function optimization . • We address model bias by quantifying the reliability and automatically selecting models of appropriate complexity via cross-validation . • We propose a visitation-based exploration bonus and show that it is more effective than entropy-regularization in identifying multiple local optima . • We present a new optimization task for benchmarking methods for biological sequence design based on protein energy Ising models . 2 METHODS . Let f ( x ) be the function that we want to optimize and x ∈ V T a sequence of length T over a vocabulary V such as DNA nucleotides ( |V | = 4 ) or amino acids ( |V | = 20 ) . We assume N experimental rounds and that B sequences can be measured per round . Let Dn = { ( x , f ( x ) ) } be the data acquired in round n with |Dn| = B . For simplicity , we assume that the sequence length T is constant , but our approach based on generating sequences autoregressively easily generalizes to variable-length sequences . 2.1 MARKOV DECISION PROCESS . We formulate the design of a single sequence x as a Markov decision process M = ( S , A , p , r ) with state space S , action space A , transition function p , and reward function r. The state space S = ∪t=1 ... TV t is the set of all possible sequence prefixes and A corresponds to the vocabulary V . A sequence is generated left to right . At time step t , the state st = a0 , ... , at−1 corresponds to the t last tokens and the action at ∈ A to the next token . The transition function p ( st + 1|st ) = stat is deterministic and corresponds to appending at to st . The reward r ( st , at ) is zero except at the last step T , where it corresponds to the functional measurement f ( sT−1 ) . For generating variable-length sequences , we extend the vocabulary by a special end-of-sequence token and terminate sequence generation when this token is selected . Algorithm 1 : DyNA PPO 1 : Input : Number of experiment rounds N 2 : Input : Number of model-based training rounds M 3 : Input : Set of candidate models S = { f ′ } 4 : Input : Minimum model score τ for model-based training 5 : Input : Policy πθ with initial parameters θ 6 : for n = 1 , 2 , ... 
N do 7 : Collect samples Dn = { x , f ( x ) } using policy πθ 8 : Train policy πθ on Dn 9 : Fit candidate models f ′ ∈ S on ⋃n i=1Di and compute their score by cross-validation 10 : Select the subset of models S′ ⊆ S with a score ≥ τ 11 : if S ′ 6= ∅ then 12 : for m = 1 , 2 , ... M do 13 : Sample a batch of sequences x from πθ and observe the reward f ′′ ( x ) = 1|S′| ∑ f ′∈S′ f ′ ( x ) 14 : Update πθ on { x , f ′′ ( x ) } 15 : end for 16 : end if 17 : end for 2.2 POLICY OPTIMIZATION . We train a policy πθ ( at|st ) to optimize the expected sum of rewards : E [ R ( s1 : t ) |s0 , θ ] = ∑ st ∑ at πθ ( at|st ) r ( st , at ) . ( 1 ) We use proximal policy optimization ( PPO ) with KL trust-region constraint ( Schulman et al. , 2017 ) , which we have found to be more stable and sample efficient than REINFORCE ( Williams , 1992 ) . We have also considered off-policy deep Q-learning ( DQN ) ( Mnih et al. , 2015 ) , and categorical distributional deep Q-learning ( CatDQN ) ( Bellemare et al. , 2017 ) , which are in principle more sampleefficient than on-policy learning using PPO since they can reuse samples multiple times . However , they performed worse than PPO in our experiments ( Appendix C ) . We implement algorithms using the TF-Agents RL library ( Guadarrama et al. , 2018 ) . We employ autoregressive models with one fully-connected layer as policy and value networks since they are faster to train and outperformed recurrent networks in our experiments . At time step t , the network takes as input the W last characters at−W , ... , at−1 that are one-hot encoded , where the context window size W is a hyper-parameter . To provide the network with information about the current position of the context window , it also receives the time step t , which is embedded using a sinusoidal positional encoding ( Vaswani et al. , 2017 ) , and concatenated with the one-hot characters . The policy network outputs a distribution πθ ( at|st ) over next the token at . The value network V ( st ) , which approximates the expected future reward for being in state st , is used as a baseline to reduce the variance of stochastic estimates of equation 1 ( Schulman et al. , 2017 ) . 2.3 MODEL-BASED POLICY OPTIMIZATION . Model-based RL learns a model of the environment that is used as a simulator to provide additional pseudo-observations . While model-free RL has been successful in domains where interaction with the environment is cheap , such as those where the environment is defined by a software program , its high sample complexity may be unrealistic for biological sequence design . In model-based RL , the MDP M = ( S , A , p , r ) is approximated by a model M′ = ( S , A , p′ , r′ ) with the same state space S and action space A asM ( Sutton & Barto , 2018 , Ch . 8 ) . Since the transition function p is deterministic in our case , only the reward function r ( st , at ) needs to be approximated by r′ ( st , at ) . Since r ( sT , aT ) is non-zero at the last step T and then corresponds to f ( x ) with x == sT−1 , the problem reduces to approximating f ( x ) . This can be done by supervised regression by fitting a regressor f ′ ( x ) on the data ∪n′ < =nDn′ collected so far . We then use the resulting model to collect additional observations ( x , f ′ ( x ) ) and update the policy in a simulation phase , instead of only using observations ( x , f ( x ) ) from the the true environment , which are expensive to collect . 
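The loop in Algorithm 1 can be summarized in a few lines of Python. The sketch below is only a schematic of the control flow; the policy and model interfaces (sample_sequence, ppo_update, fit_and_cv_score, predict) are hypothetical names introduced here for illustration and are not part of the paper or of TF-Agents.

```python
def dyna_ppo(policy, oracle_f, candidate_models, n_rounds, m_rounds,
             batch_size, tau=0.5):
    """Sketch of Algorithm 1 (DyNA PPO). `oracle_f` is the expensive measurement
    f(x); `candidate_models` are surrogate regressors f'(x) of varying capacity."""
    data = []
    for _ in range(n_rounds):
        # Experiment round: propose sequences and evaluate them with the true oracle f(x).
        xs = [policy.sample_sequence() for _ in range(batch_size)]
        batch = [(x, oracle_f(x)) for x in xs]
        data.extend(batch)
        policy.ppo_update(batch)

        # Fit every candidate surrogate on all data so far; score by cross-validation.
        scored = [(model.fit_and_cv_score(data), model) for model in candidate_models]
        good = [model for r2, model in scored if r2 >= tau]
        if not good:
            continue  # no trustworthy surrogate: fall back to plain PPO this round

        # Model-based rounds: cheap policy updates against the surrogate ensemble f''(x).
        for _ in range(m_rounds):
            xs = [policy.sample_sequence() for _ in range(batch_size)]
            sim_batch = [(x, sum(m.predict(x) for m in good) / len(good)) for x in xs]
            policy.ppo_update(sim_batch)
    return policy
```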
We call our method DyNA PPO since it is similar to the DYNA architecture ( Sutton ( 1991 ) ; Peng et al . ( 2018 ) ) and since can be used for DNA sequence design . Model-based RL provides the promise of improved sample efficiency when the model is accurate , but it can reduce performance if insufficient data are available for training a trustworthy model . In this case , the policy is prone to exploit regions where the model is inaccurate ( Janner et al. , 2019 ) . To reap the benefit of model-based RL when the model is accurate and avoid reduced performance when it is not , we ( i ) automatically select the model from a set of candidate models of varying complexity , ( ii ) only use the selected model if it is accurate , and iii ) stop model-based training as soon the the model uncertainty increases by a certain threshold . After each round of experiment , we fit a set of candidate models on all available data to estimate f ( x ) via supervised regression . We quantify the accuracy of each candidate model by the R2 score , which we estimate by five-fold cross-validation . See Appendix G for a discussion of different data splitting strategies to select models using crossvalidation . If the R2 score of all candidate model is below a pre-specified threshold τ , we do not perform model-based training in that round . Otherwise , we build an ensemble model that includes all models with a score greater or equal than τ , and use the average prediction as reward for training the policy . We considered τ as a tunable hyper-parameter , were we found τ = 0.5 to be optimal for all problems ( see Figure 14 . By ignoring the model if it is inaccurate , we aim to prevent the policy from exploiting deficiencies of the model ( Janner et al. , 2019 ) . We perform up to M model-based optimization rounds ( see Algorithm 1 ) and stop as soon as the model uncertainty increased by a certain factor relative to the model uncertainty at the first round ( m = 1 ) . This is motivated by our observation that the model uncertainty is strongly correlated with the unknown model error , and prevents from training the policy with inaccurate model predictions ( see Figure 12 , 13 ) as soon as the model starts to explore regions on which the model was not trained on . For models , we consider nearest neighbor regression , Bayesian ridge regression , random forests , gradient boosting trees , Gaussian processes , and ensemble of deep neural networks . Within each model family , we additionally use cross-validation for tuning hyper-parameters , such as the number of trees , tree depth , kernels and kernel parameters , or the number of hidden layers and units ( see Appendix A.7 for details ) . By testing and optimizing the hyper-parameters of different models automatically , the model capacity can dynamically increase as data becomes available . In Bayesian optimization , non-parametric models such as Gaussian processes are popular regressors , and they also automatically grow model capacity as more data arrives ( Shahriari et al. , 2015 ) . However , with Bayesian optimization there is no opportunity to ignore the regressor entirely if it is unreliable . Furthermore , Bayesian optimization relies on performing ( approximate ) Bayesian inference , which in practice is sensitive to the choice of hyper-parameter ( Snoek et al. , 2012 ) . Overall , our method combines the positive attributes of both generative and discriminative approaches to sequence design . 
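The cross-validated model selection described above can be illustrated with scikit-learn. The sketch below scores each candidate regressor by five-fold cross-validated R² and averages the predictions of the models scoring at least τ = 0.5; the estimators listed are representative of the families the authors mention, while their per-family hyper-parameter tuning and the deep-ensemble candidate are omitted for brevity.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsRegressor
from sklearn.linear_model import BayesianRidge
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.gaussian_process import GaussianProcessRegressor

def build_surrogate(X, y, tau=0.5):
    """Select surrogates by 5-fold cross-validated R^2 and return an ensemble
    predictor over the models scoring at least tau (simplified sketch)."""
    candidates = [KNeighborsRegressor(), BayesianRidge(),
                  RandomForestRegressor(), GradientBoostingRegressor(),
                  GaussianProcessRegressor()]
    selected = []
    for model in candidates:
        r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
        if r2 >= tau:
            selected.append(model.fit(X, y))
    if not selected:
        return None  # no reliable surrogate: skip model-based training this round

    def predict(X_new):
        # Average the predictions of all models above the reliability threshold.
        return np.mean([m.predict(X_new) for m in selected], axis=0)
    return predict
```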
Our experiments do not compare to prior work on model-based RL , since these methods primarily focus on estimating a dynamics model for state transitions .
This paper applies a model-based RL algorithm, DyNA PPO, to designing biological sequences. By being model-based, the algorithm is more sample efficient than model-free RL algorithms. This advantage is attractive and important in the context of biological sequence design, since the design process is constrained to the large-batch / low-round setting. To further improve sample efficiency, the authors reduce model bias by quantifying model reliability and automatically selecting models of appropriate complexity via cross-validation. To encourage diversity among the generated sequences, they also shape the reward with a visitation-based exploration bonus.
SP:3d76cac4f6c4d3bb1003b739801a4981c0db00b8
Model-based reinforcement learning for biological sequence design
In this work the authors propose a framework for combinatorial optimisation problems in settings where measurements are expensive. The basic idea is to approximate the reward function and then train the policy in a simulated environment based on the approximated reward. The approach is demonstrated on a set of biological sequence-design tasks, where the model performs well compared to the baselines.
SP:3d76cac4f6c4d3bb1003b739801a4981c0db00b8
Evolutionary Reinforcement Learning for Sample-Efficient Multiagent Coordination
1 INTRODUCTION . Cooperative multiagent reinforcement learning ( MARL ) studies how multiple agents can learn to coordinate as a team toward maximizing a global objective . Cooperative MARL has been applied to many real-world applications such as air traffic control ( Tumer and Agogino , 2007 ) , multi-robot coordination ( Sheng et al. , 2006 ; Yliniemi et al. , 2014 ) , communication and language ( Lazaridou et al. , 2016 ; Mordatch and Abbeel , 2018 ) , and autonomous driving ( Shalev-Shwartz et al. , 2016 ) . Many such environments endow agents with a team reward that reflects the team ’ s coordination objective , as well as an agent-specific local reward that rewards basic skills . For instance , in soccer , dense local rewards could capture agent-specific skills such as passing , dribbling and running . The agents must then coordinate when and where to use these skills in order to optimize the team objective , which is winning the game . Usually , the agent-specific reward is dense and easy to learn from , while the team reward is sparse and requires the cooperation of all or most agents . Having each agent directly optimize the team reward and ignore the agent-specific reward usually fails or is sample-inefficient for complex tasks due to the sparsity of the team reward . Conversely , having each agent directly optimize the agent-specific reward also fails because it does not capture the team ’ s objective , even with state-of-the-art multiagent RL algorithms such as MADDPG ( Lowe et al. , 2017 ) . One solution to this problem is to use reward shaping , where extensive domain knowledge about the task is used to create a proxy reward function ( Rahmattalabi et al. , 2016 ) . Constructing this proxy reward function is difficult in complex environments , and is domain-dependent . Apart from requiring domain knowledge and manual tuning , this approach also poses risks of changing the underlying problem itself ( Ng et al. , 1999 ) . Simple approaches to creating a proxy reward via linear combinations of the two objectives also fail to solve or generalize to complex coordination tasks ( Devlin et al. , 2011 ; Williamson et al. , 2009 ) . In this paper , we introduce Multiagent Evolutionary Reinforcement Learning ( MERL ) , a state-of-the-art algorithm for cooperative MARL that does not require reward shaping . MERL is a split-level training platform that combines gradient-based and gradient-free optimization . The gradient-free optimizer is an evolutionary algorithm that maximizes the team objective through neuroevolution . The gradient-based optimizer is a policy gradient algorithm that maximizes each agent ’ s dense , local rewards . These gradient-based policies are periodically copied into the evolutionary population . The two processes operate concurrently and share information through a shared replay buffer . A key strength of MERL is that it is a general method which does not require domain-specific reward shaping . This is because MERL optimizes the team objective directly while simultaneously leveraging agent-specific rewards to learn basic skills . We test MERL in a number of multiagent coordination benchmarks . Results demonstrate that MERL significantly outperforms state-of-the-art methods such as MADDPG , while using the same observations and reward functions . We also demonstrate that MERL scales gracefully to increasing complexity of coordination objectives where MADDPG and its variants fail to learn entirely . 2 BACKGROUND AND RELATED WORK .
Markov Games : A standard reinforcement learning ( RL ) setting is often formalized as a Markov Decision Process ( MDP ) and consists of an agent interacting with an environment over a finite number of discrete time steps . This formulation can be extended to multiagent systems in the form of partially observable Markov games ( Littman , 1994 ; Lowe et al. , 2017 ) . An N -agent Markov game is defined by a global state of the world , S , and a set of N observations { Oi } and N actions { Ai } corresponding to the N agents . At each time step t , each agent observes its corresponding observation Oti and maps it to an action Ati using its policy πi . Each agent receives a scalar reward rti based on the global state St and the joint action of the team . The world then transitions to the next state St+1 , which produces a new set of observations { Oi } . The process continues until a terminal state is reached . Ri = ∑Tt=0 γt rti is the total return for agent i with discount factor γ ∈ ( 0 , 1 ] . Each agent aims to maximize its expected return . TD3 : Policy gradient ( PG ) methods frame the goal of maximizing the expected return as the minimization of a loss function . A widely used PG method for continuous , high-dimensional action spaces is DDPG ( Lillicrap et al. , 2015 ) . Recently , Fujimoto et al . ( 2018 ) extended DDPG to Twin Delayed DDPG ( TD3 ) , addressing its well-known overestimation problem . TD3 is the state-of-the-art , off-policy algorithm for model-free DRL in continuous action spaces . TD3 uses an actor-critic architecture ( Sutton and Barto , 1998 ) maintaining a deterministic policy ( actor ) π : S → A , and two distinct critics Q : S × A → R . Each critic independently approximates the actor ’ s action-value function Qπ . A separate copy of the actor and critics are kept as target networks for stability and are updated periodically . A noisy version of the actor is used to explore the environment during training . The actor is trained using a noisy version of the sampled policy gradient computed by backpropagation through the combined actor-critic networks . This mitigates overfitting of the deterministic policy by smoothing the policy gradient updates . Evolutionary Reinforcement Learning ( ERL ) is a hybrid algorithm that combines Evolutionary Algorithms ( EAs ) ( Floreano et al. , 2008 ; Lüders et al. , 2017 ; Fogel , 2006 ; Spears et al. , 1993 ) with policy gradient methods ( Khadka and Tumer , 2018 ) . Instead of discarding the data generated during a standard EA rollout , ERL stores this data in a central replay buffer shared with the policy gradient ’ s own rollouts , thereby increasing the diversity of the data available for the policy gradient learners . Since the EA directly optimizes for episode-wide return , it biases exploration towards states with higher long-term returns . The policy gradient algorithm which learns using this state distribution inherits this implicit bias towards long-term optimization . Concurrently , the actor trained by the policy gradient algorithm is inserted into the evolutionary population , allowing the EA to benefit from the fast gradient-based learning . Related Work : Lowe et al . ( 2017 ) introduced MADDPG , which tackled the inherent non-stationarity of a multiagent learning environment by leveraging a critic which had full access to the joint state and action during training . Foerster et al . ( 2018b ) utilized a similar setup with a centralized critic across agents to tackle StarCraft micromanagement tasks .
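As a concrete illustration of the TD3 target described above, here is a minimal sketch under the assumption that the target actor and the two target critics are PyTorch modules; the noise constants are illustrative placeholders, not values from the paper.

```python
import torch

def td3_target(reward, next_obs, done, actor_target, critic1_target, critic2_target,
               gamma=0.99, noise_std=0.2, noise_clip=0.5):
    # Bootstrapped target using the minimum of the two target critics, TD3's
    # mechanism for curbing value overestimation.
    with torch.no_grad():
        next_action = actor_target(next_obs)
        # Target-policy smoothing: clipped Gaussian noise on the target action.
        noise = (torch.randn_like(next_action) * noise_std).clamp(-noise_clip, noise_clip)
        next_action = next_action + noise
        q_next = torch.min(critic1_target(next_obs, next_action),
                           critic2_target(next_obs, next_action))
        return reward + gamma * (1.0 - done) * q_next
```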
An algorithm that could explicitly model other agents ’ learning was investigated in Foerster et al . ( 2018a ) . However , all these approaches rely on a dense agent reward that properly captures the team objective . Methods to solve for these types of agent-specific reward functions were investigated in Li et al . ( 2012 ) but were limited to tasks with strong simulators where tree-based planning could be used . A closely related work to MERL is ( Liu et al. , 2019 ) where Population-Based Training ( PBT ) ( Jaderberg et al. , 2017 ) is used to optimize the relative importance between a collection of dense , shaped rewards automatically during training . This can be interpreted as a singular central reward function constructed by scalarizing a collection of reward signals where the scalarization coefficients are adaptively learned during training . In contrast , MERL optimizes its reward functions independently with information transfer across them facilitated through shared replay buffers and policy migration directly . This form of information transfer through a shared replay buffer has been explored extensively in recent literature ( Colas et al. , 2018 ; Khadka et al. , 2019 ) . 3 MULTIAGENT EVOLUTIONARY REINFORCEMENT LEARNING . MERL leverages both agent-specific and team objectives through a hybrid algorithm that combines gradient-free and gradient-based optimization . The gradient-free optimizer is an evolutionary algorithm that maximizes the team objective through neuroevolution . The gradient-based optimizer trains policies to maximize agent-specific rewards . These gradient-based policies are periodically added to the evolutionary population and participate in evolution . This enables the evolutionary algorithm to use agent-specific skills learned by training on the agent-specific rewards toward optimizing the team objective without needing to resort to reward shaping . 
Algorithm 1 Multiagent Evolutionary Reinforcement Learning 1 : Initialize a population of k multi-head teams popπ , each with weights θπ initialized randomly 2 : Initialize a shared critic Q with weights θQ 3 : Initialize an ensemble of N empty cyclic replay buffers Rk , one for each agent 4 : Define a white Gaussian noise generator Wg and a random number generator r ( ) ∈ [ 0 , 1 ) 5 : for generation = 1 , ∞ do 6 : for team π ∈ popπ do 7 : g , R = Rollout ( π , R , noise=None , ξ ) 8 : _ , R = Rollout ( π , R , noise=Wg , ξ = 1 ) 9 : Assign g as π ’ s fitness 10 : end for 11 : Rank the population popπ based on fitness scores 12 : Select the first e teams π ∈ popπ as elites 13 : Select the remaining ( k − e ) teams π from popπ , to form set S using tournament selection 14 : while |S| < ( k − e ) do 15 : Single-point crossover between a randomly sampled π ∈ e and π ∈ S and append to S 16 : end while 17 : for agent k = 1 , ... , N do 18 : Randomly sample a minibatch of T transitions ( oi , ai , li , oi+1 ) from Rk 19 : Compute yi = li + γ minj=1,2 Q′j ( oi+1 , ã | θQ′j ) 20 : where ã = π′pg ( k , oi+1 | θπ′pg ) [ action sampled from the kth head of π′pg ] + 21 : Update Q by minimizing the loss : L = ( 1/T ) ∑i ( yi − Q ( oi , ai|θQ ) )2 22 : Update πkpg using the sampled policy gradient ∇θπpg J ≈ ( 1/T ) ∑ ∇a Q ( o , a|θQ ) |o=oi , a=ai ∇θπpg πkpg ( s|θπpg ) |o=oi 23 : Soft update target networks : θπ′ ⇐ τθπ + ( 1 − τ ) θπ′ and θQ′ ⇐ τθQ + ( 1 − τ ) θQ′ 24 : end for 25 : Migrate the policy gradient team popj : for weakest π ∈ popjπ : θπ ⇐ θπpg 26 : end for Policy Topology : We represent our multiagent ( team ) policies using a multi-headed neural network π as illustrated in Figure 1 . The head πk represents the k-th agent in the team . Given an incoming observation for agent k , only the output of πk is considered as agent k ’ s response . In essence , all agents act independently based on their own observations while sharing weights ( and by extension , the features ) in the lower layers ( trunk ) . This is commonly used to improve learning speed ( Silver et al. , 2017 ) . Further , each agent k also has its own replay buffer ( Rk ) which stores its experience defined by the tuple ( state , action , next state , local reward ) for each interaction with the environment ( rollout ) involving that agent . Team Reward Optimization : Figure 2 illustrates the MERL algorithm . A population of multi-headed teams , each with the same topology , is initialized with random weights . The replay buffer Rk is shared by the k-th agent across all teams . The population is then evaluated for each rollout . The team reward for each team is disbursed at the end of the episode and is considered as its fitness score . A selection operator selects a portion of the population for survival with probability proportionate to their fitness scores . The weights of the teams in the population are probabilistically perturbed through mutation and crossover operators to create the next generation of teams . A portion of the teams with the highest relative fitness are preserved as elites . At any given time , the team with the highest fitness , or the champion , represents the best solution for the task . Policy Gradient : The procedure described so far resembles a standard EA except that each agent k stores each of its experiences in its associated replay buffer ( Rk ) instead of just discarding it .
However , unlike EA , which only learns based on the low-fidelity global reward , MERL also learns from the experiences within episodes of a rollout using policy gradients . To enable this kind of `` local learning '' , MERL initializes one multi-headed policy network πpg and one critic Q . A noisy version of πpg is then used to conduct its own set of rollouts in the environment , storing each agent k ’ s experiences in its corresponding buffer ( Rk ) similar to the evolutionary rollouts . Agent-Specific Reward Optimization : Crucially , each agent ’ s replay buffer is kept separate from that of every other agent to ensure diversity amongst the agents . The shared critic samples a random mini-batch uniformly from each replay buffer and uses it to update its parameters using gradient descent . Each agent πkpg then draws a mini-batch of experiences from its corresponding buffer ( Rk ) and uses it to sample a policy gradient from the shared critic . Unlike the teams in the evolutionary population which directly seek to optimize the team reward , πpg seeks to maximize the agent-specific local reward while exploiting the experiences collected via evolution . Skill Migration : Periodically , the πpg network is copied into the evolving population of teams and can propagate its features by participating in evolution . This is the core mechanism that combines policies learned via agent-specific and team rewards . Regardless of whether the two rewards are aligned , evolution ensures that only the performant derivatives of the migrated network are retained . This mechanism guarantees protection against destructive interference commonly seen when a direct scalarization between two reward functions is attempted . Further , the level of information exchange is automatically adjusted during the process of learning , in contrast to being manually tuned by an expert designer . Algorithm 1 provides a detailed pseudo-code of the MERL algorithm . The choice of hyperparameters is explained in the Appendix . Additionally , our source code 1 is available online . 1https : //tinyurl.com/y6erclts
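To illustrate the policy topology and skill-migration step described above, here is a minimal sketch (ours, with placeholder sizes, not the authors' released code) of a multi-headed team policy with a shared trunk, and of copying the policy-gradient team over the weakest member of the evolutionary population.

```python
import torch
import torch.nn as nn

class TeamPolicy(nn.Module):
    # Shared trunk with one output head per agent; agent k's action is read from head k.
    def __init__(self, obs_dim, act_dim, n_agents, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, hidden), nn.ReLU())
        self.heads = nn.ModuleList(
            [nn.Sequential(nn.Linear(hidden, act_dim), nn.Tanh()) for _ in range(n_agents)])

    def forward(self, obs, agent_idx):
        return self.heads[agent_idx](self.trunk(obs))

def migrate(population, fitness, pg_team):
    # Skill migration: overwrite the weakest team in the evolutionary population
    # with the weights of the policy-gradient team, which then takes part in evolution.
    weakest = min(range(len(population)), key=lambda i: fitness[i])
    population[weakest].load_state_dict(pg_team.state_dict())
```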
This paper proposes an algorithm to learn coordination strategies for multi-agent reinforcement learning. It combines gradient-based optimization (actor-critic) with neuroevolution (in the style of genetic algorithms). Specifically, actor-critic is used to train an ensemble of agents (referred to as a “team”) using a manually designed agent-specific reward. Coordination within a team is then learned with neuroevolution. The overall design accommodates sharing of data between actor-critic and neuroevolution, and migration of policies. Evaluation is done using the multi-particle environments (Lowe et al., 2017) and a Rover domain task.
SP:b1f2e7dee0606c25926a81ac32462c8bd2cb4808
Evolutionary Reinforcement Learning for Sample-Efficient Multiagent Coordination
This paper proposes to use a two-level optimization process to solve the challenge of optimizing the team reward and the agent's reward simultaneously, which are often not aligned. It applies the evolutionary algorithm to optimize the sparse team reward, while using RL (TD3) to optimize the agent's dense reward. In this way, there is no need to combine these two rewards into a scalar that often requires extensive manual tuning.
SP:b1f2e7dee0606c25926a81ac32462c8bd2cb4808
Never Give Up: Learning Directed Exploration Strategies
1 INTRODUCTION . The problem of exploration remains one of the major challenges in deep reinforcement learning . In general , methods that guarantee finding an optimal policy require the number of visits to each state–action pair to approach infinity . Strategies that become greedy after a finite number of steps may never learn to act optimally ; they may converge prematurely to suboptimal policies , and never gather the data they need to learn . Ensuring that all state-action pairs are encountered infinitely often is the general problem of maintaining exploration ( François-Lavet et al. , 2018 ; Sutton & Barto , 2018 ) . The simplest approach for tackling this problem is to consider stochastic policies with a non-zero probability of selecting all actions in each state , e.g . -greedy or Boltzmann exploration . While these techniques will eventually learn the optimal policy in the tabular setting , they are very inefficient and the steps they require grow exponentially with the size of the state space . Despite these shortcomings , they can perform remarkably well in dense reward scenarios ( Mnih et al. , 2015 ) . In sparse reward settings , however , they can completely fail to learn , as temporally-extended exploration ( also called deep exploration ) is crucial to even find the very few rewarding states ( Osband et al. , 2016 ) . ∗Equal contribution . Recent approaches have proposed to provide intrinsic rewards to agents to drive exploration , with a focus on demonstrating performance in non-tabular settings . These intrinsic rewards are proportional to some notion of saliency quantifying how different the current state is from those already visited ( Bellemare et al. , 2016 ; Haber et al. , 2018 ; Houthooft et al. , 2016 ; Oh et al. , 2015 ; Ostrovski et al. , 2017 ; Pathak et al. , 2017 ; Stadie et al. , 2015 ) . As the agent explores the environment and becomes familiar with it , the exploration bonus disappears and learning is only driven by extrinsic rewards . This is a sensible idea as the goal is to maximise the expected sum of extrinsic rewards . While very good results have been achieved on some very hard exploration tasks , these algorithms face a fundamental limitation : after the novelty of a state has vanished , the agent is not encouraged to visit it again , regardless of the downstream learning opportunities it might allow ( Bellemare et al. , 2016 ; Ecoffet et al. , 2019 ; Stanton & Clune , 2018 ) . Other methods estimate predictive forward models ( Haber et al. , 2018 ; Houthooft et al. , 2016 ; Oh et al. , 2015 ; Pathak et al. , 2017 ; Stadie et al. , 2015 ) and use the prediction error as the intrinsic motivation . Explicitly building models like this , particularly from observations , is expensive , error prone , and can be difficult to generalize to arbitrary environments . In the absence of the novelty signal , these algorithms reduce to undirected exploration schemes , maintaining exploration in a non-scalable way . To overcome this problem , a careful calibration between the speed of the learning algorithm and that of the vanishing rewards is required ( Ecoffet et al. , 2019 ; Ostrovski et al. , 2017 ) . 
The main idea of our proposed approach is to jointly learn separate exploration and exploitation policies derived from the same network , in such a way that the exploitative policy can concentrate on maximising the extrinsic reward ( solving the task at hand ) while the exploratory ones can maintain exploration without eventually reducing to an undirected policy . We propose to jointly learn a family of policies , parametrised using the UVFA framework ( Schaul et al. , 2015a ) , with various degrees of exploratory behaviour . The learning of the exploratory policies can be thought of as a set of auxiliary tasks that can help build a shared architecture that continues to develop even in the absence of extrinsic rewards ( Jaderberg et al. , 2016 ) . We use reinforcement learning to approximate the optimal value function corresponding to several different weightings of intrinsic rewards . We propose an intrinsic reward that combines per-episode and life-long novelty to explicitly encourage the agent to repeatedly visit all controllable states in the environment over an episode . Episodic novelty encourages an agent to periodically revisit familiar ( but potentially not fully explored ) states over several episodes , but not within the same episode . Life-long novelty gradually down-modulates states that become progressively more familiar across many episodes . Our episodic novelty uses an episodic memory filled with all previously visited states , encoded using the self-supervised objective of Pathak et al . ( 2017 ) to avoid uncontrollable parts of the state space . Episodic novelty is then defined as similarity of the current state to previously stored states . This allows the episodic novelty to rapidly adapt within an episode : every observation made by the agent potentially changes the per-episode novelty significantly . Our life-long novelty multiplicatively modulates the episodic similarity signal and is driven by a Random Network Distillation error ( Burda et al. , 2018b ) . In contrast to the episodic novelty , the life-long novelty changes slowly , relying upon gradient descent optimisation ( as opposed to an episodic memory write for episodic novelty ) . Thus , this combined notion of novelty is able to generalize in complex tasks with large , high dimensional state spaces in which a given state is never observed twice , and maintain consistent exploration both within an episode and across episodes . This paper makes the following contributions : ( i ) defining an exploration bonus combining life-long and episodic novelty to learn exploratory strategies that can maintain exploration throughout the agent ’ s training process ( to never give up ) , ( ii ) to learn a family of policies that separate exploration and exploitation using a conditional architecture with shared weights , ( iii ) experimental evidence that the proposed method is scalable and performs on par or better than state-of-the-art methods on hard exploration tasks . Our work differs from Savinov et al . ( 2018 ) in that it is not specialised to navigation tasks , our method incorporates a long-term intrinsic reward and is able to separate exploration and exploitation policies . Unlike Stanton & Clune ( 2018 ) , our work relies on no privileged information and combines both episodic and non-episodic novelty , obtaining superior results . Our work differs from Beyer et al . 
( 2019 ) in that we learn multiple policies by sharing weights , rather than just a common replay buffer , and our method does not require exact counts and so can scale to more realistic domains such as Atari . The paper is organized as follows . In Section 2 we describe the proposed intrinsic reward . In Section 3 , we describe the proposed agent and general framework . In Section 4 we present experimental evaluation . 2 THE NEVER-GIVE-UP INTRINSIC REWARD . We follow the literature on curiosity-driven exploration , where the extrinsic reward is augmented with an intrinsic reward ( or exploration bonus ) . The augmented reward at time t is then defined as rt = r e t + βr i t , where r e t and r i t are respectively the extrinsic and intrinsic rewards , and β is a positive scalar weighting the relevance of the latter . Deep RL agents are typically trained on the augmented reward rt , while performance is measured on extrinsic reward ret only . This section describes the proposed intrinsic reward rit . Our intrinsic reward rit satisfies three properties : ( i ) it rapidly discourages revisiting the same state within the same episode , ( ii ) it slowly discourages visits to states visited many times across episodes , ( iii ) the notion of state ignores aspects of an environment that are not influenced by an agent ’ s actions . We begin by providing a general overview of the computation of the proposed intrinsic reward . Then we provide the details of each one of the components . The reward is composed of two blocks : an episodic novelty module and an ( optional ) life-long novelty module , represented in red and green respectively in Fig . 1 ( right ) . The episodic novelty module computes our episodic intrinsic reward and is composed of an episodic memory , M , and an embedding function f , mapping the current observation to a learned representation that we refer to as controllable state . At the beginning of each episode , the episodic memory starts completely empty . At every step , the agent computes an episodic intrinsic reward , repisodict , and appends the controllable state corresponding to the current observation to the memory M . To determine the bonus , the current observation is compared to the content of the episodic memory . Larger differences produce larger episodic intrinsic rewards . The episodic intrinsic reward repisodict promotes the agent to visit as many different states as possible within a single episode . This means that the notion of novelty ignores inter-episode interactions : a state that has been visited thousands of times gives the same intrinsic reward as a completely new state as long as they are equally novel given the history of the current episode . A life-long ( or inter-episodic ) novelty module provides a long-term novelty signal to statefully control the amount of exploration across episodes . We do so by multiplicatively modulating the exploration bonus repisodict with a life-long curiosity factor , αt . Note that this modulation will vanish over time , reducing our method to using the non-modulated reward . Specifically , we combine αt with r episodic t as follows ( see also Fig . 1 ( right ) ) : rit = r episodic t ·min { max { αt , 1 } , L } ( 1 ) where L is a chosen maximum reward scaling ( we fix L = 5 for all our experiments ) . Mixing rewards this way , we leverage the long-term novelty detection that αt offers , while rit continues to encourage our agent to explore all the controllable states . 
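As a small illustration (ours, not the authors' code) of equation 1, assuming the episodic bonus and the life-long factor have already been computed, with L = 5 as in the text:

```python
def intrinsic_reward(r_episodic, alpha, L=5.0):
    # Eq. (1): the episodic bonus is modulated by the life-long factor alpha,
    # clipped to the range [1, L].
    return r_episodic * min(max(alpha, 1.0), L)
```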
Embedding network : f : O → Rp maps the current observation to a p-dimensional vector corresponding to its controllable state . Consider an environment that has a lot of variability independent of the agent ’ s actions , such as navigating a busy city with many pedestrians and vehicles . An agent could visit a large number of different states ( collecting large cumulative intrinsic rewards ) without taking any actions . This would not lead to performing any meaningful form of exploration . To avoid such meaningless exploration , given two consecutive observations , we train a Siamese network ( Bromley et al. , 1994 ; Koch et al. , 2015 ) f to predict the action taken by the agent to go from one observation to the next ( Pathak et al. , 2017 ) . Intuitively , all the variability in the environment that is not affected by the action taken by the agent would not be useful to make this prediction . More formally , given a triplet { xt , at , xt+1 } composed of two consecutive observations , xt and xt+1 , and the action taken by the agent at , we parameterise the conditional likelihood as p ( a|xt , xt+1 ) = h ( f ( xt ) , f ( xt+1 ) ) , where h is a one hidden layer MLP followed by a softmax . The parameters of both h and f are trained via maximum likelihood . This architecture can be thought of as a Siamese network with a one-layer classifier on top , see Fig . 1 ( left ) for an illustration . For more details about the architecture , see App . H.1 , and hyperparameters , see App . F. Episodic memory and intrinsic reward : The episodic memory M is a dynamically-sized slot-based memory that stores the controllable states in an online fashion ( Pritzel et al. , 2017 ) . At time t , the memory contains the controllable states of all the observations visited in the current episode , { f ( x0 ) , f ( x1 ) , . . . , f ( xt−1 ) } . Inspired by theoretically-justified exploration methods turning state-action counts into a bonus reward ( Strehl & Littman , 2008 ) , we define our intrinsic reward as repisodict = 1 / √ n ( f ( xt ) ) ≈ 1 / √ ( ∑fi∈Nk K ( f ( xt ) , fi ) + c ) ( 2 ) where n ( f ( xt ) ) is the number of visits to the abstract state f ( xt ) . We approximate these counts n ( f ( xt ) ) as the sum of the similarities given by a kernel function K : Rp × Rp → R , over the content of M . In practice , pseudo-counts are computed using the k-nearest neighbors of f ( xt ) in the memory M , denoted by Nk = { fi } ki=1 . The constant c guarantees a minimum amount of “ pseudo-counts ” ( fixed to 0.001 in all our experiments ) . Note that when K is a Dirac delta function , the approximation becomes exact but consequently provides no generalisation of exploration required for very large state spaces . Following Blundell et al . ( 2016 ) ; Pritzel et al . ( 2017 ) , we use the inverse kernel for K , K ( x , y ) = ε / ( d2 ( x , y ) / d2m + ε ) ( 3 ) where ε is a small constant ( fixed to 10−3 in all our experiments ) , d is the Euclidean distance and d2m is a running average of the squared Euclidean distance of the k-th nearest neighbors . This running average is used to make the kernel more robust to the task being solved , as different tasks may have different typical distances between learnt embeddings . A detailed computation of the episodic reward can be found in Alg . 1 in App . A.1 . Integrating life-long curiosity : In principle , any long-term novelty estimator could be used as a basis for the modulator αt .
We found Random Network Distillation ( RND ; Burda et al. , 2018b ) worked well , is simple to implement and easy to parallelize . The RND modulator αt is defined by introducing a random , untrained convolutional network g : O → Rk , and training a predictor network ĝ : O → Rk that attempts to predict the outputs of g on all the observations that are seen during training by minimizing err ( xt ) = ||ĝ ( xt ; θ ) − g ( xt ) ||2 with respect to the parameters of ĝ , θ . We then define the modulator αt as a normalized mean squared error , as done in Burda et al . ( 2018b ) : αt = 1 + ( err ( xt ) − µe ) / σe , where σe and µe are the running standard deviation and mean of err ( xt ) . For more details about the architecture , see App . H.2 , and for hyperparameters , see App . F .
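To make the two novelty signals concrete, the following sketches are our own simplified illustrations, not the authors' code. The first implements the episodic bonus of equations 2-3 with a k-nearest-neighbour lookup and the inverse kernel; the memory decay rate and k are placeholder choices, while c = 0.001 and ε = 10−3 follow the text.

```python
import numpy as np

class EpisodicBonus:
    def __init__(self, k=10, c=0.001, eps=1e-3, decay=0.99):
        self.k, self.c, self.eps, self.decay = k, c, eps, decay
        self.memory = []   # controllable states f(x) visited in the current episode
        self.dm_sq = 1.0   # running average of squared distances to the k nearest neighbours

    def reset(self):
        # The episodic memory starts empty at the beginning of every episode.
        self.memory.clear()

    def __call__(self, f_x):
        bonus = 1.0  # arbitrary value for the very first state of an episode
        if self.memory:
            d_sq = np.sum((np.asarray(self.memory) - f_x) ** 2, axis=1)
            nn_sq = np.sort(d_sq)[: self.k]
            self.dm_sq = self.decay * self.dm_sq + (1 - self.decay) * nn_sq.mean()
            kernel = self.eps / (nn_sq / max(self.dm_sq, 1e-8) + self.eps)  # Eq. (3)
            bonus = 1.0 / np.sqrt(kernel.sum() + self.c)                    # Eq. (2)
        self.memory.append(np.asarray(f_x))
        return bonus
```

The second sketch is the RND life-long modulator: a fixed random network g, a trained predictor ĝ, and αt = 1 + (err − µe)/σe with simple running estimates of the error statistics. The network sizes, the use of MLPs instead of convolutions, and the update rule for the running statistics are our simplifications.

```python
import torch
import torch.nn as nn

class RNDModulator:
    def __init__(self, obs_dim, out_dim=64, lr=1e-4):
        def mlp():
            return nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, out_dim))
        self.g = mlp()                      # random target network, never trained
        for p in self.g.parameters():
            p.requires_grad_(False)
        self.g_hat = mlp()                  # predictor network, trained on every observation
        self.opt = torch.optim.Adam(self.g_hat.parameters(), lr=lr)
        self.mu, self.var, self.count = 0.0, 1.0, 0

    def __call__(self, obs):
        err = ((self.g_hat(obs) - self.g(obs)) ** 2).sum()
        self.opt.zero_grad()
        err.backward()
        self.opt.step()
        e = float(err.detach())
        # Simple running estimates of the error mean and variance (our choice).
        self.count += 1
        delta = e - self.mu
        self.mu += delta / self.count
        self.var += (delta * (e - self.mu) - self.var) / self.count
        return 1.0 + (e - self.mu) / (self.var ** 0.5 + 1e-8)
```

Combined with the small function given earlier, a step's intrinsic reward would then be computed as intrinsic_reward(episodic_bonus(f_x), rnd(obs)).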
The paper proposes a novel intrinsic reward/curiosity metric that combines both episodic and “life-long” novelty. These are essentially two competing pressures: one pushes agents to explore as many novel states as possible within a single rollout, and the other to explore as many states as possible, as evenly as possible, over the course of training. The primary contribution here is the episodic novelty measure, which relies on a state embedding that takes into account stochasticity in the environment. The paper covers this episodic curiosity measure and how it’s integrated with the life-long curiosity metric. It then demonstrates the impact of these metrics and their variations compared to baselines on particular games and on all 57 Arcade Learning Environment games.
SP:ea8267af45b09cc35349456d85eb39c58447e319
Never Give Up: Learning Directed Exploration Strategies
1 INTRODUCTION . The problem of exploration remains one of the major challenges in deep reinforcement learning . In general , methods that guarantee finding an optimal policy require the number of visits to each state–action pair to approach infinity . Strategies that become greedy after a finite number of steps may never learn to act optimally ; they may converge prematurely to suboptimal policies , and never gather the data they need to learn . Ensuring that all state-action pairs are encountered infinitely often is the general problem of maintaining exploration ( François-Lavet et al. , 2018 ; Sutton & Barto , 2018 ) . The simplest approach for tackling this problem is to consider stochastic policies with a non-zero probability of selecting all actions in each state , e.g . -greedy or Boltzmann exploration . While these techniques will eventually learn the optimal policy in the tabular setting , they are very inefficient and the steps they require grow exponentially with the size of the state space . Despite these shortcomings , they can perform remarkably well in dense reward scenarios ( Mnih et al. , 2015 ) . In sparse reward settings , however , they can completely fail to learn , as temporally-extended exploration ( also called deep exploration ) is crucial to even find the very few rewarding states ( Osband et al. , 2016 ) . ∗Equal contribution . Recent approaches have proposed to provide intrinsic rewards to agents to drive exploration , with a focus on demonstrating performance in non-tabular settings . These intrinsic rewards are proportional to some notion of saliency quantifying how different the current state is from those already visited ( Bellemare et al. , 2016 ; Haber et al. , 2018 ; Houthooft et al. , 2016 ; Oh et al. , 2015 ; Ostrovski et al. , 2017 ; Pathak et al. , 2017 ; Stadie et al. , 2015 ) . As the agent explores the environment and becomes familiar with it , the exploration bonus disappears and learning is only driven by extrinsic rewards . This is a sensible idea as the goal is to maximise the expected sum of extrinsic rewards . While very good results have been achieved on some very hard exploration tasks , these algorithms face a fundamental limitation : after the novelty of a state has vanished , the agent is not encouraged to visit it again , regardless of the downstream learning opportunities it might allow ( Bellemare et al. , 2016 ; Ecoffet et al. , 2019 ; Stanton & Clune , 2018 ) . Other methods estimate predictive forward models ( Haber et al. , 2018 ; Houthooft et al. , 2016 ; Oh et al. , 2015 ; Pathak et al. , 2017 ; Stadie et al. , 2015 ) and use the prediction error as the intrinsic motivation . Explicitly building models like this , particularly from observations , is expensive , error prone , and can be difficult to generalize to arbitrary environments . In the absence of the novelty signal , these algorithms reduce to undirected exploration schemes , maintaining exploration in a non-scalable way . To overcome this problem , a careful calibration between the speed of the learning algorithm and that of the vanishing rewards is required ( Ecoffet et al. , 2019 ; Ostrovski et al. , 2017 ) . 
The main idea of our proposed approach is to jointly learn separate exploration and exploitation policies derived from the same network , in such a way that the exploitative policy can concentrate on maximising the extrinsic reward ( solving the task at hand ) while the exploratory ones can maintain exploration without eventually reducing to an undirected policy . We propose to jointly learn a family of policies , parametrised using the UVFA framework ( Schaul et al. , 2015a ) , with various degrees of exploratory behaviour . The learning of the exploratory policies can be thought of as a set of auxiliary tasks that can help build a shared architecture that continues to develop even in the absence of extrinsic rewards ( Jaderberg et al. , 2016 ) . We use reinforcement learning to approximate the optimal value function corresponding to several different weightings of intrinsic rewards . We propose an intrinsic reward that combines per-episode and life-long novelty to explicitly encourage the agent to repeatedly visit all controllable states in the environment over an episode . Episodic novelty encourages an agent to periodically revisit familiar ( but potentially not fully explored ) states over several episodes , but not within the same episode . Life-long novelty gradually down-modulates states that become progressively more familiar across many episodes . Our episodic novelty uses an episodic memory filled with all previously visited states , encoded using the self-supervised objective of Pathak et al . ( 2017 ) to avoid uncontrollable parts of the state space . Episodic novelty is then defined as similarity of the current state to previously stored states . This allows the episodic novelty to rapidly adapt within an episode : every observation made by the agent potentially changes the per-episode novelty significantly . Our life-long novelty multiplicatively modulates the episodic similarity signal and is driven by a Random Network Distillation error ( Burda et al. , 2018b ) . In contrast to the episodic novelty , the life-long novelty changes slowly , relying upon gradient descent optimisation ( as opposed to an episodic memory write for episodic novelty ) . Thus , this combined notion of novelty is able to generalize in complex tasks with large , high dimensional state spaces in which a given state is never observed twice , and maintain consistent exploration both within an episode and across episodes . This paper makes the following contributions : ( i ) defining an exploration bonus combining life-long and episodic novelty to learn exploratory strategies that can maintain exploration throughout the agent ’ s training process ( to never give up ) , ( ii ) to learn a family of policies that separate exploration and exploitation using a conditional architecture with shared weights , ( iii ) experimental evidence that the proposed method is scalable and performs on par or better than state-of-the-art methods on hard exploration tasks . Our work differs from Savinov et al . ( 2018 ) in that it is not specialised to navigation tasks , our method incorporates a long-term intrinsic reward and is able to separate exploration and exploitation policies . Unlike Stanton & Clune ( 2018 ) , our work relies on no privileged information and combines both episodic and non-episodic novelty , obtaining superior results . Our work differs from Beyer et al . 
( 2019 ) in that we learn multiple policies by sharing weights , rather than just a common replay buffer , and our method does not require exact counts and so can scale to more realistic domains such as Atari . The paper is organized as follows . In Section 2 we describe the proposed intrinsic reward . In Section 3 , we describe the proposed agent and general framework . In Section 4 we present experimental evaluation . 2 THE NEVER-GIVE-UP INTRINSIC REWARD . We follow the literature on curiosity-driven exploration , where the extrinsic reward is augmented with an intrinsic reward ( or exploration bonus ) . The augmented reward at time t is then defined as rt = r e t + βr i t , where r e t and r i t are respectively the extrinsic and intrinsic rewards , and β is a positive scalar weighting the relevance of the latter . Deep RL agents are typically trained on the augmented reward rt , while performance is measured on extrinsic reward ret only . This section describes the proposed intrinsic reward rit . Our intrinsic reward rit satisfies three properties : ( i ) it rapidly discourages revisiting the same state within the same episode , ( ii ) it slowly discourages visits to states visited many times across episodes , ( iii ) the notion of state ignores aspects of an environment that are not influenced by an agent ’ s actions . We begin by providing a general overview of the computation of the proposed intrinsic reward . Then we provide the details of each one of the components . The reward is composed of two blocks : an episodic novelty module and an ( optional ) life-long novelty module , represented in red and green respectively in Fig . 1 ( right ) . The episodic novelty module computes our episodic intrinsic reward and is composed of an episodic memory , M , and an embedding function f , mapping the current observation to a learned representation that we refer to as controllable state . At the beginning of each episode , the episodic memory starts completely empty . At every step , the agent computes an episodic intrinsic reward , repisodict , and appends the controllable state corresponding to the current observation to the memory M . To determine the bonus , the current observation is compared to the content of the episodic memory . Larger differences produce larger episodic intrinsic rewards . The episodic intrinsic reward repisodict promotes the agent to visit as many different states as possible within a single episode . This means that the notion of novelty ignores inter-episode interactions : a state that has been visited thousands of times gives the same intrinsic reward as a completely new state as long as they are equally novel given the history of the current episode . A life-long ( or inter-episodic ) novelty module provides a long-term novelty signal to statefully control the amount of exploration across episodes . We do so by multiplicatively modulating the exploration bonus repisodict with a life-long curiosity factor , αt . Note that this modulation will vanish over time , reducing our method to using the non-modulated reward . Specifically , we combine αt with r episodic t as follows ( see also Fig . 1 ( right ) ) : rit = r episodic t ·min { max { αt , 1 } , L } ( 1 ) where L is a chosen maximum reward scaling ( we fix L = 5 for all our experiments ) . Mixing rewards this way , we leverage the long-term novelty detection that αt offers , while rit continues to encourage our agent to explore all the controllable states . 
Embedding network : f : O → Rp maps the current observation to a p-dimensional vector corresponding to its controllable state . Consider an environment that has a lot of variability independent of the agent ’ s actions , such as navigating a busy city with many pedestrians and vehicles . An agent could visit a large number of different states ( collecting large cumulative intrinsic rewards ) without taking any actions . This would not lead to performing any meaningful form of exploration . To avoid such meaningless exploration , given two consecutive observations , we train a Siamese network ( Bromley et al. , 1994 ; Koch et al. , 2015 ) f to predict the action taken by the agent to go from one observation to the next ( Pathak et al. , 2017 ) . Intuitively , all the variability in the environment that is not affected by the action taken by the agent would not be useful to make this prediction . More formally , given a triplet { xt , at , xt+1 } composed of two consecutive observations , xt and xt+1 , and the action taken by the agent at , we parameterise the conditional likelihood as p ( a|xt , xt+1 ) = h ( f ( xt ) , f ( xt+1 ) ) , where h is a one hidden layer MLP followed by a softmax . The parameters of both h and f are trained via maximum likelihood . This architecture can be thought of as a Siamese network with a one-layer classifier on top , see Fig . 1 ( left ) for an illustration . For more details about the architecture , see App . H.1 , and hyperparameters , see App . F. Episodic memory and intrinsic reward : The episodic memory M is a dynamically-sized slotbased memory that stores the controllable states in an online fashion ( Pritzel et al. , 2017 ) . At time t , the memory contains the controllable states of all the observations visited in the current episode , { f ( x0 ) , f ( x1 ) , . . . , f ( xt−1 ) } . Inspired by theoretically-justified exploration methods turning stateaction counts into a bonus reward ( Strehl & Littman , 2008 ) , we define our intrinsic reward as repisodict = 1√ n ( f ( xt ) ) ≈ 1√∑ fi∈Nk K ( f ( xt ) , fi ) + c ( 2 ) where n ( f ( xt ) ) is the counts for the visits to the abstract state f ( xt ) . We approximate these counts n ( f ( xt ) ) as the sum of the similarities given by a kernel functionK : Rp×Rp → R , over the content of M . In practice , pseudo-counts are computed using the k-nearest neighbors of f ( xt ) in the memory M , denoted by Nk = { fi } ki=1 . The constant c guarantees a minimum amount of “ pseudo-counts ” ( fixed to 0.001 in all our experiments ) . Note that when K is a Dirac delta function , the approximation becomes exact but consequently provides no generalisation of exploration required for very large state spaces . Following Blundell et al . ( 2016 ) ; Pritzel et al . ( 2017 ) , we use the inverse kernel for K , K ( x , y ) = d2 ( x , y ) d2m + ( 3 ) where is a small constant ( fixed to 10−3 in all our experiments ) , d is the Euclidean distance and d2m is a running average of the squared Euclidean distance of the k-th nearest neighbors . This running average is used to make the kernel more robust to the task being solved , as different tasks may have different typical distances between learnt embeddings . A detailed computation of the episodic reward can be found in Alg . 1 in App . A.1 . Integrating life-long curiosity : In principle , any long-term novelty estimator could be used as a basis for the modulator αt . We found Random Network Distillation ( Burda et al. 
, 2018b , RND ) worked well , and is simple to implement and easy to parallelize . The RND modulator α_t is defined by introducing a random , untrained convolutional network g : O → R^k , and training a predictor network ĝ : O → R^k that attempts to predict the outputs of g on all the observations that are seen during training by minimizing err ( x_t ) = || ĝ ( x_t ; θ ) − g ( x_t ) ||^2 with respect to the parameters θ of ĝ . We then define the modulator α_t as a normalized mean squared error , as done in Burda et al . ( 2018b ) : α_t = 1 + ( err ( x_t ) − µ_e ) / σ_e , where σ_e and µ_e are the running standard deviation and mean of err ( x_t ) . For more details about the architecture , see App . H.2 , and hyperparameters , see App . F .
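A minimal sketch of the episodic pseudo-count bonus (Equations 2-3) and the RND modulator α_t described above; the helper names, the default constants other than c and ε, and the running-statistics bookkeeping are illustrative assumptions, not the authors' implementation:

import numpy as np

def episodic_reward(f_xt, memory, k=10, c=0.001, eps=1e-3, dm2=1.0):
    """Eq. (2)-(3): pseudo-count bonus from the k nearest controllable states in memory.

    f_xt:   embedding f(x_t) of the current observation, shape (p,)
    memory: controllable states stored so far in this episode, shape (n, p)
    dm2:    running average of squared distances to k-th neighbors (kept outside for brevity)
    """
    memory = np.asarray(memory)
    if len(memory) == 0:
        return 1.0
    d2 = np.sum((memory - f_xt) ** 2, axis=1)       # squared Euclidean distances
    d2_knn = np.sort(d2)[:k]                        # k nearest neighbors N_k
    kernel = eps / (d2_knn / dm2 + eps)             # inverse kernel K, Eq. (3)
    return 1.0 / (np.sqrt(kernel.sum()) + c)        # Eq. (2)

def rnd_modulator(err_xt, mu_e, sigma_e):
    """Life-long modulator alpha_t = 1 + (err(x_t) - mu_e) / sigma_e."""
    return 1.0 + (err_xt - mu_e) / max(sigma_e, 1e-8)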
The work is motivated by the goal of achieving comprehensive exploration for agents in deep RL. To achieve this, the authors propose a count-based NGU agent, combining intrinsic and extrinsic rewards into an augmented reward. A life-long (long-term) novelty module is used to control the amount of exploration across episodes, with a life-long curiosity factor as its output. In the intrinsic/episodic novelty module, an embedding network and a KNN over episodic memory are applied to compute the current episodic reward. In the experiments, a universal value function approximator (UVFA) framework is used to simultaneously approximate the optimal value function with a set of rewards. The proposed method is tested on several hard exploration games. Other recent count-based models are compared in the paper.
SP:ea8267af45b09cc35349456d85eb39c58447e319
Hidden incentives for self-induced distributional shift
1 INTRODUCTION . Consider a household robot , one of whose duties is to predict when its owner will ask it for coffee . We would like the robot to notice its owner ’ s preference for having coffee in the morning , but we would not want the robot to prevent its owner from sleeping late just because the robot is unsure if the owner will still want coffee if they wake up in the afternoon . While doing so would result in a better prediction , such a strategy is cheating - by changing the task rather than solving the task as intended . More specifically , waking the owner is an example of what we call self-induced distributional shift ( SIDS ) , as it changes the distribution of inputs to the robot ’ s coffee prediction algorithm . SIDS is not necessarily undesirable : consider an algorithm meant to alert drivers of imminent collisions . If it works well , such a system will help drivers avoid crashing , thus making self-refuting predictions which result in SIDS . What separates this example from the coffee robot that disturbs its owner ’ s sleep ? The collision-alert system alters its data distribution in a way that is aligned with the goal of fewer collisions , whereas the coffee robot ’ s strategy results in changes that are misaligned with the goal of good coffee-timing ( Leike et al. , 2018 ) . This makes it an example of a specification problem ( Leike et al. , 2017 ; Ortega & Maini , 2018 ) : we did not intend the robot to ensure its predictions were good using such a strategy , yet a naive specification ( e.g . maximizing likelihood ) incentivized that strategy . Ideally , we ’ d like to specify which kinds of SIDS are acceptable , i.e . the means by which a learner is intended or allowed to influence the world in order to achieve its ends ( i.e . increase its performance ) , but doing so in full generality can be difficult . An alternative , more tractable problem which we address in this work is to accept the possibility of SIDS , but to carefully manage incentives for SIDS . Informally , a learner has an incentive to behave in a certain way when doing so can increase its performance ( e.g . higher accuracy , or increased reward ) . When meta-learning optimizes over a longer time horizon than the original “ inner loop ” learner , or uses a different algorithm , this can reveal new incentives for SIDS that were not apparent in the original learner ’ s behavior . We call these hidden incentives for distributional shift ( HIDS ) , and note that keeping HIDS hidden can be important for achieving aligned behavior . Notably , even in the absence of an explicit meta-learning algorithm , machine learning practitioners employ “ manual meta-learning ” , also called “ grad student descent ” ( Gencoglu et al. , 2019 ) in the iterative process of algorithm design , model selection , hyperparameter tuning , etc . Considered in this broader sense , meta-learning seems indispensable , making HIDS relevant for all machine learning practitioners . A real-world setting where incentives for SIDS could be problematic is content recommendation : algorithmically selecting which media or products to display to the users of a service . For example ( see Figure 1 ) , a profit-driven algorithm might engage in upselling : persuading users to purchase or click on items they originally had no interest in . Recent media reports have described ‘ engagement ’ -driven ( click or view-time ) recommendation services such as YouTube as contributing to viewer radicalization ( Roose , 2019 ; Friedersorf , 2018 ) .
A recent study supports these claims , finding that many YouTube users “ systematically migrate from commenting exclusively on milder content to commenting on more extreme content ” ( Ribeiro et al. , 2019 ) .1 See Appendix 1 for a review of real-world issues related to content recommendation . Our goal in this work is to show both ( 1 ) that meta-learning can reveal HIDS , and ( 2 ) that this means applying meta-learning to a learning scenario not only changes the way in which solutions are searched for , but also which solutions are ultimately found . Our contributions are as follows : 1 . We identify and define the phenomena of SIDS ( self-induced distributional shift ) and HIDS ( hidden incentives for distributional shift ) . 2 . We create two simple environments for identifying and studying HIDS : a “ unit test ” based on the Prisoner ’ s Dilemma , and a content recommendation environment which disentangles two types of SIDS . 3 . We demonstrate experimentally that meta-learning reveals HIDS in these environments , yielding agents that achieve higher performance via SIDS , but may follow sub-optimal policies . 4 . We propose and test a mitigation strategy based on swapping learners between environments in order to reduce incentives for SIDS . 2 BACKGROUND . 2.1 DISTRIBUTIONAL SHIFT AND CONTENT RECOMMENDATION . In general , distributional shift refers to a change of the data distribution over time . In supervised learning with data x and labels y , this can be more specifically described as dataset shift : change in the joint distribution of P ( x , y ) between the training and test sets ( Moreno-Torres et al. , 2012 ; Quionero-Candela et al. , 2009 ) . As identified by Moreno-Torres et al . ( 2012 ) , two common kinds of distributional shift are : 1 . Covariate shift : changing P ( x ) . In the context of content recommendation , this corresponds to changing the user base of the recommendation system . For instance , a media outlet which publishes inflammatory content may appeal to users with extreme views while alienating more moderate users . This self-selection effect ( Kayhan , 2015 ) may appear to a recommendation system as an increase in performance , leading to a feedback effect , as previously noted by Shah et al . ( 2018 ) . This type of feedback effect has been identified as contributing to filter bubbles and radicalization ( Pariser , 2011 ; Kayhan , 2015 ) . We observe this type of change in our experiments , as shown in Figure 1 . 1 The authors argue that commenting on a video is a good proxy for supporting its viewpoint , since only 5 out of 900 comments they checked opposed the viewpoint of the video commented on . 2 . Concept shift : changing P ( y|x ) . In the context of content recommendation , this corresponds to changing a given user ’ s interest in different kinds of content . For example , exposure to a fake news story has been shown to increase the perceived accuracy of ( and thus presumably the interest in ) the story , an example of the illusory truth effect ( Pennycook et al. , 2019 ) . 2.2 META-LEARNING AND POPULATION BASED TRAINING . Meta-learning is the use of machine learning techniques to learn machine learning algorithms . This generally involves instantiating multiple learning scenarios which run in an inner loop ( IL ) , while an outer loop ( OL ) uses the outcomes of the inner loop ( s ) as data-points from which to learn which learning algorithms are most effective ( Metz et al. , 2019 ) .
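As a schematic of the inner-loop / outer-loop structure just described (purely illustrative; the function names, population size, and interval are assumptions):

def meta_train(make_learner, inner_step, outer_step, n_outer=100, interval=1000):
    """Generic single-task meta-learning skeleton: the outer loop (OL) observes
    the outcomes of `interval` inner-loop (IL) steps before each OL update."""
    learners = [make_learner() for _ in range(4)]    # e.g. a small population
    for _ in range(n_outer):
        for learner in learners:
            for _ in range(interval):                # IL: ordinary training steps
                inner_step(learner)
        learners = outer_step(learners)              # OL: e.g. a PBT exploit/explore step
    return learners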
The number of IL steps per OL step is called the interval of the OL . Many recent works have focused on multi-task meta-learning where the OL seeks to find learning rules that generalize to unseen tasks by training the IL on a distribution of tasks - this is often used as an approach to one- or few-shot learning , e.g . Finn et al . ( 2017 ) ; Ren et al . ( 2018 ) , or transfer learning , e.g . Andrychowicz et al . ( 2016 ) . Single-task meta-learning includes learning an optimizer for a single task , e.g . Gong et al . ( 2018 ) , adaptive methods for selecting models , e.g . Kalousis ( 2000 ) , or for setting hyperparameters , e.g . Snoek et al . ( 2012 ) . For simplicity in this initial study we focus on single-task meta-learning . Population-based training ( PBT ) ( Jaderberg et al. , 2017 ) is a meta-learning algorithm that trains multiple learners L_1 , ... , L_n in parallel , after each interval ( T steps of IL ) applying an evolutionary OL step which consists of : 1 . Evaluate the performance of each learner , 2 . Replace both parameters and hyperparameters of low-performing ( bottom 20 % ) learners with copies of those from randomly chosen high-performing ( top 20 % ) learners ( EXPLOIT ) , 3 . Randomly perturb the hyperparameters ( but not the parameters ) of all learners ( EXPLORE ) . Two distinctive features of PBT ( compared with other hyperparameter optimization methods , such as Bayesian optimization ( Snoek et al. , 2012 ) ) are notable for us because they give the OL more control over the learning process : 1 . PBT applies OL optimization to parameters , not just hyperparameters . This means the OL can directly select for parameters which lead to SIDS , instead of only being able to influence parameter values via hyperparameters , which may be much more limiting . 2 . PBT uses multiple OL steps within a single training run . This gives the OL more overall influence over the dynamics and outcome of the training run . 2.3 SPECIFICATION AND INCENTIVES . We define specification as the process of a ( typically human ) designer instantiating a learning algorithm in a real-world learning scenario ( see Appendix 2 for formal definitions ) . A specification problem occurs when the outcome of a learning scenario differs from the intentions of the designer . Specification is often viewed as concerned solely with the choice of performance metric , and indeed researchers often select learners solely on the basis of performance . However , our work emphasizes that the choice of learning algorithm is also an aspect of specification , as noted by Ortega & Maini ( 2018 ) . In particular , we consider this choice from the point of view of incentives , similarly to Everitt et al . ( 2019 ) . Their work focused on identifying which incentives exist , but we note that incentives may exist and yet not be pursued by a learner ; for example , in supervised learning , there is an incentive to overfit the test set in order to increase test performance , but algorithms are designed to not do that . We thus distinguish between the existence of an incentive in a learner ’ s operational context and its presence in a learner ’ s objective , or revealed specification ( Ortega & Maini , 2018 ) , which is what a learner is “ trying ” to accomplish . Given an incentive that is present in the operational context , we say it is hidden from a learner if it does not appear in the objective , and revealed if it does . 3 SELF-INDUCED DISTRIBUTION SHIFT ( SIDS ) AND HIDDEN INCENTIVES FOR DISTRIBUTIONAL SHIFT ( HIDS ) . 3.1 SIDS .
To formally define SIDS , we assume there exists some reference data distribution , which is the distribution of data that the learner would encounter “ by default ” . This is a standard assumption for classification problems ( Moreno-Torres et al. , 2012 ) ; in reinforcement learning , the reference distribution could be the initial distribution over states , or the distribution over states which results from following some reference policy . We say that SIDS occurs whenever the behavior ( e.g . actions or predictions , or mere existence ) of the learner leads it to encounter a distribution other than this reference distribution . This definition excludes distributional shift which would happen even if the learner were not present - e.g . for a crash prediction algorithm trained on data from the summer , snowy roads in the winter are an example of distributional shift , but not self-induced distributional shift ( SIDS ) . In order to highlight the phenomenon of SIDS , we distinguish between the ( often implicit ) assumptions of the machine learning algorithm ( e.g . the i.i.d . assumption ) and the model of the environments in which the algorithm is trained/deployed ( e.g . our synthetic content recommendation environment ) . This is formalized in Appendix 2 . This distinction allows us to explicitly model situations in which the assumptions of a learning algorithm are violated . For instance , in Sec . 4.2 we explicitly model a partially observable environment whose underlying state determines the data distribution of the examples that a standard supervised learning algorithm observes at each time-step .
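As a concrete reference for the PBT outer-loop step described in Section 2.2 above, here is a minimal illustrative sketch; the learner interface, the perturbation function, and the population handling are assumptions, not the authors' implementation:

import copy
import random

def pbt_outer_step(learners, evaluate, perturb_hyperparams, frac=0.2):
    """One evolutionary outer-loop (OL) step of population-based training.

    learners: objects with .params and .hyperparams attributes (assumed interface).
    evaluate: callable mapping a learner to a scalar performance score.
    """
    scores = [evaluate(l) for l in learners]
    order = sorted(range(len(learners)), key=lambda i: scores[i])
    n_cut = max(1, int(frac * len(learners)))
    bottom, top = order[:n_cut], order[-n_cut:]

    # EXPLOIT: losers copy both parameters and hyperparameters from random winners.
    for i in bottom:
        j = random.choice(top)
        learners[i].params = copy.deepcopy(learners[j].params)
        learners[i].hyperparams = copy.deepcopy(learners[j].hyperparams)

    # EXPLORE: randomly perturb the hyperparameters (but not the parameters) of all learners.
    for l in learners:
        l.hyperparams = perturb_hyperparams(l.hyperparams)
    return learners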
The authors study the phenomenon of self-induced distributional shift. They define the term along with the term hidden incentives for distributional shift. The latter describes factors that motivate the learner to change the distribution in order to achieve higher performance. The authors study both phenomena in two domains (one being a prisoner's dilemma and the other a recommender system) and show how meta-learning reveals the hidden incentives for distributional shift. They then propose an approach based on swapping learners between environments to reduce self-induced distributional shift.
SP:1ac8384ea71a1d51086464a466cd3167da4336c1
Hidden incentives for self-induced distributional shift
The main idea of the paper: when using meta-learning, there is an inherent incentive for the learner to win by making the task easier. The authors generalise this effect to a larger class of problems in which the learning framework induces a set of Hidden Incentives for Distributional Shift (HIDS), and introduce Context Swapping, a HIDS mitigation technique. In the experimental section, the authors propose a HIDS unit test, which they then employ to show that PBT (Population-Based Training), a popular meta-learning algorithm, exhibits HIDS and that context swapping helps mitigate it.
SP:1ac8384ea71a1d51086464a466cd3167da4336c1
Hierarchical Disentangle Network for Object Representation Learning
1 INTRODUCTION . Representation learning , as a fundamental and active topic in the machine learning and computer vision communities , has achieved significant progress in recent years on different tasks such as recognition ( Russakovsky et al. , 2015 ) , detection ( Ren et al. , 2015 ; Redmon et al. , 2016 ; Liu et al. , 2016b ) and generation ( Goodfellow et al. , 2014 ) , benefiting from the rapid development of representations learned by deep neural networks . Considering the strong capacity of deep representations , in this paper we mainly focus on the deep representation learning framework . Despite the great success deep representations have achieved , as mentioned above , two important problems are still unresolved or less considered , i.e . the interpretability and the disentanglement of the learned representations . In the past decades , various works have been developed to reveal the black box of deep learning ( Zeiler & Fergus , 2014 ; Dosovitskiy & Brox , 2016b ; Bau et al. , 2017 ; Simonyan et al. , 2013 ; Stock & Cissé , 2017 ; Zhang et al. , 2017 ) and move us closer to the goal of disentangling the variations within data ( Reed et al. , 2014 ; Mathieu et al. , 2016 ; Rifai et al. , 2012 ; Tran et al. , 2017 ; Gonzalez-Garcia et al. , 2018 ; Huang et al. , 2018 ; Chen et al. , 2016 ) . Even though they have brought us great insights , they still have some limitations . For instance , ( Chen et al. , 2016 ; Xie et al. , 2017 ; Zhao et al. , 2017 ) learn to disentangle variation factors within each category using generative models , instead of investigating the similarities and differences among categories , leading to poor discriminability . Therefore , the learned representations do not conform well to human perception . Though ( Gonzalez-Garcia et al. , 2018 ; Huang et al. , 2018 ) try to obtain domain-invariant and domain-specific knowledge , they can only handle two categories at a time , which is not efficient . In this paper , we attempt to learn disentangled representations in a more natural and efficient manner . Let us first discuss how humans understand an object . Generally speaking , an object can be regarded as the combination of many semantic attributes . Hundreds of thousands of objects in the world can be clustered and recognized by humans just because we can figure out the common and unique attributes of an object compared to others . Besides , a person who has never played billiards may only recognize a table in an image , while a sports fan may regard it as a billiard table . Both are right , since categories have a natural hierarchical structure . As shown in Fig . 1 ( a ) , given six leaf-level categories , they can be organized in a three-level hierarchy considering the common and different features they have . Each child category in the hierarchy is a special case of its parent category since it inherits all features from its parent category and has extra features that are not present in its parent category . From another perspective , each parent category is the abstraction of all its child categories considering it contains the attributes that are present in all its child categories . Then we come back to the task of disentangled representation learning . It aims to learn representations encoding useful information that can be applied in other tasks ( e.g . building classifiers and predictors ) ( Bengio et al. , 2013 ) .
Taking the hierarchical nature of categories into account , if we only learn the representations of an object in a flat manner for a specific category level , as previous works do , the result will be neither scalable nor comprehensive enough for the machine to handle various tasks in the real world . Our work aims to exploit the natural hierarchical characteristics among categories to divide the representation learning in a coarse-to-fine manner , such that each level only focuses on learning the specific representations . For instance , given a billiard table image in Fig . 1 ( b ) , it entangles the information of being a piece of furniture , a table and a billiard table . We first extract the features that only contain the information of furniture from the image . By tracing from the root to the leaf level , more and more information is extracted until we can recognize its categories at all hierarchical levels . By doing so , the disentangled representations are expected to find wide and promising applications . For example , one can transfer the semantics of a specific category level from one object to another while keeping information at other levels unchanged . Besides , it could help with the hierarchical image compression task by using different levels of the disentangled representations . To achieve the objective of hierarchical disentangling and simultaneously interpreting the results in a way humans can understand , we propose the hierarchical disentangle network ( HDN ) , which draws lessons from hierarchical classification and the recently proposed generative adversarial nets ( Goodfellow et al. , 2014 ) . Extensive experiments are conducted on four popular object datasets to validate the effectiveness of our method . 2 RELATED WORKS . Disentangling Deep Representations . The goal of disentangled representation learning is to discover factors of variation within data ( Bengio et al. , 2013 ) . Recent years have witnessed substantial interest in this research area ( Tenenbaum & Freeman , 1996 ) , including works based on deep learning ( Reed et al. , 2014 ; Mathieu et al. , 2016 ; Rifai et al. , 2012 ; Wang et al. , 2017 ; Tran et al. , 2017 ; Gonzalez-Garcia et al. , 2018 ; Huang et al. , 2018 ; Chen et al. , 2016 ) . ( Rifai et al. , 2012 ) is probably the earliest work to learn disentangled representations using deep networks , for the task of emotion recognition . ( Reed et al. , 2014 ) is based on a higher-order Boltzmann machine and regards each factor of variation of the manifold as a sub-manifold . ( Mathieu et al. , 2016 ) and ( Chen et al. , 2016 ) leverage generative adversarial nets ( GAN ) to learn factors of variation . Recently , cross-domain translation methods ( Gonzalez-Garcia et al. , 2018 ; Huang et al. , 2018 ) learn domain-invariant and domain-specific representations . These works ignore the natural and inherent hierarchical relationships among categories , with which we can conduct the disentangling in a coarse-to-fine manner such that each level only focuses on learning the specific representations . Network Interpretability . Network interpretability aims to learn how the network works by visualizing it from a perspective that humans can understand . These methods can be briefly divided into two groups according to whether the visualization is involved in the network during training , i.e . the off-line methods and the online methods .
The off-line methods attempt to visualize patterns in image space that activate each convolutional filter ( Zeiler & Fergus , 2014 ; Dosovitskiy & Brox , 2016b ; a ; Bau et al. , 2017 ; 2019 ) or to interpret the area in an image that is responsible for the network prediction ( Simonyan et al. , 2013 ; Fong & Vedaldi , 2017 ; Zintgraf et al. , 2017 ; Abbasi-Asl & Yu , 2017 ; Stock & Cissé , 2017 ; Palacio et al. , 2018 ; Geirhos et al. , 2019 ) . While such methods can explain what has already been learned by the model , they cannot improve the model ’ s interpretability in return . Instead , the online works propose to directly learn interpretable representations during training ( Li et al. , 2018 ; Zhang et al. , 2017 ) . However , these methods mainly focus on figuring out the running mechanism of networks while paying less attention to dissecting variations of the features among categories , which cannot make models truly understand their inputs . Hierarchy-regularized Learning . Semantic hierarchies have been explored in the object classification task for accelerating recognition ( Griffin & Perona , 2008 ; Marszalek & Schmid , 2008 ) , obtaining a sequence of predictions ( Deng et al. , 2012 ; Ordonez et al. , 2013 ) , making use of category relation graphs ( Deng et al. , 2014 ; Ding et al. , 2015 ) , and improving recognition performance as additional supervision ( Zhao et al. , 2011 ; Srivastava & Salakhutdinov , 2013 ; Hwang & Sigal , 2014 ; Yan et al. , 2015 ; Goo et al. , 2016 ; Ahmed et al. , 2016 ) . While these discriminative classification works have achieved their expected goals , they usually lack interpretability . To address such issues , ( Xie et al. , 2017 ; Zhao et al. , 2017 ) propose to use generative models to disentangle the factors from low-level representations to high-level ones that can construct a specific object . ( Singh et al. , 2019 ) uses an unsupervised generative framework to hierarchically disentangle the background , object shape and appearance from an image . However , they either deal with each category in isolation or ignore the discriminability of learned features , and thus cannot accurately disentangle the differences and similarities among categories . 3 HIERARCHICAL REPRESENTATION LEARNING . Supposing that a category hierarchy is given in the form shown in Fig . 1 ( a ) , we use l = 1 , ... , L to denote the level of the hierarchy ( L for the leaf level and 1 for the root level ) , K_l to denote the number of nodes at level l , n_l^k to denote the k-th node at level l , and C_l^k to denote the number of children of n_l^k . As illustrated in Fig . 1 ( b ) , given an original object image denoted as I^o , our goal is to extract the feature F_l at the l-th level . Generally speaking , an object O can be described as the combination of a group of visual attributes : O = ( A_1 + ... + A_i )_{level=1} + ( A_{i+1} + ... + A_j )_{level=2} + ... + ( A_{j+1} + ... + A_m )_{level=l} + ∆ ( O ) ( 1 ) where ∆ ( O ) represents currently undefined attributes of O . As we have discussed , humans classify O at a particular category level according to a subset of the whole attribute set in Eqn. ( 1 ) . Take the object in Fig . 1 ( b ) for example : it can be regarded as a piece of furniture since it contains the attribute subset { A_1 + ... + A_i } , and be classified as a table in terms of the attribute subset { A_1 + ... + A_i + A_{i+1} + ... + A_j } present in it . Therefore , the disentangled feature F_l for our objectives in Fig .
1 ( b ) is actually the reflection of the attribute subset formulated in Eqn. ( 1 ) . Moreover , because of the hierarchical correlations ( i.e . the inheritance relationship ) among categories at different levels , the subset { A_1 + ... + A_i + A_{i+1} + ... + A_j } obviously includes the subset { A_1 + ... + A_i } , naturally leading to the disentangled F_{l−1} being a proper subset of F_l . Taking these into consideration , we design the hierarchical disentangle network ( HDN ) based on the autoencoder architecture in Fig . 2 . The encoder E dissects the hierarchical representations given a semantic hierarchy . The decoder G plays the role of an interpreter to reflect the variations of semantics in the image space for different hierarchical levels , guided by the hierarchical discriminator D_adv and classifiers D_cls ( they share most of the network architecture except the output layers ) .
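To make the nested structure F_{l-1} ⊂ F_l concrete, here is a hypothetical sketch in which each level's feature is a growing prefix of a shared embedding; the dimensions, module names, and slicing scheme are illustrative assumptions, not the HDN architecture itself:

import torch
import torch.nn as nn

class HierarchicalEncoder(nn.Module):
    """Toy encoder whose level-l feature F_l is a prefix of the full embedding,
    so that F_1 ⊂ F_2 ⊂ ... ⊂ F_L by construction (illustrative only)."""

    def __init__(self, in_dim=512, level_dims=(64, 128, 256)):
        super().__init__()
        self.level_dims = level_dims                  # cumulative feature size per level
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, level_dims[-1]),
        )

    def forward(self, x):
        z = self.backbone(x)
        # F_l is the first level_dims[l-1] dimensions of z; F_{l-1} is a prefix of F_l.
        return [z[:, :d] for d in self.level_dims]

# Usage: per-level features could then feed level-specific classifiers.
enc = HierarchicalEncoder()
feats = enc(torch.randn(8, 512))
print([f.shape for f in feats])   # [(8, 64), (8, 128), (8, 256)]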
This paper studies the problem of learning disentangled representations in a hierarchical manner. It proposes a hierarchical disentangle network (HDN) which tackles the disentangling process in a coarse-to-fine manner. Specifically, common representations are captured at the root level and unique representations are learned at lower hierarchical levels. The HDN is trained in a generative adversarial network (GAN) manner, with an additional hierarchical classification loss enforcing the disentanglement. Experiments are conducted on CelebA (attributes), Fashion-MNIST (category), and CAD Cars (category & pose).
SP:efc663895e7ee0d78501c66be7c242d7f882d45d
Hierarchical Disentangle Network for Object Representation Learning
This paper proposes the hierarchical disentangle network (HDN), which leverages the hierarchical characteristics of object categories to learn disentangled representations at multiple levels. Their coarse-to-fine approach allows each level to focus on learning specific representations at its granularity. This is achieved through supervised learning at each level, where they train classifiers to distinguish each particular category from its ‘sibling’ categories, which are close to each other. Experiments are conducted on four datasets to validate the method.
SP:efc663895e7ee0d78501c66be7c242d7f882d45d
Learning from Label Proportions with Consistency Regularization
1 INTRODUCTION . In traditional supervised learning , a classifier is trained on a dataset where each instance is associated with a class label . However , label annotation can be expensive or difficult to obtain for some applications . Take embryo selection as an example ( Hernández-González et al. , 2018 ) . To increase the pregnancy rate , clinicians would transfer multiple embryos to a mother at the same time . However , clinicians are unable to know the outcome of a particular embryo due to limitations of current medical techniques . The only thing we know is the proportion of embryos that implant successfully . To increase the success rate of embryo implantation , clinicians aim to select high-quality embryos through the aggregated results . In this case , only label proportions about groups of instances are provided to train the classifier , a problem setting known as learning from label proportions ( LLP ) . In LLP , each group of instances is called a bag , which is associated with a proportion label over the different classes . A classifier is then trained on several bags and their associated proportion labels in order to predict the class of each unseen instance . Recently , LLP has attracted much attention among researchers because its problem setting occurs in many real-life scenarios . For example , census data and medical databases are provided in the form of label proportion data due to privacy issues ( Patrini et al. , 2014 ; Hernández-González et al. , 2018 ) . Other LLP applications include fraud detection ( Rueping , 2010 ) , object recognition ( Kuck & de Freitas , 2012 ) , video event detection ( Lai et al. , 2014 ) , and ice-water classification ( Li & Taylor , 2015 ) . The challenge in LLP is to train models without direct instance-level label supervision . To overcome this issue , prior work seeks to estimate either the individual labels ( Yu et al. , 2013 ; Dulac-Arnold et al. , 2019 ) or the mean of each class from the label proportions ( Quadrianto et al. , 2009 ; Patrini et al. , 2014 ) . However , the methodology behind developing these models does not portray LLP situations that occur in real life . First , these models can be improved by considering methods that can better leverage unlabeled data . Second , these models assume that bags of data are randomly generated , which is not the case for many applications . For example , population census data are collected by region , age , or occupation with varying group sizes . Third , training these models requires a validation set with labeled data . It would be more practical if the process of model selection relied only on the label proportions . This paper aims to resolve the previous problems . Our main contributions are listed as follows : • We first apply a semi-supervised learning technique , consistency regularization , to the multi-class LLP problem . Consistency regularization considers an auxiliary loss term to enforce network predictions to be consistent when the input is perturbed . By exploiting the unlabeled instances , our method captures the latent structure of data and obtains state-of-the-art ( SOTA ) performance on three benchmark datasets . • We develop a new bag generation algorithm – the K-means bag generation , where training data are grouped by attribute similarity . Using this setup can help train models that are more applicable to actual LLP scenarios . • We show that it is possible to select models with a validation set consisting of only bags and associated label proportions .
The experiments demonstrate a correlation between bag-level validation error and instance-level test error . This potentially reduces the need for a validation set with instance-level labels . 2 PRELIMINARY . 2.1 LEARNING FROM LABEL PROPORTIONS . We consider the multi-class classification problem in the LLP setting in this paper . Let x_i ∈ R^D be the feature vector of the i-th example and y_i ∈ { 1 , . . . , L } be the class label of the i-th example , where L is the number of different classes . We define e ( j ) to be the standard basis vector [ 0 , . . . , 1 , . . . , 0 ] with 1 at the j-th position and ∆_L = { p ∈ R_+^L : Σ_{i=1}^L p_i = 1 } to be the probability simplex . In the setting of LLP , each individual label y_i is hidden from the training data . On the other hand , the training data are aggregated by a bag generation procedure . We are given M bags B_1 , . . . , B_M , where each bag B_m contains a set X_m of instances and a proportion label p_m , defined by p_m = ( 1 / |X_m| ) Σ_{i : x_i ∈ X_m} e ( y_i ) , with ∪_{m=1}^M X_m = { x_1 , . . . , x_N } . We do not require each subset to be disjoint . Also , each bag may have a different size . The task of LLP is to learn an individual-level classifier f_θ : R^D → ∆_L to predict the correct label y = arg max_i f_θ ( x )_i for a new instance x . Figure 1 illustrates the setting of learning from label proportions in multi-class classification ( Dulac-Arnold et al. , 2019 ) . 2.2 PROPORTION LOSS . The feasibility of the binary LLP setting has been theoretically justified by Yu et al . ( 2014 ) . Specifically , Yu et al . ( 2014 ) propose the framework of Empirical Proportion Risk Minimization ( EPRM ) , proving that the LLP problem is PAC-learnable under the assumption that bags are i.i.d . sampled from an unknown probability distribution . The EPRM framework provides a generalization bound on the expected proportion error and guarantees to learn a probably approximately correct proportion predictor when the number of bags is large enough . Furthermore , the authors prove that the instance label error can be bounded by the bag proportion error . That is , a decent bag proportion predictor guarantees a decent instance label predictor . Based on the profound theoretical analysis , a vast number of LLP approaches learn an instance-level classifier by directly minimizing the proportion loss without acquiring the individual labels . To be more precise , given a bag B = ( X , p ) , an instance-level classifier f_θ and a divergence function d_prop : R^L × R^L → R , the proportion loss penalizes the difference between the real proportion label p and the estimated proportion label p̂ = ( 1 / |X| ) Σ_{x ∈ X} f_θ ( x ) , which is the average of the instance predictions within a bag . Thus , the proportion loss L_prop can be defined as follows : L_prop ( θ ) = d_prop ( p , p̂ ) . The commonly used divergence functions are the L1 and L2 functions in prior work ( Musicant et al. , 2007 ; Yu et al. , 2013 ) . Ardehaly & Culotta ( 2017 ) and Dulac-Arnold et al . ( 2019 ) , on the other hand , consider the cross-entropy function for the multi-class LLP problem . 2.3 CONSISTENCY REGULARIZATION . Since collecting labeled data is expensive and time-consuming , semi-supervised learning approaches aim to leverage a large amount of unlabeled data to mitigate the need for labeled data . There are many semi-supervised learning methods , such as pseudo-labeling ( Lee , 2013 ) , generative approaches ( Kingma et al. , 2014 ) , and consistency-based methods ( Laine & Aila , 2016 ; Miyato et al. , 2018 ; Tarvainen & Valpola , 2017 ) .
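A minimal sketch of the estimated proportion label p̂ and the cross-entropy proportion loss described above (illustrative; the tensor shapes and function names are assumptions):

import torch

def proportion_loss(bag_probs, true_proportion, eps=1e-8):
    """Cross-entropy between the true bag proportion p and the estimated
    proportion p_hat = mean of instance predictions within the bag.

    bag_probs:       (|X|, L) predicted class probabilities f_theta(x) for the bag
    true_proportion: (L,) ground-truth label proportion p
    """
    p_hat = bag_probs.mean(dim=0)                         # estimated proportion label
    return -(true_proportion * torch.log(p_hat + eps)).sum()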
Consistency-based approaches encourage the network to produce consistent output probabilities between unlabeled data and their perturbed examples. These methods rely on the smoothness assumption (Chapelle et al., 2009): if two data points $x_i$ and $x_j$ are close, then so should be the corresponding output distributions $y_i$ and $y_j$. In this way, consistency-based approaches can push the decision boundary to traverse the low-density region. More precisely, given a perturbed input $\hat{x}$ derived from the input $x$, consistency regularization penalizes the discrepancy between the model predictions $f_\theta(x)$ and $f_\theta(\hat{x})$ with a distance function $d_{\mathrm{cons}} : \mathbb{R}^L \times \mathbb{R}^L \to \mathbb{R}$. The consistency loss can be written as $L_{\mathrm{cons}}(\theta) = d_{\mathrm{cons}}(f_\theta(x), f_\theta(\hat{x}))$. Modern consistency-based methods (Laine & Aila, 2016; Tarvainen & Valpola, 2017; Miyato et al., 2018; Verma et al., 2019; Berthelot et al., 2019) differ in how perturbed examples are generated for the unlabeled data. Laine & Aila (2016) introduce the Π-Model approach, which uses additive Gaussian noise for the perturbed examples and chooses the $L_2$ error as the distance function. However, a drawback of the Π-Model is that the consistency target $f_\theta(\hat{x})$ obtained from the stochastic network is unstable, since the network changes rapidly during training. To address this problem, Temporal Ensembling (Laine & Aila, 2016) takes the exponential moving average of the network predictions as the consistency target. Mean Teacher (Tarvainen & Valpola, 2017), on the other hand, proposes averaging the model parameters instead of the network predictions. Overall, the Mean Teacher approach significantly improves the quality of the consistency targets and the empirical results on semi-supervised benchmarks. Instead of applying stochastic perturbations to the inputs, Virtual Adversarial Training or VAT (Miyato et al., 2018) computes the perturbed example $\hat{x} = x + r_{\mathrm{adv}}$, where $r_{\mathrm{adv}} = \arg\max_{r : \|r\|_2 \le \epsilon} D_{\mathrm{KL}}(f_\theta(x) \,\|\, f_\theta(x + r))$. (1) That is, the VAT approach attempts to generate the perturbation which most likely causes the model to misclassify the input, i.e., a perturbation in an adversarial direction. Finally, the VAT approach adopts the Kullback-Leibler (KL) divergence to compute the consistency loss. In comparison to stochastic perturbations, the VAT approach demonstrates greater effectiveness on semi-supervised learning problems. 3 LLP WITH CONSISTENCY REGULARIZATION . With regards to weak supervision, the LLP scenario is similar to the semi-supervised learning problem. In the semi-supervised learning setting, only a small portion of the training examples is labeled. In the LLP scenario, on the other hand, we are given the weak supervision of label proportions instead of strong labels on individual instances. Both settings are challenging since most training examples do not have individual labels. To address this challenge, semi-supervised approaches seek to exploit the unlabeled examples to further capture the latent structure of the data. Motivated by these semi-supervised approaches, we bring the idea of leveraging unlabeled data into the LLP problem. We make the same smoothness assumption and introduce a new concept incorporating consistency regularization with LLP. In particular, we consider the typical cross-entropy function between real label proportions and estimated label proportions.
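As an aside before spelling out those losses: the VAT perturbation in Equation 1 is usually not solved exactly but approximated with a single power-iteration step. The PyTorch sketch below follows that standard approximation (Miyato et al., 2018); the function names, the radius `eps`, and the finite-difference scale `xi` are our own placeholder choices, and `model` is assumed to return class logits.

```python
import torch
import torch.nn.functional as F

def _normalize(d):
    """Normalize a batch of perturbations to unit L2 norm per sample."""
    flat_norm = d.flatten(1).norm(dim=1).view(-1, *([1] * (d.dim() - 1)))
    return d / (flat_norm + 1e-12)

def vat_perturbation(model, x, xi=1e-6, eps=8.0, n_power=1):
    """Approximate r_adv = argmax_{||r||_2 <= eps} KL(f(x) || f(x + r)) by power iteration."""
    with torch.no_grad():
        p = F.softmax(model(x), dim=1)                 # f_theta(x), treated as a constant target
    d = _normalize(torch.randn_like(x))
    for _ in range(n_power):
        d.requires_grad_(True)
        log_q = F.log_softmax(model(x + xi * d), dim=1)
        adv_dist = F.kl_div(log_q, p, reduction="batchmean")   # KL(p || q)
        grad = torch.autograd.grad(adv_dist, d)[0]
        d = _normalize(grad.detach())
    return eps * d                                     # r_adv

def vat_consistency_loss(model, x, r_adv):
    """KL(f_theta(x) || f_theta(x + r_adv)), the per-batch consistency term."""
    with torch.no_grad():
        p = F.softmax(model(x), dim=1)
    log_q = F.log_softmax(model(x + r_adv), dim=1)
    return F.kl_div(log_q, p, reduction="batchmean")
```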
Given a bag $B = (X, p)$, we define the proportion loss $L_{\mathrm{prop}}$ as follows: $L_{\mathrm{prop}}(\theta) = -\sum_{i=1}^{L} p_i \log \frac{1}{|X|} \sum_{x \in X} f_\theta(x)_i$. Interestingly, the proportion loss $L_{\mathrm{prop}}$ boils down to the standard cross-entropy loss for fully-supervised learning when the bag size is one. To learn a decision boundary that better reflects the data manifold, we add an auxiliary consistency loss that leverages the unlabeled data. More formally, we compute the average consistency loss across all instances within the bag. Given a bag $B = (X, p)$, the consistency loss $L_{\mathrm{cons}}$ can be written as $L_{\mathrm{cons}}(\theta) = \frac{1}{|X|} \sum_{x \in X} d_{\mathrm{cons}}(f_\theta(x), f_\theta(\hat{x}))$, where $d_{\mathrm{cons}}$ is a distance function and $\hat{x}$ is a perturbed input of $x$. We can use any consistency-based approach to generate the perturbed examples and compute the consistency loss. Finally, we mix the two loss functions $L_{\mathrm{prop}}$ and $L_{\mathrm{cons}}$ with a hyperparameter $\alpha > 0$, yielding the combined loss for LLP: $L(\theta) = L_{\mathrm{prop}}(\theta) + \alpha L_{\mathrm{cons}}(\theta)$, where $\alpha$ controls the balance between the bag-level estimation of proportion labels and instance-level consistency regularization. To understand the intuition behind combining consistency regularization with LLP, we follow the Π-Model approach (Laine & Aila, 2016), adopting stochastic Gaussian noise as the perturbation and $L_2$ as the distance function $d_{\mathrm{cons}}$, in a toy example. Figure 2 illustrates how our method is able to produce a decision boundary that passes through the low-density region and captures the data manifold. On the other hand, the vanilla approach, which simply optimizes the proportion loss, easily gets stuck at a poor solution due to the lack of label information. This toy example shows the advantage of applying consistency regularization to LLP. According to Miyato et al. (2018), VAT is more effective and stable than the Π-Model due to the way it generates the perturbed examples. For each data example, the Π-Model approach stochastically perturbs the input and trains the model to assign the same class distribution to all neighbors. In contrast, the VAT approach focuses on neighbors that are sensitive to the model. That is, VAT aims to generate a perturbed input whose prediction is the most different from the model prediction of its original input. The learning of the VAT approach tends to be more effective in improving model generalization. Therefore, we adopt the VAT approach to compute the consistency loss for each instance in the bag. Additionally, to prevent the model from getting stuck at a local optimum in the early stage, we use the exponential ramp-up scheduling function (Laine & Aila, 2016) to increase the consistency weight gradually to its maximum value $\alpha$. The full algorithm of LLP with VAT (LLP-VAT) is described in Algorithm 1. Algorithm 1 LLP-VAT algorithm. Require: $D = \{(X_m, p_m)\}_{m=1}^{M}$: collection of bags. Require: $f_\theta(x)$: instance-level classifier with trainable parameters $\theta$. Require: $g(x; \theta) = x + r_{\mathrm{adv}}$: VAT augmentation function according to Equation 1. Require: $w(t)$: ramp-up function for increasing the weight of consistency regularization. Require: $T$: total number of iterations. For $t = 1, \dots, T$ and for each bag $(X, p) \in D$: compute the estimated proportion label $\hat{p} \leftarrow \frac{1}{|X|} \sum_{x \in X} f_\theta(x)$; the proportion loss $L_{\mathrm{prop}} = -\sum_{i=1}^{L} p_i \log \hat{p}_i$; the consistency loss $L_{\mathrm{cons}} = \frac{1}{|X|} \sum_{x \in X} D_{\mathrm{KL}}(f_\theta(x) \,\|\, f_\theta(g(x; \theta)))$; and the total loss $L = L_{\mathrm{prop}} + w(t) \cdot L_{\mathrm{cons}}$; then update $\theta$ by the gradient $\nabla_\theta L$ (e.g., SGD, Adam). Return $\theta$.
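Putting the pieces together, here is a hedged PyTorch sketch of one training pass of Algorithm 1: a cross-entropy proportion loss on the bag-averaged predictions plus a VAT consistency loss with an exponential ramp-up weight. It assumes `vat_perturbation` and `vat_consistency_loss` from the sketch above are in scope; the ramp-up schedule and hyper-parameter values are common defaults, not necessarily the paper's.

```python
import math
import torch
import torch.nn.functional as F

def ramp_up_weight(step, total_steps, alpha):
    """Exponential ramp-up of the consistency weight (a common choice, as in Laine & Aila, 2016)."""
    t = min(step / max(total_steps, 1), 1.0)
    return alpha * math.exp(-5.0 * (1.0 - t) ** 2)

def llp_vat_pass(model, optimizer, bags, step, total_steps, alpha=0.05):
    """One pass over the bags, following Algorithm 1.

    `bags` is an iterable of (x, p) with x of shape (|X|, D) and p a length-L
    proportion label. `vat_perturbation` / `vat_consistency_loss` come from the
    earlier sketch; hyper-parameters here are placeholders."""
    w = ramp_up_weight(step, total_steps, alpha)
    for x, p in bags:
        probs = F.softmax(model(x), dim=1)
        p_hat = probs.mean(dim=0)                          # estimated proportion label
        prop_loss = -(p * torch.log(p_hat + 1e-8)).sum()   # cross-entropy proportion loss
        r_adv = vat_perturbation(model, x)                 # Equation 1, approximated
        cons_loss = vat_consistency_loss(model, x, r_adv)
        loss = prop_loss + w * cons_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```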
Learning from label proportions (LLP) is an area of machine learning that tries to learn a classifier that predicts labels of instances, with only bag-level aggregated labels given at the training stage. Instead of proposing a loss specialized for this problem, this paper proposes a regularization term for the LLP problem. The core contribution of this paper is to use the idea of consistency regularization, which has become very popular in semi-supervised learning in recent years. The regularization term takes a perturbation of an input sample and then forces the outputs of the original and perturbed samples to be similar by minimizing a KL divergence between the two output distributions. Experiments show the performance of the proposed method under two bag generation settings. The paper also finds empirically that the hard L1 error has a high correlation with the test error rate, which makes it an ideal candidate when the user splits the validation data from the training data (meaning there are no ground-truth labels for individual instances).
SP:cfbe7ae1f40e2c23a6161d04e3229bc860c79042
Learning from Label Proportions with Consistency Regularization
This paper proposes using Consistency Regularization and a new bag generation technique to better learn classification decision boundaries in a Label Proportion setting. The consistency regularization works to make sure that examples in the local neighbourhood have similar outputs. The authors further use K-means clustering to create a new bagging scenario they use to mimic real-world LLP settings.
SP:cfbe7ae1f40e2c23a6161d04e3229bc860c79042
Adapting Behaviour for Learning Progress
1 INTRODUCTION . Reinforcement learning (RL) is a general formalism modelling sequential decision making. It aspires to be broadly applicable, making minimal assumptions about the task at hand and reducing the need for prior knowledge. By learning behaviour from scratch, it has the potential to surpass human expertise or tackle complex domains where human intuition is not applicable. In practice, however, generality is often traded for performance and efficiency, with RL practitioners tuning algorithms, architectures and hyper-parameters to the task at hand (Hessel et al., 2019). A side-effect of this is that the resulting methods can be brittle, or difficult to reliably reproduce (Nagarajan et al., 2018). Exploration is one of the main aspects commonly designed or tuned specifically for the task being solved. Previous work has shown that large sample-efficiency gains are possible, for example, when the exploratory behaviour's level of stochasticity is adjusted to the environment's hazard rate (García & Fernández, 2015), or when an appropriate prior is used in large action spaces (Dulac-Arnold et al., 2015; Czarnecki et al., 2018; Vinyals et al., 2019). Ideal exploration in the presence of function approximation should be agent-centred. It ought to focus more on generating data that supports the learning of the agent at its current parameters θ, rather than making progress on objective measurements of information gathering. A useful notion here is learning progress (LP), defined as the improvement of the learned policy πθ (Section 3). The agent's source of data is its behaviour policy. Beyond the conventional RL setting of a single stream of experience, distributed agents that interact with parallel copies of the environment can have multiple such data sources (Horgan et al., 2018). In this paper, we restrict ourselves to the setting where all behaviour policies are derived from a single set of learned parameters θ, for example when θ parameterises an action-value function Qθ. Consequently the behaviour policies are given by π(Qθ, z), where each modulation z leads to meaningfully different behaviour. This can be guaranteed if z is semantic (e.g. degree of stochasticity) and consistent across multiple time-steps. The latter is achieved by holding z fixed throughout each episode (Section 2). We propose to estimate a proxy that is indicative of future learning progress, f(z) (Section 3), separately for each modulation z, and to adapt the distribution over modulations to maximize f, using a non-stationary multi-armed bandit that can exploit the factored structure of the modulations (Section 4). Figure 1 shows a diagram of all these components. This results in an autonomous adaptation of behaviour to the agent's stage of learning (Section 5), varying across tasks and across time, and reducing the need for hyper-parameter tuning. 2 MODULATED BEHAVIOUR . As usual in RL, the objective of the agent is to find a policy π that maximises the γ-discounted expected return $G_t := \sum_{i=0}^{\infty} \gamma^i R_{t+i}$, where $R_t$ is the reward obtained during the transition from time $t$ to $t+1$. A common way to address this problem is to use methods that compute the action-value function $Q^\pi$ given by $Q^\pi(s, a) := \mathbb{E}[G_t \mid s, a]$, i.e. the expected return when starting from state $s$ with action $a$ and then following $\pi$ (Puterman, 1994).
A richer representation of $Q^\pi$ that aims to capture more information about the underlying distribution of $G_t$ has been proposed by Bellemare et al. (2017), and extended by Dabney et al. (2018). Instead of approximating only the mean of the return distribution, we approximate a discrete set of $n$ quantile values $q_\nu$ (where $\nu \in \{\frac{1}{2n}, \frac{3}{2n}, \dots, \frac{2n-1}{2n}\}$) such that $P(Q^\pi \le q_\nu) = \nu$. Outside the benefits in performance and representation learning (Such et al., 2018), these quantile estimates provide a way of inducing risk-sensitive behaviour. We approximate all $q_\nu$ using a single deep neural network with parameters θ, and define the evaluation policy as the greedy one with respect to the mean estimate: $\pi_\theta(\cdot \mid s) \in \arg\max_a \frac{1}{n} \sum_\nu q_\nu(s, a)$. The behaviour policy is the central element of exploration: it generates exploratory behaviour (and experience therefrom) which is used to learn $\pi_\theta$; ideally in such a way as to reduce the total amount of experience required to achieve good performance. Instead of a single monolithic behaviour policy, we propose to use a modulated policy to support parameterized variation. Its modulations z should satisfy the following criteria: they need to (i) be impactful, having a direct and meaningful effect on generated behaviour; (ii) have small dimensionality, so as to quickly adapt to the needs of the learning algorithm, and interpretable semantics to ease the choice of viable ranges and initialisation; and (iii) be frugal, in the sense that they are relatively simple and computationally inexpensive to apply. In this work, we consider five concrete types of such modulations. Temperature: a Boltzmann softmax policy based on action-logits, modulated by temperature T. Flat stochasticity: with probability ε the agent ignores the action distribution produced by the softmax, and samples an action uniformly at random (ε-greedy). Per-action biases: action-logit offsets, b, to bias the agent to prefer some actions. Action-repeat probability: with probability ρ, the previous action is repeated (Machado et al., 2017). This produces chains of repeated actions with expected length $\frac{1}{1-\rho}$. Optimism: as the value function is represented by quantiles $q_\nu$, the aggregate estimate $Q_\omega$ can be parameterised by an optimism exponent ω, such that ω = 0 recovers the default flat average, while positive values of ω imply optimism and negative ones pessimism. When near risk-neutral, our simple risk measure produces qualitatively similar transforms to those of Wang (2000). We combine the above modulations to produce the overall z-modulated policy $\pi(a \mid s, z) := (1-\epsilon)(1-\rho)\, \frac{e^{\frac{1}{T}(Q_\omega(s,a) + b_a)}}{\sum_{a' \in \mathcal{A}} e^{\frac{1}{T}(Q_\omega(s,a') + b_{a'})}} + \frac{\epsilon(1-\rho)}{|\mathcal{A}|} + \rho\, \mathbb{I}_{a = a_{t-1}}$, where $z := (T, \epsilon, b, \rho, \omega)$, $\mathbb{I}_x$ is the indicator function, and the optimism-aggregated value is $Q_\omega := \frac{\sum_\nu e^{-\omega\nu} q_\nu}{\sum_\nu e^{-\omega\nu}}$. Now that the behaviour policy can be modulated, the following two sections discuss the criteria and mechanisms for choosing modulations z. 3 EXPLORATION & THE EFFECTIVE ACQUISITION OF INFORMATION . A key component of a successful reinforcement learning algorithm is the ability to acquire experience (information) that allows it to make expeditious progress towards its objective of learning to act in the environment in such a way as to optimise returns over the relevant (potentially discounted) horizon.
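Before turning to how modulations are chosen, the following NumPy sketch spells out the z-modulated policy defined above, including the optimism-weighted aggregation $Q_\omega$ over quantile estimates. The array layout and function names are our own; the formula itself follows the definition of $\pi(a \mid s, z)$ in Section 2.

```python
import numpy as np

def aggregate_optimistic(q, omega):
    """Q_omega(s, a): optimism-weighted aggregation of quantile estimates q[nu_index, action]."""
    n = q.shape[0]
    nu = (2 * np.arange(n) + 1) / (2 * n)          # quantile midpoints 1/2n, 3/2n, ..., (2n-1)/2n
    w = np.exp(-omega * nu)
    return (w[:, None] * q).sum(axis=0) / w.sum()  # omega = 0 recovers the flat average

def modulated_policy(q, prev_action, T=1.0, eps=0.01, b=None, rho=0.0, omega=0.0):
    """pi(a | s, z) for modulation z = (T, eps, b, rho, omega); q has shape (n_quantiles, |A|)."""
    n_actions = q.shape[1]
    b = np.zeros(n_actions) if b is None else b
    logits = (aggregate_optimistic(q, omega) + b) / T
    logits -= logits.max()                          # numerical stability
    softmax = np.exp(logits) / np.exp(logits).sum()
    pi = (1 - eps) * (1 - rho) * softmax + eps * (1 - rho) / n_actions
    pi[prev_action] += rho                          # action-repeat component
    return pi                                       # sums to 1 by construction
```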
The types of experience that most benefit an agent's ultimate performance may differ qualitatively throughout the course of learning — a behaviour modulation that is beneficial in the beginning of training often enough does not carry over to the end, as illustrated by the analysis in Figure 5. However, this analysis was conducted in hindsight, and in general how to generate such experience optimally — optimal exploration in any environment — remains an open problem. One approach is to require exploration to be in service of the agent's future learning progress (LP), and to optimise this quantity during learning. Although there are multiple ways of defining learning progress, in this work we opted for a task-related measure, namely the improvement of the policy in terms of expected return. This choice of measure corresponds to the local steepness of the learning curve of the evaluation policy $\pi_\theta$, $\mathrm{LP}_t(\Delta\theta) := \mathbb{E}_{s_0}\left[V^{\pi_{\theta_t + \Delta\theta}}(s_0) - V^{\pi_{\theta_t}}(s_0)\right]$ (1), where the expectation is over start states $s_0$, the value $V^\pi(s) = \mathbb{E}_\pi\left[\sum_i \gamma^i R_i \mid s_0 = s\right]$ is the γ-discounted return one would expect to obtain when starting in state $s$ and following policy π afterwards, and Δθ is the change in the agent's parameters. Note that this is still a limited criterion, as it is myopic and might be prone to local optima. As prefaced in the last section, our goal here is to define a mechanism that can switch between different behaviour modulations depending on which of them seems most promising at this point in the training process. Thus in order to adapt the distribution over modulations z, we want to assess the expected LP when learning from data generated according to z-modulated behaviour: $\mathrm{LP}_t(z) := \mathbb{E}_{\tau \sim \pi(Q_{\theta_t}, z)}\left[\mathrm{LP}_t(\Delta\theta(\tau, t))\right]$, with Δθ(τ, t) the weight-change of learning from trajectory τ at time t. This is a subjective utility measure, quantifying how useful τ is for a particular learning algorithm at this stage in training. Proxies for learning progress: While LP(z) is a simple and clear progress metric, it is not readily available during training, so that in practice a proxy fitness $f_t(z) \approx \mathrm{LP}_t(z)$ needs to be used. A key practical challenge is to construct $f_t$ from inexpensively measurable proxies, in a way that is sufficiently informative to effectively adapt the distribution over z, while being robust to noise, approximation error, state distribution shift and mismatch between the proxies and learning progress. The ideal choice of f(z) is a matter of empirical study, and this paper only scratches the surface on this topic. After some initial experimentation, we opted for the simple proxy of empirical (undiscounted) episodic return: $f_t(z) = \sum_{a_i \sim \pi(Q_{\theta_t}, z)} R_i$. This is trivial to estimate, but it departs from LP(z) in a number of ways. First, it does not contain learner-subjective information, but this is partly mitigated through the joint use with prioritized replay (see Section 5.1), which over-samples high-error experience. Another potential mechanism by which the episodic return can be indicative of future learning is that an improved policy tends to be preceded by some higher-return episodes — in general, there is a lag between best-seen performance and reliably reproducing it.
Second , the fitness is based on absolute returns not differences in returns as suggested by Equation 1 ; this makes no difference to the relative orderings of z ( and the resulting probabilities induced by the bandit ) , but it has the benefit that the non-stationarity takes a different form : a difference-based metric will appear stationary if the policy performance keeps increasing at a steady rate , but such a policy must be changing significantly to achieve that progress , and therefore the selection mechanism should keep revisiting other modulations . In contrast , our absolute fitness naturally has this effect when paired with a non-stationary bandit , as described in the next section .
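For illustration only, the sketch below tracks the episodic-return fitness proxy $f_t(z)$ for a discrete set of modulations and samples the next z with a softmax over windowed averages. This is a stand-in for the factored non-stationary bandit of Section 4, which is not fully specified in this excerpt; the window size, temperature and selection rule are our own assumptions.

```python
import math
import random
from collections import defaultdict, deque

class ModulationSelector:
    """Illustrative selector over a discrete set of modulations z (hashable tuples).

    NOT the paper's factored non-stationary bandit (Section 4); it only sketches how
    the episodic-return fitness proxy f_t(z) could drive the choice of z, using a
    sliding window so that old episodes are forgotten (non-stationarity)."""

    def __init__(self, modulations, window=20, temperature=1.0):
        self.modulations = list(modulations)
        self.temperature = temperature
        self.returns = defaultdict(lambda: deque(maxlen=window))   # recent f_t(z) per z

    def fitness(self, z):
        hist = self.returns[z]
        return sum(hist) / len(hist) if hist else 0.0

    def sample(self):
        scores = [self.fitness(z) / self.temperature for z in self.modulations]
        m = max(scores)
        weights = [math.exp(s - m) for s in scores]                # softmax over fitness
        return random.choices(self.modulations, weights=weights, k=1)[0]

    def update(self, z, episodic_return):
        # f_t(z): undiscounted return of an episode generated with modulation z
        self.returns[z].append(episodic_return)

# example: a small grid over (temperature T, epsilon, repeat probability rho)
zs = [(T, eps, rho) for T in (0.1, 1.0) for eps in (0.0, 0.01) for rho in (0.0, 0.25)]
selector = ModulationSelector(zs)
z = selector.sample()        # pick a modulation for the next episode
selector.update(z, 12.0)     # report the episode's return as the fitness proxy
```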
This paper studies how to explore in order to generate experience for faster learning of policies in the context of RL. RL methods typically employ simple hand-tuned exploration schedules (such as epsilon-greedy exploration, with the epsilon changed as training proceeds). This paper proposes a scheme for learning this schedule. It does so by modeling the problem as a non-stationary multi-armed bandit. Different exploration settings (a tuple of the choice of exploration mechanism and its exact hyper-parameter) are treated as different arms of a non-stationary multi-armed bandit (while also employing some factorization), and expected returns are maintained over training. An arm (exploration strategy and hyper-parameter) is picked according to the return. The paper demonstrates results on the Atari suite of RL benchmarks and shows that the proposed search leads to faster learning.
SP:54eb8cf5375f436952059b8e6890a0550b98fb52
Adapting Behaviour for Learning Progress
This paper develops a multi-arm bandit-based algorithm to dynamically adapt the exploration policy for reinforcement learning. The arms of the bandit are parameters of the policy such as exploration noise, per-action biases etc. A proxy fitness metric is defined that measures the return of the trajectories upon perturbations of the policy z; the bandit then samples perturbations z that are better than the average fitness of the past few perturbations.
SP:54eb8cf5375f436952059b8e6890a0550b98fb52
How to 0wn the NAS in Your Spare Time
1 INTRODUCTION . To continue outperforming state-of-the-art results, research in deep learning (DL) has shifted from manually engineering features to engineering DL systems, including novel data pre-processing pipelines (Raff et al., 2018; Wang et al., 2019) and novel neural architectures (Cai et al., 2019; Zoph et al., 2018). For example, a recent malware detection system, MalConv, with a manually designed pipeline that combines embeddings and convolutions, achieves a 6% better detection rate than the previous state-of-the-art technique without pre-processing (Raff et al., 2018). In addition to designing data pre-processing pipelines, other research efforts focus on neural architecture search (NAS) — a method to automatically generate novel architectures that are faster, more accurate and more compact. For instance, the recent work of ProxylessNAS (Cai et al., 2019) can generate a novel architecture with a 10% lower error rate and 5x fewer parameters than the previous state-of-the-art generic architecture. As a result, in industry such novel DL systems are kept as trade secrets or intellectual property as they give their owners a competitive edge (Christian & Vanhoucke, 2017). These novel DL systems are usually costly to obtain: generating the NASNet architectures (Zoph et al., 2018) takes almost 40K GPU hours, and the MalConv authors had to test a large number of failed designs in the process of finding a successful architecture. As a result, an adversary who wishes to have the benefits of such DL systems without incurring the costs has an incentive to steal them. Compared to stealing a trained model (including all the weights), stealing the architectural details that make the victim DL system novel provides the benefit that the new architectures and pipelines are usually applicable to multiple tasks. (This work was done when Michael Davinroy was a research intern at the Maryland Cybersecurity Center.) Training new DL systems based on these stolen details still provides the benefits, even when the training data is different. After obtaining these details, an attacker can train a functioning model, even on a different data set, and still benefit from the stolen DL system (So et al., 2019; Wang et al., 2019). Further, against a novel system, stealing its architectural details increases the reliability of black-box poisoning and evasion attacks (Demontis et al., 2019). Moreover, stealing leads to threats such as Camouflage attacks (Xiao et al., 2019) that trigger misclassifications by exploiting the image scaling algorithms that are common in DNN pre-processing pipelines. The emerging Machine-Learning-as-a-Service (MLaaS) model that offers DL computation tools in the cloud makes remote hardware side-channel attacks a practical vector for stealing DL systems (Liu et al., 2015). Unlike prior stealing attacks, these attacks do not require physical proximity to the hardware that runs the system (Batina et al., 2019; Hua et al., 2018) or direct query access to train an approximate model (Tramèr et al., 2016). Cache side-channel attacks have especially been shown to be practical in cloud computing for stealing sensitive information, such as cryptographic keys (Liu et al., 2015). Cache side-channel attacks are ubiquitous and difficult to defeat as they are inherent to the microarchitectural design of modern CPUs (Werner et al., 2019).
In this paper, considering the incentives to steal a novel DL system and the applicability of cache side-channel attacks in modern DL settings, we design a practical attack to steal novel DL systems by leveraging only the cache side-channel leakage. Simulating a common cloud computing scenario, our attacker has a co-located VM on the same host machine as the victim DL system, and shares the last-level cache with the victim (Liu et al., 2015). As a result, even though the VMs are running on separate processor cores, the attacker can monitor the cache accesses a DL framework — PyTorch or TensorFlow — makes while the victim system is running (Liu et al., 2015). The first step of our attack is launching a cache side-channel attack, Flush+Reload (Yarom & Falkner, 2014), to extract a single trace of the victim's function calls (Section 3). This trace corresponds to the execution of specific network operations a DL framework performs, e.g., convolutions or batch-normalizations, while processing an input sample. However, the trace has little information about the computational graph, e.g., the layers, branches or skip connections, or the architectural parameters, e.g., the number of filters in a convolutional layer. The limited prior work on side-channel attacks against DL systems assumed knowledge of the architecture family of the victim DNN (Yan et al., 2018; Duddu et al., 2018); therefore, these attacks are only able to extract variants of generic architectures, such as VGG (Simonyan & Zisserman, 2015) or ResNet (He et al., 2016). To overcome this challenge, we also extract the approximate time each DL operation takes, in addition to the trace, and we leverage this information to estimate the architectural parameters. This enables us to develop a reconstruction algorithm that generates a set of candidate graphs given the trace and eliminates the incompatible candidates given the parameters (Section 4). We apply our technique to two exemplar DL systems: the MalConv data pre-processing pipeline and a novel neural architecture produced by ProxylessNAS. Contributions. We design an algorithm that reconstructs novel DL systems only by extracting the cache side-channel information that leaks DL computations, using the Flush+Reload attack. We show that Flush+Reload reliably extracts the trace of computations and exposes the time each computational step takes in a practical cloud scenario. Using the extracted information, our reconstruction algorithm estimates the computational graph and the architectural parameters. We demonstrate that our attacker can reconstruct a novel network architecture found by a NAS process (ProxylessNAS) and a novel manually designed data pre-processing pipeline (MalConv) with no reconstruction error. We demonstrate the threat of practical stealing attacks against DL by showing that the vulnerability is shared across common DL frameworks, PyTorch and TensorFlow. 2 BACKGROUND . Here, we discuss prior efforts in both crafting and stealing network architectures. There is a growing interest in crafting novel DL systems as they significantly outperform their generic counterparts. The immense effort and computational costs of crafting them, however, motivate adversaries to steal them. Effort to Design Deep Learning Systems. Creating deep learning systems traditionally takes the form of human design through expert knowledge and experience.
Some problems require novel designs to manipulate the input in a domain-specific way that DNNs can process more effectively. For example, the MalConv malware detection system (Raff et al., 2018) uses a manually designed pre-processing pipeline that can digest raw executable files as a whole. Pseudo-LIDAR (Wang et al., 2019), by pre-processing the output of a simple camera sensor into a LIDAR-like representation, achieves four times better object detection accuracy than the previous state-of-the-art technique. Moreover, recent work also focuses on automatically generating optimal architectures via neural architecture search (NAS). For example, reinforcement learning (Zoph & Le, 2016) or gradient-based approaches (Cai et al., 2019) have been proposed for learning to generate optimal architectures. Even though NAS procedures have been shown to produce more accurate, more compact and faster neural networks, the computational cost of the search can be an order of magnitude higher than training a generic architecture (Zoph et al., 2018). Effort to Steal Deep Learning Systems. Prior work on stealing DNN systems focuses on two main threat models, based on whether the attacker has physical access to the victim's hardware. Physical access attacks have been proposed against hardware accelerators, and they rely on precise timing measurements (Hua et al., 2018) or electromagnetic emanations (Batina et al., 2019). These attacks are not applicable in the cloud setting we consider. The remote attacks that are applicable in the cloud setting, on the other hand, have the limitation of requiring precise measurements that are impractical in the cloud (Duddu et al., 2018). Further, the attack without this limitation (Hong et al., 2018) requires the attacker to know the family the target architecture comes from; thus, it cannot steal novel architectures. In our work, we design an attack to reconstruct novel DL systems by utilizing a practical cache side-channel attack in the cloud setting. 3 EXTRACTING THE SEQUENCE OF COMPUTATIONS VIA FLUSH+RELOAD . 3.1 THREAT MODEL . We consider an attacker who aims to steal the key components in a novel DL system, i.e., a novel pre-processing pipeline or a novel network architecture. We first launch a Flush+Reload (Yarom & Falkner, 2014) attack to extract cache side-channel information leaked by the DL computation. Our target setting is a cloud environment, where the victim's DL system is deployed inside a VM — or a container — to serve the requests of external users. Flush+Reload, in this setting, is known to be a weak and practical side-channel attack (Liu et al., 2015). Further, as in MLaaS products in the cloud, the victim uses popular open-source DL frameworks, such as PyTorch (Benoit Steiner, 2019) or TensorFlow (Abadi et al., 2016). Capabilities. We consider an attacker that owns a co-located VM — or a container — on the same physical host machine as the victim's system. Prior work has shown that spinning up the co-located VM in third-party cloud computing services does not require sophisticated techniques (Ristenpart et al., 2009; Zhang et al., 2011; Bates et al., 2012; Kohno et al., 2005; Varadarajan et al., 2015). Due to the co-location, the last-level cache (L3 cache) in the physical host is shared between the multiple cores where the attacker's and victim's processes run; thus, our attacker can monitor the victim's computations leaked at the L3 cache.
We also note that, even if the victim uses GPUs, our attacker can still observe the same computations used for CPUs via cache side-channels (see Appendix A). Knowledge. We assume that our attacker and the victim use the same version of the same open-source DL framework. This is realistic in MLaaS scenarios such as AWS SageMaker or Google Cloud's AutoML, as cloud providers recommend that practitioners use these common frameworks to construct their systems. These common practices also allow our attacker to reverse-engineer the frameworks offline and identify the lines of code to monitor with the Flush+Reload technique. For example, AWS provides convenient deployment options for both PyTorch and TensorFlow: https://docs.aws.amazon.com/sagemaker/latest/dg/pytorch.html and https://docs.aws.amazon.com/sagemaker/latest/dg/tf.html. 3.2 FLUSH+RELOAD MECHANISM . Flush+Reload allows an adversary to continually monitor the victim's instruction access patterns by observing the time taken to load them from memory. This technique is effective for extracting the computation flow of the victim's program when the attacker and victim share memory (i.e., a shared library or page deduplication (Bosman et al., 2016)). The attacker flushes specific lines of code in a shared DL framework from the co-located machine's cache hierarchy and then measures the amount of time it takes to reload the lines of code. If the victim invokes the monitored line of code, the instruction will be reloaded into the shared cache, and when the attacker reloads the instruction, the access to it will be noticeably faster. On the other hand, if the victim does not call the monitored line of code, the access to it will be slower because the instruction needs to be loaded from main memory (DRAM). By repeating this process, our attacker can tell when a victim has accessed a line of code.
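As a rough illustration of what the attacker does with the Flush+Reload measurements described above, the Python sketch below collapses a time-ordered list of cache hits on monitored framework functions into a sequence of (operation, duration) pairs — the raw material for the reconstruction step. The gap threshold, the trace format and the operation names are hypothetical; the actual probing is done natively against the shared framework code, not in Python.

```python
def collapse_trace(hits, gap_threshold_us=50.0):
    """Collapse raw Flush+Reload hits into a sequence of (op, duration) pairs.

    `hits` is a time-ordered list of (timestamp_us, op) pairs, one per cache hit on a
    monitored line of framework code (e.g. 'conv2d', 'batch_norm'). Consecutive hits on
    the same op closer than `gap_threshold_us` are treated as one invocation. Purely an
    illustrative post-processing step, not the paper's exact pipeline."""
    ops = []
    for t, op in hits:
        if ops and ops[-1][0] == op and t - ops[-1][2] < gap_threshold_us:
            ops[-1][2] = t                      # extend the current invocation
        else:
            ops.append([op, t, t])              # new invocation: op, start, last-seen
    return [(op, last - start) for op, start, last in ops]

# toy trace: two convolutions followed by a batch normalisation
trace = [(0.0, "conv2d"), (10.0, "conv2d"), (20.0, "conv2d"),
         (400.0, "conv2d"), (410.0, "conv2d"),
         (800.0, "batch_norm"), (805.0, "batch_norm")]
print(collapse_trace(trace))
# [('conv2d', 20.0), ('conv2d', 10.0), ('batch_norm', 5.0)]
```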
This paper proposes a way to attack and reconstruct a victim's neural architecture when the attacker is co-located on the same host. The attack works through cache side-channel leakage: Flush+Reload is used to extract a trace of the victim's function calls, which reveals the specific network operations being executed. To recover the computational graph, the authors use the approximate time each operation takes to prune out incompatible candidate computation graphs. They show that they can exactly reconstruct MalConv and ProxylessNAS.
SP:a8cb23a70671d54f8784ac023bbecbcbd0bffcfa
How to 0wn the NAS in Your Spare Time
1 INTRODUCTION . To continue outperforming state-of-the-art results , research in deep learning ( DL ) has shifted from manually engineering features to engineering DL systems , including novel data pre-processing pipelines ( Raff et al. , 2018 ; Wang et al. , 2019 ) and novel neural architectures ( Cai et al. , 2019 ; Zoph et al. , 2018 ) . For example , a recent malware detection system , MalConv , with a manually designed pipeline that combines embeddings and convolutions , achieves a 6 % better detection rate than the previous state-of-the-art technique without pre-processing ( Raff et al. , 2018 ) . In addition to designing data pre-processing pipelines , other research efforts focus on neural architecture search ( NAS ) —a method to automatically generate novel architectures that are faster , more accurate and more compact . For instance , the recent work of ProxylessNAS ( Cai et al. , 2019 ) can generate a novel architecture with a 10 % lower error rate and 5x fewer parameters than the previous state-of-the-art generic architecture . As a result , in industry such novel DL systems are kept as trade secrets or intellectual property as they give their owners a competitive edge ( Christian & Vanhoucke , 2017 ) . These novel DL systems are usually costly to obtain : generating the NASNet architectures ( Zoph et al. , 2018 ) takes almost 40K GPU hours and the MalConv authors had to test a large number of failed designs in the process of finding a successful architecture . As a result , an adversary who wishes to have the benefits of such DL systems without incurring the costs has an incentive to steal them . Compared to stealing a trained model ( including all the weights ) , stealing the architectural details that make the victim DL system novel provides the benefit that the new architectures and pipelines are usually applicable to multiple tasks . Training new DL systems based on these stolen details still provides the benefits , even when the training data is different . After obtaining these details , an attacker can train a functioning model , even on a different data set , and still benefit from the stolen DL system ( So et al. , 2019 ; Wang et al. , 2019 ) . Further , against a novel system , stealing its architectural details increases the reliability of black-box poisoning and evasion attacks ( Demontis et al. , 2019 ) . Moreover , stealing leads to threats such as Camouflage attacks ( Xiao et al. , 2019 ) that trigger misclassifications by exploiting the image scaling algorithms that are common in DNN pre-processing pipelines . The emerging Machine-Learning-as-a-Service ( MLaaS ) model that offers DL computation tools in the cloud makes remote hardware side-channel attacks a practical vector for stealing DL systems ( Liu et al. , 2015 ) . Unlike prior stealing attacks , these attacks do not require physical proximity to the hardware that runs the system ( Batina et al. , 2019 ; Hua et al. , 2018 ) or direct query access to train an approximate model ( Tramèr et al. , 2016 ) . Cache side-channel attacks have especially been shown to be practical in cloud computing for stealing sensitive information , such as cryptographic keys ( Liu et al. , 2015 ) . Cache side-channel attacks are ubiquitous and difficult to defeat as they are inherent to the microarchitectural design of modern CPUs ( Werner et al. , 2019 ) . ( This work was done when Michael Davinroy was a research intern at the Maryland Cybersecurity Center . )
In this paper , considering the incentives to steal a novel DL system and applicability of cache sidechannel attacks in modern DL settings , we design a practical attack to steal novel DL systems by leveraging only the cache side-channel leakage . Simulating a common cloud computing scenario , our attacker has a co-located VM on the same host machine as the victim DL system , and shares the last-level cache with the victim ( Liu et al. , 2015 ) . As a result , even though the VMs are running on separate processor cores , the attacker can monitor the cache accesses a DL framework—PyTorch or TensorFlow—makes while the victim system is running ( Liu et al. , 2015 ) . The first step of our attack is launching a cache side-channel attack , Flush+Reload ( Yarom & Falkner , 2014 ) , to extract a single trace of victim ’ s function calls ( Section 3 ) . This trace corresponds to the execution of specific network operations a DL framework performs , e.g. , convolutions or batch-normalizations , while processing an input sample . However , the trace has little information about the computational graph , e.g. , the layers , branches or skip connections , or the architectural parameters , e.g. , the number of filters in a convolutional layer . The limited prior work on side-channel attacks against DL systems assumed knowledge of the architecture family of the victim DNN ( Yan et al. , 2018 ; Duddu et al. , 2018 ) ; therefore , these attacks are only able to extract variants of generic architectures , such as VGG ( Simonyan & Zisserman , 2015 ) or ResNet ( He et al. , 2016 ) . To overcome this challenge , we also extract the approximate time each DL operation takes , in addition to the trace , and we leverage this information to estimate the architectural parameters . This enables us to develop a reconstruction algorithm that generates a set of candidate graphs given the trace and eliminates the incompatible candidates given the parameters ( Section 4 ) . We apply our technique to two exemplar DL systems : the MalConv data pre-processing pipeline and a novel neural architecture produced by ProxylessNAS . Contributions . We design an algorithm that reconstructs novel DL systems only by extracting cache side-channel information , that leaks DL computations , using Flush+Reload attack . We show that Flush+Reload reliably extracts the trace of computations and exposes the time each computational step takes in a practical cloud scenario . Using the extracted information , our reconstruction algorithm estimates the computational graph and the architectural parameters . We demonstrate that our attacker can reconstruct a novel network architecture found from NAS process ( ProxylessNAS ) and a novel manually designed data pre-processing pipeline ( MalConv ) with no reconstruction error . We demonstrate the threat of practical stealing attacks against DL by exposing that the vulnerability is shared across common DL frameworks , PyTorch and TensorFlow . 2 BACKGROUND . Here , we discuss prior efforts in both crafting and stealing network architectures . There is a growing interest in crafting novel DL systems as they significantly outperform their generic counterparts . The immense effort and computational costs of crafting them , however , motivates the adversaries to steal them . Effort to Design Deep Learning Systems . Creating deep learning systems traditionally takes the form of human design through expert knowledge and experience . 
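The reconstruction algorithm outlined above (generate candidate computational graphs from the extracted trace, then eliminate candidates that are inconsistent with the estimated parameters) can be illustrated with a minimal Python sketch. The toy cost model, tolerance and parameter names below are assumptions for illustration only, not the paper's actual calibration procedure.

def predicted_time(op_name, params):
    # Toy cost model: running time grows with the amount of computation in the op.
    if op_name == "conv2d":
        return params["in_ch"] * params["out_ch"] * params["kernel"] ** 2 * 1e-6
    if op_name == "linear":
        return params["in_features"] * params["out_features"] * 1e-6
    return 1e-5  # cheap element-wise ops (relu, batch_norm, ...)

def is_compatible(candidate, measured_times, tol=0.3):
    # candidate: list of (op_name, params); measured_times: seconds per observed op.
    if len(candidate) != len(measured_times):
        return False
    return all(abs(predicted_time(name, p) - t) <= tol * t
               for (name, p), t in zip(candidate, measured_times))

def prune_candidates(candidates, measured_times):
    # Keep only architectures whose timing profile matches the side-channel trace.
    return [g for g in candidates if is_compatible(g, measured_times)]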
Some problems require novel designs to manipulate the input in a domain-specific way that DNNs can process more effectively . For example , the MalConv malware detection system ( Raff et al. , 2018 ) uses a manually designed preprocessing pipeline that can digest raw executable files as a whole . Pseudo LIDAR ( Wang et al. , 2019 ) , by pre-processing the output of a simple camera sensor into a LIDAR-like representation , achieves four times better object detection accuracy than the previous state-of-the-art technique . Moreover , recent work also focuses on automatically generating optimal architectures via neural architecture search ( NAS ) . For example , reinforcement learning ( Zoph & Le , 2016 ) or gradient-based approaches ( Cai et al. , 2019 ) have been proposed for learning to generate optimal architectures . Even though NAS procedures have been shown to produce more accurate , more compact and faster neural networks , the computational cost of the search can be an order of magnitude higher than training a generic architecture ( Zoph et al. , 2018 ) . Effort to Steal Deep Learning Systems . Prior work on stealing DNN systems focuses on two main threat models , based on whether the attacker has physical access to the victim ’ s hardware . Physical access attacks have been proposed against hardware accelerators , and they rely on precise timing measurements ( Hua et al. , 2018 ) or electromagnetic emanations ( Batina et al. , 2019 ) . These attacks are not applicable in the cloud setting we consider . The remote attacks that are applicable in the cloud setting , on the other hand , have the limitation of requiring precise measurements that are impractical in the cloud ( Duddu et al. , 2018 ) . Further , the attack without this limitation ( Hong et al. , 2018 ) requires the attacker to know the family the target architecture comes from ; thus , it cannot steal novel architectures . In our work , we design an attack to reconstruct novel DL systems by utilizing a practical cache side-channel attack in the cloud setting . 3 EXTRACTING THE SEQUENCE OF COMPUTATIONS VIA FLUSH+RELOAD . 3.1 THREAT MODEL . We consider an attacker who aims to steal the key components in a novel DL system , i.e. , a novel pre-processing pipeline or a novel network architecture . We first launch a Flush+Reload ( Yarom & Falkner , 2014 ) attack to extract cache side-channel information leaked by DL computation . Our target setting is a cloud environment , where the victim ’ s DL system is deployed inside a VM—or a container—to serve the requests of external users . Flush+Reload , in this setting , is known to be a weak , and practical , side-channel attack ( Liu et al. , 2015 ) . Further , as in MLaaS products in the cloud , the victim uses popular open-source DL frameworks , such as PyTorch ( Benoit Steiner , 2019 ) or TensorFlow ( Abadi et al. , 2016 ) . Capabilities . We consider an attacker that owns a co-located VM—or a container—on the same physical host machine as the victim ’ s system . Prior work has shown that spinning up a co-located VM in third-party cloud computing services does not require sophisticated techniques ( Ristenpart et al. , 2009 ; Zhang et al. , 2011 ; Bates et al. , 2012 ; Kohno et al. , 2005 ; Varadarajan et al. , 2015 ) . Due to the co-location , the last-level cache ( L3 cache ) in the physical host is shared among the cores that run the attacker ’ s and the victim ’ s processes ; thus , our attacker can monitor the victim ’ s computations leaked at the L3 cache .
We also note that , even if the victim uses GPUs , our attacker can still observe the same computations used for CPUs via cache side-channels ( see Appendix A ) . Knowledge . We assume that our attacker and the victim use the same version of the same open-source DL framework . This is realistic in MLaaS scenarios such as AWS SageMaker or Google Cloud ’ s AutoML , as cloud providers recommend that practitioners use the common frameworks to construct their systems . These common practices also allow our attacker to reverse-engineer the frameworks offline and identify the lines of code to monitor with the Flush+Reload technique . For example , AWS provides convenient deployment options for both PyTorch and TensorFlow : https://docs.aws.amazon.com/sagemaker/latest/dg/pytorch.html and https://docs.aws.amazon.com/sagemaker/latest/dg/tf.html . 3.2 FLUSH+RELOAD MECHANISM . Flush+Reload allows an adversary to continually monitor the victim ’ s instruction access patterns by observing the time taken to load them from memory . This technique is effective for extracting the computation flow of the victim ’ s program when the attacker and the victim share memory ( i.e. , a shared library or page deduplication ( Bosman et al. , 2016 ) ) . The attacker flushes specific lines of code in a shared DL framework from the co-located machine ’ s cache hierarchy and then measures the amount of time it takes to reload those lines of code . If the victim invokes the monitored line of code , the instruction will be reloaded into the shared cache , and when the attacker reloads the instruction , the access to it will be noticeably faster . On the other hand , if the victim does not call the monitored line of code , the access will be slower because the instruction needs to be loaded from main memory ( DRAM ) . By repeating this process , our attacker can tell when the victim has accessed a line of code .
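As a concrete illustration of how the extracted accesses become a sequence of computations, the sketch below (Python) maps monitored code lines to the DL operations they implement and groups nearby hits into single invocations. The addresses, operation names and gap threshold are hypothetical; in the actual attack, the monitored lines are real PyTorch or TensorFlow source locations identified by reverse-engineering the framework offline.

# Hypothetical mapping from monitored code lines in the shared framework to the
# network operation each line implements (addresses are made up).
MONITORED_LINES = {
    0x4C21A0: "conv2d",
    0x4D88F4: "batch_norm",
    0x4E10B2: "relu",
    0x4F3C60: "linear",
}

def decode_ops(raw_hits, min_gap_us=50):
    # raw_hits: (timestamp_us, address) pairs produced by the Flush+Reload probe.
    # Hits on the same line closer together than min_gap_us are treated as one
    # invocation of that operation.
    ops, last_addr, last_ts = [], None, float("-inf")
    for ts, addr in sorted(raw_hits):
        op = MONITORED_LINES.get(addr)
        if op is None:
            continue
        if addr != last_addr or ts - last_ts > min_gap_us:
            ops.append((ts, op))
        last_addr, last_ts = addr, ts
    return ops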
This work proposes a method to reconstruct machine learning pipelines and network architectures using a cache side-channel attack. It builds on the previously proposed Flush+Reload technique, which generates a raw trace of function calls. The authors then apply several techniques to rebuild the computational graph from the raw traces. The proposed method is used to reconstruct MalConv, which is a data pre-processing pipeline for malware detection, and ProxylessNAS, which is a network architecture obtained by NAS.
SP:a8cb23a70671d54f8784ac023bbecbcbd0bffcfa
Decoupling Representation and Classifier for Long-Tailed Recognition
1 INTRODUCTION . Visual recognition research has made rapid advances during the past years , driven primarily by the use of deep convolutional neural networks ( CNNs ) and large image datasets , most importantly the ImageNet Challenge ( Russakovsky et al. , 2015 ) . Such datasets are usually artificially balanced with respect to the number of instances for each object/class in the training set . Visual phenomena , however , follow a long-tailed distribution that many standard approaches fail to properly model , leading to a significant drop in accuracy . Motivated by this , a number of works have recently emerged that try to study long-tailed recognition , i.e. , recognition in a setting where the number of instances in each class highly varies and follows a long-tailed distribution . When learning with long-tailed data , a common challenge is that instance-rich ( or head ) classes dominate the training procedure . The learned classification model tends to perform better on these classes , while performance is significantly worse for instance-scarce ( or tail ) classes . To address this issue and to improve performance across all classes , one can re-sample the data or design specific loss functions that better facilitate learning with imbalanced data ( Chawla et al. , 2002 ; Cui et al. , 2019 ; Cao et al. , 2019 ) . Another direction is to enhance recognition performance of the tail classes by transferring knowledge from the head classes ( Wang et al. , 2017 ; 2018 ; Zhong et al. , 2019 ; Liu et al. , 2019 ) . Nevertheless , the common belief behind existing approaches is that designing proper sampling strategies , losses , or even more complex models , is useful for learning high-quality representations for long-tailed recognition . Most aforementioned approaches thus learn the classifiers used for recognition jointly with the data representations . However , such a joint learning scheme makes it unclear how the long-tailed recognition ability is achieved—is it from learning a better representation or by handling the data imbalance better via shifting classifier decision boundaries ? To answer this question , we take one step back and decouple long-tail recognition into representation learning and classification . For learning rep- resentations , the model is exposed to the training instances and trained through different sampling strategies or losses . For classification , upon the learned representations , the model recognizes the long-tailed classes through various classifiers . We evaluate the performance of various sampling and classifier training strategies for long-tailed recognition under both joint and decoupled learning schemes . Specifically , we first train models to learn representations with different sampling strategies , including the standard instance-based sampling , class-balanced sampling and a mixture of them . Next , we study three different basic approaches to obtain a classifier with balanced decision boundaries , on top of the learned representations . They are 1 ) re-training the parametric linear classifier in a class-balancing manner ( i.e. , re-sampling ) ; 2 ) non-parametric nearest class mean classifier , which classifies the data based on their closest class-specific mean representations from the training set ; and 3 ) normalizing the classifier weights , which adjusts the weight magnitude directly to be more balanced , adding a temperature to modulate the normalization procedure . 
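As a rough PyTorch sketch of the third option above, the snippet below rescales each class's classifier weight vector by its norm raised to a temperature tau, so that tau = 1 corresponds to full L2 normalization and tau = 0 leaves the jointly learned classifier unchanged. The variable names and the example value of tau are illustrative assumptions, not the paper's released configuration.

import torch

def tau_normalize(weight, tau=1.0, eps=1e-12):
    # weight: [num_classes, feat_dim] linear classifier learned jointly with the
    # representation. Rescale each row by its L2 norm raised to the power tau.
    norms = weight.norm(p=2, dim=1, keepdim=True)
    return weight / (norms ** tau + eps)

# Usage sketch: adjust a jointly trained classifier post hoc, with no retraining.
classifier = torch.nn.Linear(2048, 1000, bias=False)  # e.g. on ResNeXt features
with torch.no_grad():
    classifier.weight.copy_(tau_normalize(classifier.weight, tau=0.9))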
We conduct extensive experiments to compare the aforementioned instantiations of the decoupled learning scheme with the conventional scheme that jointly trains the classifier and the representations . We also compare to recent , carefully designed and more complex models , including approaches using memory ( e.g. , OLTR ( Liu et al. , 2019 ) ) as well as more sophisticated losses ( Cui et al. , 2019 ) . From our extensive study across three long-tail datasets , ImageNet-LT , Places-LT and iNaturalist , we make the following intriguing observations : • We find that decoupling representation learning and classification has surprising results that challenge common beliefs for long-tailed recognition : instance-balanced sampling learns the best and most generalizable representations . • It is advantageous in long-tailed recognition to re-adjust the decision boundaries specified by the jointly learned classifier during representation learning : Our experiments show that this can either be achieved by retraining the classifier with class-balanced sampling or by a simple , yet effective , classifier weight normalization which has only a single hyperparameter controlling the “ temperature ” and which does not require additional training . • By applying the decoupled learning scheme to standard networks ( e.g. , ResNeXt ) , we achieve significantly higher accuracy than well established state-of-the-art methods ( different sampling strategies , new loss designs and other complex modules ) on multiple longtailed recognition benchmark datasets , including ImageNet-LT , Places-LT , and iNaturalist . 2 RELATED WORK . Long-tailed recognition has attracted increasing attention due to the prevalence of imbalanced data in real-world applications ( Wang et al. , 2017 ; Zhou et al. , 2017 ; Mahajan et al. , 2018 ; Zhong et al. , 2019 ; Gupta et al. , 2019 ) . Recent studies have mainly pursued the following three directions : Data distribution re-balancing . Along this direction , researchers have proposed to re-sample the dataset to achieve a more balanced data distribution . These methods include over-sampling ( Chawla et al. , 2002 ; Han et al. , 2005 ) for the minority classes ( by adding copies of data ) , undersampling ( Drummond et al. , 2003 ) for the majority classes ( by removing data ) , and class-balanced sampling ( Shen et al. , 2016 ; Mahajan et al. , 2018 ) based on the number of samples for each class . Class-balanced Losses . Various methods are proposed to assign different losses to different training samples for each class . The loss can vary at class-level for matching a given data distribution and improving the generalization of tail classes ( Cui et al. , 2019 ; Khan et al. , 2017 ; Cao et al. , 2019 ; Khan et al. , 2019 ; Huang et al. , 2019 ) . A more fine-grained control of the loss can also be achieved at sample level , e.g . with Focal loss ( Lin et al. , 2017 ) , Meta-Weight-Net ( Shu et al. , 2019 ) , re-weighted training ( Ren et al. , 2018 ) , or based on Bayesian uncertainty ( Khan et al. , 2019 ) . Recently , Hayat et al . ( 2019 ) proposed to balance the classification regions of head and tail classes using an affinity measure to enforce cluster centers of classes to be uniformly spaced and equidistant . Transfer learning from head- to tail classes . Transfer-learning based methods address the issue of imbalanced training data by transferring features learned from head classes with abundant training instances to under-represented tail classes . 
Recent work includes transferring the intra-class variance ( Yin et al. , 2019 ) and transferring semantic deep features ( Liu et al. , 2019 ) . However , it is usually a non-trivial task to design specific modules ( e.g . external memory ) for feature transfer . A benchmark for low-shot recognition was proposed by Hariharan & Girshick ( 2017 ) and consists of a representation learning phase without access to the low-shot classes and a subsequent low-shot learning phase . In contrast , the setup for long-tail recognition assumes access to both head and tail classes and a more continuous decrease in class labels . Recently , Liu et al . ( 2019 ) and Cao et al . ( 2019 ) adopt re-balancing schedules that learn the representation and classifier jointly within a two-stage training scheme . OLTR ( Liu et al. , 2019 ) uses instance-balanced sampling to first learn representations that are fine-tuned in a second stage with class-balanced sampling together with a memory module . LDAM ( Cao et al. , 2019 ) introduces a label-distribution-aware margin loss that expands the decision boundaries of few-shot classes . In Section 5 we exhaustively compare to OLTR and LDAM , since they report state-of-the-art results for the ImageNet-LT , Places-LT and iNaturalist datasets . In our work , we argue for decoupling representation and classification . We demonstrate that in a long-tailed scenario , this separation allows straightforward approaches to achieve high recognition performance , without the need for designing sampling strategies , balance-aware losses or adding memory modules . 3 LEARNING REPRESENTATIONS FOR LONG-TAILED RECOGNITION . For long-tailed recognition , the training set follows a long-tailed distribution over the classes . As we have less data about infrequent classes during training , the models trained using imbalanced datasets tend to exhibit under-fitting on the few-shot classes . But in practice we are interested in obtaining a model capable of recognizing all classes well . Various re-sampling strategies ( Chawla et al. , 2002 ; Shen et al. , 2016 ; Cao et al. , 2019 ) , loss reweighting and margin regularization over few-shot classes have thus been proposed . However , it remains unclear how they achieve performance improvement , if any , for long-tailed recognition . Here we systematically investigate their effectiveness by disentangling representation learning from classifier learning , in order to identify what indeed matters for long-tailed recognition . Notation . We define the notation used throughout the paper . Let X = { x_i , y_i } , i ∈ { 1 , . . . , n } be a training set , where y_i is the label for data point x_i . Let n_j denote the number of training samples for class j , and let n = ∑_{j=1}^{C} n_j be the total number of training samples . Without loss of generality , we assume that the classes are sorted by cardinality in decreasing order , i.e. , if i < j , then n_i ≥ n_j . Additionally , since we are in a long-tail setting , n_1 ≫ n_C . Finally , we denote with f ( x ; θ ) = z the representation for x , where f ( x ; θ ) is implemented by a deep CNN model with parameter θ . The final class prediction ỹ is given by a classifier function g , such that ỹ = arg max g ( z ) . For the common case , g is a linear classifier , i.e. , g ( z ) = W^⊤ z + b , where W denotes the classifier weight matrix , and b is the bias . We present other instantiations of g in Section 4 . Sampling strategies .
In this section we present a number of sampling strategies that aim at rebalancing the data distribution for representation and classifier learning . For most sampling strategies presented below , the probability p_j of sampling a data point from class j is given by : p_j = n_j^q / ∑_{i=1}^{C} n_i^q , ( 1 ) where q ∈ [ 0 , 1 ] and C is the number of training classes . Different sampling strategies arise for different values of q and below we present strategies that correspond to q = 1 , q = 0 , and q = 1/2 . Instance-balanced sampling . This is the most common way of sampling data , where each training example has an equal probability of being selected . For instance-balanced sampling , the probability p_j^{IB} is given by Equation 1 with q = 1 , i.e. , a data point from class j will be sampled proportionally to the cardinality n_j of the class in the training set . Class-balanced sampling . For imbalanced datasets , instance-balanced sampling has been shown to be sub-optimal ( Huang et al. , 2016 ; Wang et al. , 2017 ) as the model under-fits for few-shot classes , leading to lower accuracy , especially for balanced test sets . Class-balanced sampling has been used to alleviate this discrepancy , as , in this case , each class has an equal probability of being selected . The probability p_j^{CB} is given by Eq . ( 1 ) with q = 0 , i.e. , p_j^{CB} = 1/C . One can see this as a two-stage sampling strategy , where first a class is selected uniformly from the set of classes , and then an instance from that class is subsequently uniformly sampled . Square-root sampling . A number of variants of the previous sampling strategies have been explored . A commonly used variant is square-root sampling ( Mikolov et al. , 2013 ; Mahajan et al. , 2018 ) , where q is set to 1/2 in Eq . ( 1 ) above . Progressively-balanced sampling . Recent approaches ( Cui et al. , 2018 ; Cao et al. , 2019 ) utilized mixed ways of sampling , i.e. , combinations of the sampling strategies presented above . In practice this involves first using instance-balanced sampling for a number of epochs , and then class-balanced sampling for the last epochs . These mixed sampling approaches require setting the number of epochs before switching the sampling strategy as an explicit hyper-parameter . Here , we experiment with a softer version , progressively-balanced sampling , that progressively “ interpolates ” between instance-balanced and class-balanced sampling as learning progresses . Its sampling probability/weight p_j for class j is now a function of the epoch t , p_j^{PB} ( t ) = ( 1 − t/T ) p_j^{IB} + ( t/T ) p_j^{CB} , ( 2 ) where T is the total number of epochs . Figure 3 in the appendix depicts the sampling probabilities . Loss re-weighting strategies . Loss re-weighting functions for imbalanced data have been extensively studied , and it is beyond the scope of this paper to examine all related approaches . What is more , we found that some of the most recent approaches reporting high performance were hard to train and reproduce and in many cases require extensive , dataset-specific hyper-parameter tuning . In Section A of the Appendix we summarize the latest , best performing methods from this area . In Section 5 we show that , without bells and whistles , baseline methods equipped with a properly balanced classifier can perform as well as , if not better than , the latest loss re-weighting approaches .
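To make Eq. (1) and Eq. (2) concrete, here is a small NumPy sketch (illustrative names and toy class counts) that computes the class-sampling probabilities for different values of q and the progressively-balanced schedule:

import numpy as np

def class_probs(counts, q):
    # Eq. (1): p_j = n_j^q / sum_i n_i^q, with class counts n_j and q in [0, 1].
    counts = np.asarray(counts, dtype=float)
    weights = counts ** q
    return weights / weights.sum()

def progressive_probs(counts, t, T):
    # Eq. (2): interpolate from instance-balanced (q=1) to class-balanced (q=0).
    p_ib = class_probs(counts, q=1.0)
    p_cb = class_probs(counts, q=0.0)
    return (1 - t / T) * p_ib + (t / T) * p_cb

# Toy long-tailed counts (hypothetical): the head class has 1000 samples, the tail has 5.
counts = [1000, 200, 50, 5]
print(class_probs(counts, q=1.0))            # instance-balanced
print(class_probs(counts, q=0.5))            # square-root
print(class_probs(counts, q=0.0))            # class-balanced
print(progressive_probs(counts, t=45, T=90))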
The paper tries to handle the class imbalance problem by decoupling the learning process into representation learning and classification, in contrast to the current methods that jointly learn both of them. They comprehensively study several sampling methods for representation learning and different strategies for classification. They find that instance-balanced sampling gives the best representation, and simply adjusting the classifier will equip the model with long-tailed recognition ability. They achieve state of the art on long-tailed data (ImageNet-LT, Places-LT and iNaturalist).
SP:4fba557254310577845d291e0f216dc76403c9ac
Decoupling Representation and Classifier for Long-Tailed Recognition
1 INTRODUCTION . Visual recognition research has made rapid advances during the past years , driven primarily by the use of deep convolutional neural networks ( CNNs ) and large image datasets , most importantly the ImageNet Challenge ( Russakovsky et al. , 2015 ) . Such datasets are usually artificially balanced with respect to the number of instances for each object/class in the training set . Visual phenomena , however , follow a long-tailed distribution that many standard approaches fail to properly model , leading to a significant drop in accuracy . Motivated by this , a number of works have recently emerged that try to study long-tailed recognition , i.e. , recognition in a setting where the number of instances in each class highly varies and follows a long-tailed distribution . When learning with long-tailed data , a common challenge is that instance-rich ( or head ) classes dominate the training procedure . The learned classification model tends to perform better on these classes , while performance is significantly worse for instance-scarce ( or tail ) classes . To address this issue and to improve performance across all classes , one can re-sample the data or design specific loss functions that better facilitate learning with imbalanced data ( Chawla et al. , 2002 ; Cui et al. , 2019 ; Cao et al. , 2019 ) . Another direction is to enhance recognition performance of the tail classes by transferring knowledge from the head classes ( Wang et al. , 2017 ; 2018 ; Zhong et al. , 2019 ; Liu et al. , 2019 ) . Nevertheless , the common belief behind existing approaches is that designing proper sampling strategies , losses , or even more complex models , is useful for learning high-quality representations for long-tailed recognition . Most aforementioned approaches thus learn the classifiers used for recognition jointly with the data representations . However , such a joint learning scheme makes it unclear how the long-tailed recognition ability is achieved—is it from learning a better representation or by handling the data imbalance better via shifting classifier decision boundaries ? To answer this question , we take one step back and decouple long-tail recognition into representation learning and classification . For learning rep- resentations , the model is exposed to the training instances and trained through different sampling strategies or losses . For classification , upon the learned representations , the model recognizes the long-tailed classes through various classifiers . We evaluate the performance of various sampling and classifier training strategies for long-tailed recognition under both joint and decoupled learning schemes . Specifically , we first train models to learn representations with different sampling strategies , including the standard instance-based sampling , class-balanced sampling and a mixture of them . Next , we study three different basic approaches to obtain a classifier with balanced decision boundaries , on top of the learned representations . They are 1 ) re-training the parametric linear classifier in a class-balancing manner ( i.e. , re-sampling ) ; 2 ) non-parametric nearest class mean classifier , which classifies the data based on their closest class-specific mean representations from the training set ; and 3 ) normalizing the classifier weights , which adjusts the weight magnitude directly to be more balanced , adding a temperature to modulate the normalization procedure . 
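A rough PyTorch sketch of the second option above, the non-parametric nearest class mean classifier built on frozen representations, is given below; the use of cosine similarity and the variable names are illustrative assumptions.

import torch
import torch.nn.functional as F

def class_means(features, labels, num_classes):
    # features: [N, D] frozen representations; labels: [N] class ids.
    # Assumes every class has at least one training sample.
    means = torch.zeros(num_classes, features.size(1))
    for c in range(num_classes):
        means[c] = features[labels == c].mean(dim=0)
    return means

def ncm_predict(queries, means):
    # Assign each query to the class whose mean representation is most similar
    # (cosine similarity between L2-normalized vectors).
    sims = F.normalize(queries, dim=1) @ F.normalize(means, dim=1).t()
    return sims.argmax(dim=1)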
We conduct extensive experiments to compare the aforementioned instantiations of the decoupled learning scheme with the conventional scheme that jointly trains the classifier and the representations . We also compare to recent , carefully designed and more complex models , including approaches using memory ( e.g. , OLTR ( Liu et al. , 2019 ) ) as well as more sophisticated losses ( Cui et al. , 2019 ) . From our extensive study across three long-tail datasets , ImageNet-LT , Places-LT and iNaturalist , we make the following intriguing observations : • We find that decoupling representation learning and classification has surprising results that challenge common beliefs for long-tailed recognition : instance-balanced sampling learns the best and most generalizable representations . • It is advantageous in long-tailed recognition to re-adjust the decision boundaries specified by the jointly learned classifier during representation learning : Our experiments show that this can either be achieved by retraining the classifier with class-balanced sampling or by a simple , yet effective , classifier weight normalization which has only a single hyperparameter controlling the “ temperature ” and which does not require additional training . • By applying the decoupled learning scheme to standard networks ( e.g. , ResNeXt ) , we achieve significantly higher accuracy than well established state-of-the-art methods ( different sampling strategies , new loss designs and other complex modules ) on multiple longtailed recognition benchmark datasets , including ImageNet-LT , Places-LT , and iNaturalist . 2 RELATED WORK . Long-tailed recognition has attracted increasing attention due to the prevalence of imbalanced data in real-world applications ( Wang et al. , 2017 ; Zhou et al. , 2017 ; Mahajan et al. , 2018 ; Zhong et al. , 2019 ; Gupta et al. , 2019 ) . Recent studies have mainly pursued the following three directions : Data distribution re-balancing . Along this direction , researchers have proposed to re-sample the dataset to achieve a more balanced data distribution . These methods include over-sampling ( Chawla et al. , 2002 ; Han et al. , 2005 ) for the minority classes ( by adding copies of data ) , undersampling ( Drummond et al. , 2003 ) for the majority classes ( by removing data ) , and class-balanced sampling ( Shen et al. , 2016 ; Mahajan et al. , 2018 ) based on the number of samples for each class . Class-balanced Losses . Various methods are proposed to assign different losses to different training samples for each class . The loss can vary at class-level for matching a given data distribution and improving the generalization of tail classes ( Cui et al. , 2019 ; Khan et al. , 2017 ; Cao et al. , 2019 ; Khan et al. , 2019 ; Huang et al. , 2019 ) . A more fine-grained control of the loss can also be achieved at sample level , e.g . with Focal loss ( Lin et al. , 2017 ) , Meta-Weight-Net ( Shu et al. , 2019 ) , re-weighted training ( Ren et al. , 2018 ) , or based on Bayesian uncertainty ( Khan et al. , 2019 ) . Recently , Hayat et al . ( 2019 ) proposed to balance the classification regions of head and tail classes using an affinity measure to enforce cluster centers of classes to be uniformly spaced and equidistant . Transfer learning from head- to tail classes . Transfer-learning based methods address the issue of imbalanced training data by transferring features learned from head classes with abundant training instances to under-represented tail classes . 
Recent work includes transferring the intra-class variance ( Yin et al. , 2019 ) and transferring semantic deep features ( Liu et al. , 2019 ) . However , it is usually a non-trivial task to design specific modules ( e.g . external memory ) for feature transfer . A benchmark for low-shot recognition was proposed by Hariharan & Girshick ( 2017 ) and consists of a representation learning phase without access to the low-shot classes and a subsequent low-shot learning phase . In contrast , the setup for long-tail recognition assumes access to both head and tail classes and a more continuous decrease in class labels . Recently , Liu et al . ( 2019 ) and Cao et al . ( 2019 ) adopt re-balancing schedules that learn the representation and classifier jointly within a two-stage training scheme . OLTR ( Liu et al. , 2019 ) uses instance-balanced sampling to first learn representations that are fine-tuned in a second stage with class-balanced sampling together with a memory module . LDAM ( Cao et al. , 2019 ) introduces a label-distribution-aware margin loss that expands the decision boundaries of few-shot classes . In Section 5 we exhaustively compare to OLTR and LDAM , since they report state-of-the-art results for the ImageNet-LT , Places-LT and iNaturalist datasets . In our work , we argue for decoupling representation and classification . We demonstrate that in a long-tailed scenario , this separation allows straightforward approaches to achieve high recognition performance , without the need for designing sampling strategies , balance-aware losses or adding memory modules . 3 LEARNING REPRESENTATIONS FOR LONG-TAILED RECOGNITION . For long-tailed recognition , the training set follows a long-tailed distribution over the classes . As we have less data about infrequent classes during training , the models trained using imbalanced datasets tend to exhibit under-fitting on the few-shot classes . But in practice we are interested in obtaining a model capable of recognizing all classes well . Various re-sampling strategies ( Chawla et al. , 2002 ; Shen et al. , 2016 ; Cao et al. , 2019 ) , loss reweighting and margin regularization over few-shot classes have thus been proposed . However , it remains unclear how they achieve performance improvement , if any , for long-tailed recognition . Here we systematically investigate their effectiveness by disentangling representation learning from classifier learning , in order to identify what indeed matters for long-tailed recognition . Notation . We define the notation used throughout the paper . Let X = { x_i , y_i } , i ∈ { 1 , . . . , n } be a training set , where y_i is the label for data point x_i . Let n_j denote the number of training samples for class j , and let n = ∑_{j=1}^{C} n_j be the total number of training samples . Without loss of generality , we assume that the classes are sorted by cardinality in decreasing order , i.e. , if i < j , then n_i ≥ n_j . Additionally , since we are in a long-tail setting , n_1 ≫ n_C . Finally , we denote with f ( x ; θ ) = z the representation for x , where f ( x ; θ ) is implemented by a deep CNN model with parameter θ . The final class prediction ỹ is given by a classifier function g , such that ỹ = arg max g ( z ) . For the common case , g is a linear classifier , i.e. , g ( z ) = W^⊤ z + b , where W denotes the classifier weight matrix , and b is the bias . We present other instantiations of g in Section 4 . Sampling strategies .
In this section we present a number of sampling strategies that aim at rebalancing the data distribution for representation and classifier learning . For most sampling strategies presented below , the probability p_j of sampling a data point from class j is given by : p_j = n_j^q / ∑_{i=1}^{C} n_i^q , ( 1 ) where q ∈ [ 0 , 1 ] and C is the number of training classes . Different sampling strategies arise for different values of q and below we present strategies that correspond to q = 1 , q = 0 , and q = 1/2 . Instance-balanced sampling . This is the most common way of sampling data , where each training example has an equal probability of being selected . For instance-balanced sampling , the probability p_j^{IB} is given by Equation 1 with q = 1 , i.e. , a data point from class j will be sampled proportionally to the cardinality n_j of the class in the training set . Class-balanced sampling . For imbalanced datasets , instance-balanced sampling has been shown to be sub-optimal ( Huang et al. , 2016 ; Wang et al. , 2017 ) as the model under-fits for few-shot classes , leading to lower accuracy , especially for balanced test sets . Class-balanced sampling has been used to alleviate this discrepancy , as , in this case , each class has an equal probability of being selected . The probability p_j^{CB} is given by Eq . ( 1 ) with q = 0 , i.e. , p_j^{CB} = 1/C . One can see this as a two-stage sampling strategy , where first a class is selected uniformly from the set of classes , and then an instance from that class is subsequently uniformly sampled . Square-root sampling . A number of variants of the previous sampling strategies have been explored . A commonly used variant is square-root sampling ( Mikolov et al. , 2013 ; Mahajan et al. , 2018 ) , where q is set to 1/2 in Eq . ( 1 ) above . Progressively-balanced sampling . Recent approaches ( Cui et al. , 2018 ; Cao et al. , 2019 ) utilized mixed ways of sampling , i.e. , combinations of the sampling strategies presented above . In practice this involves first using instance-balanced sampling for a number of epochs , and then class-balanced sampling for the last epochs . These mixed sampling approaches require setting the number of epochs before switching the sampling strategy as an explicit hyper-parameter . Here , we experiment with a softer version , progressively-balanced sampling , that progressively “ interpolates ” between instance-balanced and class-balanced sampling as learning progresses . Its sampling probability/weight p_j for class j is now a function of the epoch t , p_j^{PB} ( t ) = ( 1 − t/T ) p_j^{IB} + ( t/T ) p_j^{CB} , ( 2 ) where T is the total number of epochs . Figure 3 in the appendix depicts the sampling probabilities . Loss re-weighting strategies . Loss re-weighting functions for imbalanced data have been extensively studied , and it is beyond the scope of this paper to examine all related approaches . What is more , we found that some of the most recent approaches reporting high performance were hard to train and reproduce and in many cases require extensive , dataset-specific hyper-parameter tuning . In Section A of the Appendix we summarize the latest , best performing methods from this area . In Section 5 we show that , without bells and whistles , baseline methods equipped with a properly balanced classifier can perform as well as , if not better than , the latest loss re-weighting approaches .
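One practical way to realize these sampling schemes, sketched below with illustrative names, is to give every training sample a weight proportional to p_j / n_j for its class j and draw with replacement; q = 0 then yields class-balanced and q = 1 instance-balanced sampling. This is an assumed PyTorch implementation route, not the paper's released code.

import numpy as np
import torch
from torch.utils.data import WeightedRandomSampler

def make_sampler(labels, q=0.0):
    # labels: per-sample class ids covering 0..C-1. q=1 -> instance-balanced,
    # q=0 -> class-balanced, q=0.5 -> square-root sampling, following Eq. (1).
    labels = np.asarray(labels)
    counts = np.bincount(labels).astype(float)
    class_p = counts ** q / (counts ** q).sum()
    sample_w = class_p[labels] / counts[labels]  # spread each class's mass over its samples
    return WeightedRandomSampler(torch.as_tensor(sample_w, dtype=torch.double),
                                 num_samples=len(labels), replacement=True)

# Usage sketch: loader = DataLoader(dataset, batch_size=128, sampler=make_sampler(labels, q=0.0))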
The paper considers the problem of long-tailed image classification, where the class frequencies during (supervised) training of an image classifier are heavily skewed, so that the classifier underfits on under-represented classes. Different known and novel sampling schemes during training as well as post-training procedures to restore the class balance after training are studied. The overall best strategy turns out to be naive training on the skewed training set, and post-hoc rebalancing only of the classification stage. The paper presents various ablation studies and comparisons with related methods on the ImageNet-LT, Places-LT, and iNaturalist data sets, achieving state-of-the-art performance.
SP:4fba557254310577845d291e0f216dc76403c9ac
GENESIS: Generative Scene Inference and Sampling with Object-Centric Latent Representations
1 INTRODUCTION . Task execution in robotics and reinforcement learning ( RL ) requires accurate perception of and reasoning about discrete elements in an environment . While supervised methods can be used to identify pertinent objects , it is intractable to collect labels for every scenario and task . Discovering structure in data—such as objects—and learning to represent data in a compact fashion without supervision are long-standing problems in machine learning ( Comon , 1992 ; Tishby et al. , 2000 ) , often formulated as generative latent-variable modelling ( e.g . Kingma & Welling , 2014 ; Rezende et al. , 2014 ) . Such methods have been leveraged to increase sample efficiency in RL ( Gregor et al. , 2019 ) and other supervised tasks ( van Steenkiste et al. , 2019 ) . They also offer the ability to imagine environments for training ( Ha & Schmidhuber , 2018 ) . Given the compositional nature of visual scenes , separating latent representations into object-centric ones can facilitate fast and robust learning ( Watters et al. , 2019a ) , while also being amenable to relational reasoning ( Santoro et al. , 2017 ) . Interestingly , however , state-of-the-art methods for generating realistic images do not account for this discrete structure ( Brock et al. , 2018 ; Parmar et al. , 2018 ) . As in the approach proposed in this work , human visual perception is not passive . Rather it involves a creative interplay between external stimulation and an active , internal generative model of the world ( Rao & Ballard , 1999 ; Friston , 2005 ) . That this is necessary can be seen from the physiology of the eye , where the small portion of the visual field that can produce sharp images ( fovea centralis ) motivates the need for rapid eye movements ( saccades ) to build up a crisp and holistic percept of a scene ( Wandell , 1995 ) . In other words , what we perceive is largely a mental simulation of the external world . Meanwhile , work in computational neuroscience tells us that visual features ( see , e.g. , Hubel & Wiesel , 1968 ) can be inferred from the statistics of static images using unsupervised learning ( Olshausen & Field , 1996 ) . Experimental investigations further show that specific brain areas ( e.g . LO ) appear specialised for objects , for example responding more strongly to common objects than to scenes or textures , while responding only weakly to movement ( cf . MT ) ( e.g. , GrillSpector & Malach , 2004 ) . ∗Corresponding author : martin @ robots.ox.ac.uk In this work , we are interested in probabilistic generative models that can explain visual scenes compositionally via several latent variables . This corresponds to fitting a probability distribution pθ ( x ) with parameters θ to the data . The compositional structure is captured by K latent variables so that pθ ( x ) = ∫ pθ ( x | z1 : K ) pθ ( z1 : K ) dz1 : K . Models from this family can be optimised using the variational auto-encoder ( VAE ) framework ( Kingma & Welling , 2014 ; Rezende et al. , 2014 ) , by maximising a variational lower bound on the model evidence ( Jordan et al. , 1999 ) . Burgess et al . ( 2019 ) and Greff et al . ( 2019 ) recently proposed two such models , MONet and IODINE , to decompose visual scenes into meaningful objects . Both works leverage an analysis-by-synthesis approach through the machinery of VAEs ( Kingma & Welling , 2014 ; Rezende et al. , 2014 ) to train these models without labelled supervision , e.g . in the form of ground truth segmentation masks . 
However , the models have a factorised prior that treats scene components as independent . Thus , neither provides an object-centric generation mechanism that accounts for relationships between constituent parts of a scene , e.g . two physical objects cannot occupy the same location , prohibiting the component-wise generation of novel scenes and restricting the utility of these approaches . Moreover , MONet embeds a convolutional neural network ( CNN ) inside a recurrent neural network ( RNN ) that is unrolled for each scene component , which does not scale well to more complex scenes . Similarly , IODINE utilises a CNN within an expensive , gradient-based iterative refinement mechanism . Therefore , we introduce GENErative Scene Inference and Sampling ( GENESIS ) , which is , to the best of our knowledge , the first object-centric generative model of rendered 3D scenes capable of both decomposing and generating scenes . Compared to previous work , this renders GENESIS significantly more suitable for a wide range of applications in robotics and reinforcement learning . GENESIS achieves this by modelling relationships between scene components with an expressive , autoregressive prior that is learned alongside a sequential , amortised inference network . Importantly , sequential inference is performed in low-dimensional latent space , allowing all convolutional encoders and decoders to be run in parallel to fully exploit modern graphics processing hardware . We conduct experiments on three canonical and publicly available datasets : coloured Multi-dSprites ( Burgess et al. , 2019 ) , the GQN dataset ( Eslami et al. , 2018 ) , and ShapeStacks ( Groth et al. , 2018 ) . The latter two are simulated 3D environments which serve as testing grounds for navigation and object manipulation tasks , respectively . We show both qualitatively and quantitatively that in contrast to prior art , GENESIS is able to generate coherent scenes while also performing well on scene decomposition . Furthermore , we use the scene annotations available for ShapeStacks to show the benefit of utilising general purpose , object-centric latent representations from GENESIS for tasks such as predicting whether a block tower is stable or not . Code and models are available at https://github.com/applied-ai-lab/genesis . 2 RELATED WORK . Structured Models Several methods leverage structured latent variables to discover objects in images without direct supervision . CST-VAE ( Huang & Murphy , 2015 ) , AIR ( Eslami et al. , 2016 ) , SQAIR ( Kosiorek et al. , 2018 ) , and SPAIR ( Crawford & Pineau , 2019 ) use spatial attention to partition scenes into objects . TAGGER ( Greff et al. , 2016 ) , NEM ( Greff et al. , 2017 ) , and R-NEM ( van Steenkiste et al. , 2018a ) perform unsupervised segmentation by modelling images as spatial mixture models . SCAE ( Kosiorek et al. , 2019 ) discovers geometric relationships between objects and their parts by using an affine-aware decoder . Yet , these approaches have not been shown to work on more complex images , for example visual scenes with 3D spatial structure , occlusion , perspective distortion , and multiple foreground and background components as considered in this work . Moreover , none of them demonstrate the ability to generate novel scenes with relational structure . While Xu et al . ( 2018 ) present an extension of Eslami et al .
( 2016 ) to generate images , their method only works on binary images with a uniform black background and assumes that object bounding boxes do not overlap . In contrast , we train GENESIS on rendered 3D scenes from Eslami et al . ( 2018 ) and Groth et al . ( 2018 ) which feature complex backgrounds and considerable occlusion to perform both decomposition and generation . Lastly , Xu et al . ( 2019 ) use ground truth pixel-wise flow fields as a cue for segmenting objects or object parts . Similarly , GENESIS could be adapted to also leverage temporal information , which is a promising avenue for future research . We use the terms “ object ” and “ scene component ” synonymously in this work . MONet & IODINE While this work is most directly related to MONet ( Burgess et al. , 2019 ) and IODINE ( Greff et al. , 2019 ) , it sets itself apart by introducing a generative model that captures relations between scene components with an autoregressive prior , enabling the unconditional generation of coherent , novel scenes . Moreover , MONet relies on a deterministic attention mechanism rather than utilising a proper probabilistic inference procedure . This implies that the training objective is not a valid lower bound on the marginal likelihood and that the model cannot perform density estimation without modification . Furthermore , this attention mechanism embeds a CNN in an RNN , posing an issue in terms of scalability . These two considerations do not apply to IODINE , but IODINE employs a gradient-based , iterative refinement mechanism which is expensive both in terms of computation and memory , limiting its practicality and utility . Architecturally , GENESIS is more similar to MONet and does not require expensive iterative refinement as IODINE does . Unlike MONet , though , the convolutional encoders and decoders in GENESIS can be run in parallel , rendering the model computationally more scalable to inputs with a larger number of scene components . Adversarial Methods A few recent works have proposed to use an adversary for scene segmentation and generation . Chen et al . ( 2019 ) and Bielski & Favaro ( 2019 ) segment a single foreground object per image and Arandjelović & Zisserman ( 2019 ) segment several synthetic objects superimposed on natural images . Azadi et al . ( 2019 ) combine two objects or an object and a background scene in a sensible fashion and van Steenkiste et al . ( 2018b ) can generate scenes with a potentially arbitrary number of components . In comparison , GENESIS performs both inference and generation , does not exhibit the instabilities of adversarial training , and offers a probabilistic formulation which captures uncertainty , e.g . during scene decomposition . Furthermore , the complexity of GENESIS increases with O ( K ) , where K is the number of components , as opposed to the O ( K^2 ) complexity of the relational stage in van Steenkiste et al . ( 2018b ) . Inverse Graphics A range of works formulate scene understanding as an inverse graphics problem . These well-engineered methods , however , rely on scene annotations for training and lack probabilistic formulations . For example , Wu et al . ( 2017b ) leverage a graphics renderer to decode a structured scene description which is inferred by a neural network . Romaszko et al . ( 2017 ) pursue a similar approach but instead make use of a differentiable graphics renderer . Wu et al . ( 2017a ) further employ different physics engines to predict the movement of billiard balls and block towers .
3 GENESIS : GENERATIVE SCENE INFERENCE AND SAMPLING . In this section , we first describe the generative model of GENESIS and a simplified variant called GENESIS-S . This is followed by the associated inference procedures and two possible learning objectives . GENESIS is illustrated in Figure 1 and Figure 2 shows the graphical model in comparison to alternative methods . An illustration of GENESIS-S is included in Appendix B.1 , Figure 5 . Generative model Let x ∈ R^{H×W×C} be an image . We formulate the problem of image generation as a spatial Gaussian mixture model ( GMM ) . That is , every Gaussian component k = 1 , . . . , K represents an image-sized scene component x_k ∈ R^{H×W×C} . K ∈ N^+ is the maximum number of scene components . The corresponding mixing probabilities π_k ∈ [ 0 , 1 ]^{H×W} indicate whether the component is present at a location in the image . The mixing probabilities are normalised across scene components , i.e . ∀ i , j : ∑_k π_{i , j , k} = 1 , and can be regarded as spatial attention masks . Since there are strong spatial dependencies between components , we formulate an autoregressive prior distribution over mask variables z^m_k ∈ R^{D_m} which encode the mixing probabilities π_k , as p_θ ( z^m_{1:K} ) = ∏_{k=1}^{K} p_θ ( z^m_k | z^m_{1:k−1} ) = ∏_{k=1}^{K} p_θ ( z^m_k | u_k ) with u_k = R_θ ( z^m_{k−1} , u_{k−1} ) . ( 1 ) The dependence on previous latents z^m_{1:k−1} is implemented via an RNN R_θ with hidden state u_k . Next , we assume that the scene components x_k are conditionally independent given their spatial allocation in the scene . The corresponding conditional distribution over component variables z^c_k ∈ R^{D_c} which encode the scene components x_k factorises as follows , p_θ ( z^c_{1:K} | z^m_{1:K} ) = ∏_{k=1}^{K} p_θ ( z^c_k | z^m_k ) . ( 2 ) Now , the image likelihood is given by a mixture model , p ( x | z^m_{1:K} , z^c_{1:K} ) = ∑_{k=1}^{K} π_k p_θ ( x_k | z^c_k ) , ( 3 ) where the mixing probabilities π_k = π_θ ( z^m_{1:k} ) are created via a stick-breaking process ( SBP ) adapted from Burgess et al . ( 2019 ) as follows , slightly overloading the π notation , π_1 = π_θ ( z^m_1 ) , π_k = ( 1 − ∑_{j=1}^{k−1} π_j ) π_θ ( z^m_k ) , π_K = 1 − ∑_{j=1}^{K−1} π_j . ( 4 ) Note that this step is not necessary for our model and instead one could use a softmax to normalise masks as in Greff et al . ( 2019 ) . Finally , omitting subscripts , the full generative model can be written as p_θ ( x ) = ∫∫ p_θ ( x | z^c , z^m ) p_θ ( z^c | z^m ) p_θ ( z^m ) dz^m dz^c , ( 5 ) where we assume that all conditional distributions are Gaussian . The Gaussian components of the image likelihood have a fixed scalar standard deviation σ_x . We refer to this model as GENESIS . To investigate whether separate latents for masks and component appearances are necessary for decomposition , we consider a simplified model , GENESIS-S , with a single latent variable per component , p_θ ( z_{1:K} ) = ∏_{k=1}^{K} p_θ ( z_k | z_{1:k−1} ) . ( 6 ) In this case , z_k takes the role of z^c_k in Equation ( 3 ) and of z^m_k in Equation ( 4 ) , while Equation ( 2 ) is no longer necessary . Approximate posterior We amortise inference by using an approximate posterior distribution with parameters φ and a structure similar to the generative model . The full approximate posterior reads as follows , q_φ ( z^c_{1:K} , z^m_{1:K} | x ) = q_φ ( z^m_{1:K} | x ) q_φ ( z^c_{1:K} | x , z^m_{1:K} ) , where q_φ ( z^m_{1:K} | x ) = ∏_{k=1}^{K} q_φ ( z^m_k | x , z^m_{1:k−1} ) , and q_φ ( z^c_{1:K} | x , z^m_{1:K} ) = ∏_{k=1}^{K} q_φ ( z^c_k | x , z^m_{1:k} ) , ( 7 ) with the dependence on z^m_{1:k−1} realised by an RNN R_φ .
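A small PyTorch sketch of the stick-breaking construction in Eq. (4) above is given below, assuming each component decoder outputs a per-pixel value π_θ(z^m_k) in [0, 1]; the tensor shapes and names are illustrative only.

import torch

def stick_breaking_masks(attn):
    # attn: [K, H, W] per-component outputs pi_theta(z^m_k) in [0, 1].
    # Returns mixing probabilities pi_k that sum to 1 over K at every pixel (Eq. 4).
    K = attn.shape[0]
    remaining = torch.ones_like(attn[0])   # stick left to allocate at each pixel
    masks = []
    for k in range(K - 1):
        pi_k = remaining * attn[k]
        masks.append(pi_k)
        remaining = remaining - pi_k
    masks.append(remaining)                # last component takes whatever is left
    return torch.stack(masks, dim=0)

pi = stick_breaking_masks(torch.rand(4, 8, 8))
print(pi.sum(dim=0).allclose(torch.ones(8, 8)))  # True: masks are normalised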
The RNN could , in principle , be shared with the prior , but we have not investigated this option . All conditional distributions are Gaussian . For GENESIS-S , the approximate posterior takes the form q_φ ( z_{1:K} | x ) = ∏_{k=1}^{K} q_φ ( z_k | x , z_{1:k−1} ) . Learning GENESIS can be trained by maximising the evidence lower bound ( ELBO ) on the log-marginal likelihood log p_θ ( x ) , given by L_ELBO ( x ) = E_{q_φ ( z^c , z^m | x )} [ log ( p_θ ( x | z^c , z^m ) p_θ ( z^c | z^m ) p_θ ( z^m ) / ( q_φ ( z^c | z^m , x ) q_φ ( z^m | x ) ) ) ] ( 8 ) = E_{q_φ ( z^c , z^m | x )} [ log p_θ ( x | z^c , z^m ) ] − KL ( q_φ ( z^c , z^m | x ) || p_θ ( z^c , z^m ) ) . ( 9 ) However , this often leads to a strong emphasis on the likelihood term , while allowing the marginal approximate posterior q_φ ( z ) = E_{p_data ( x )} [ q_φ ( z | x ) ] to drift away from the prior distribution , hence increasing the KL-divergence . This also decreases the quality of samples drawn from the model . To prevent this behaviour , we use the Generalised ELBO with Constrained Optimisation ( GECO ) objective from Rezende & Viola ( 2018 ) instead , which changes the learning problem to minimising the KL-divergence subject to a reconstruction constraint . Let C ∈ R be the minimum allowed reconstruction log-likelihood ; GECO then uses Lagrange multipliers to solve the following problem , θ* , φ* = argmin_{θ , φ} KL ( q_φ ( z^c , z^m | x ) || p_θ ( z^c , z^m ) ) such that E_{q_φ ( z^c , z^m | x )} [ log p_θ ( x | z^c , z^m ) ] ≥ C . ( 10 )
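The GECO constraint in Eq. (10) is typically enforced with a Lagrange multiplier that is adapted during training. The PyTorch sketch below is a rough illustration under assumed details (the multiplier parameterisation, the moving-average update and the constants are not taken from the paper's training configuration): the effective loss is KL + lambda * (C - reconstruction log-likelihood), and lambda grows whenever the reconstruction constraint is violated.

import torch

log_lambda = torch.zeros(())     # lambda = exp(log_lambda) stays positive
C = -23000.0                     # assumed per-image reconstruction target
ema_violation = None
alpha, lr_lambda = 0.99, 1e-2

def geco_loss(kl, recon_loglik):
    # kl, recon_loglik: scalar tensors for the current batch.
    global log_lambda, ema_violation
    violation = (C - recon_loglik).detach()          # > 0 means the constraint is not met
    ema_violation = violation if ema_violation is None else \
        alpha * ema_violation + (1 - alpha) * violation
    log_lambda = log_lambda + lr_lambda * ema_violation  # raise lambda if violated
    return kl + log_lambda.exp() * (C - recon_loglik)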
The paper proposes a generative model for images. There is a probability mask per pixel per component (which yields the mixing probabilities), and a set of per-component latents that yield the component appearances. The system is tested on scenes from the GQN dataset, ShapeStacks (towers of blocks), and the Multi-dSprites dataset. The system outperforms MONet, although there are a few lingering questions.
SP:00fed729e27d8c9d2a3d96fdb7e54c3e5cc0a94d
GENESIS: Generative Scene Inference and Sampling with Object-Centric Latent Representations
1 INTRODUCTION . Task execution in robotics and reinforcement learning ( RL ) requires accurate perception of and reasoning about discrete elements in an environment . While supervised methods can be used to identify pertinent objects , it is intractable to collect labels for every scenario and task . Discovering structure in data—such as objects—and learning to represent data in a compact fashion without supervision are long-standing problems in machine learning ( Comon , 1992 ; Tishby et al. , 2000 ) , often formulated as generative latent-variable modelling ( e.g . Kingma & Welling , 2014 ; Rezende et al. , 2014 ) . Such methods have been leveraged to increase sample efficiency in RL ( Gregor et al. , 2019 ) and other supervised tasks ( van Steenkiste et al. , 2019 ) . They also offer the ability to imagine environments for training ( Ha & Schmidhuber , 2018 ) . Given the compositional nature of visual scenes , separating latent representations into object-centric ones can facilitate fast and robust learning ( Watters et al. , 2019a ) , while also being amenable to relational reasoning ( Santoro et al. , 2017 ) . Interestingly , however , state-of-the-art methods for generating realistic images do not account for this discrete structure ( Brock et al. , 2018 ; Parmar et al. , 2018 ) . As in the approach proposed in this work , human visual perception is not passive . Rather it involves a creative interplay between external stimulation and an active , internal generative model of the world ( Rao & Ballard , 1999 ; Friston , 2005 ) . That this is necessary can be seen from the physiology of the eye , where the small portion of the visual field that can produce sharp images ( fovea centralis ) motivates the need for rapid eye movements ( saccades ) to build up a crisp and holistic percept of a scene ( Wandell , 1995 ) . In other words , what we perceive is largely a mental simulation of the external world . Meanwhile , work in computational neuroscience tells us that visual features ( see , e.g. , Hubel & Wiesel , 1968 ) can be inferred from the statistics of static images using unsupervised learning ( Olshausen & Field , 1996 ) . Experimental investigations further show that specific brain areas ( e.g . LO ) appear specialised for objects , for example responding more strongly to common objects than to scenes or textures , while responding only weakly to movement ( cf . MT ) ( e.g. , GrillSpector & Malach , 2004 ) . ∗Corresponding author : martin @ robots.ox.ac.uk In this work , we are interested in probabilistic generative models that can explain visual scenes compositionally via several latent variables . This corresponds to fitting a probability distribution pθ ( x ) with parameters θ to the data . The compositional structure is captured by K latent variables so that pθ ( x ) = ∫ pθ ( x | z1 : K ) pθ ( z1 : K ) dz1 : K . Models from this family can be optimised using the variational auto-encoder ( VAE ) framework ( Kingma & Welling , 2014 ; Rezende et al. , 2014 ) , by maximising a variational lower bound on the model evidence ( Jordan et al. , 1999 ) . Burgess et al . ( 2019 ) and Greff et al . ( 2019 ) recently proposed two such models , MONet and IODINE , to decompose visual scenes into meaningful objects . Both works leverage an analysis-by-synthesis approach through the machinery of VAEs ( Kingma & Welling , 2014 ; Rezende et al. , 2014 ) to train these models without labelled supervision , e.g . in the form of ground truth segmentation masks . 
However , the models have a factorised prior that treats scene components as independent . Thus , neither provides an object-centric generation mechanism that accounts for relationships between constituent parts of a scene , e.g . two physical objects can not occupy the same location , prohibiting the component-wise generation of novel scenes and restricting the utility of these approaches . Moreover , MONet embeds a convolutional neural network ( CNN ) inside of an recurrent neural network ( RNN ) that is unrolled for each scene component , which does not scale well to more complex scenes . Similarly , IODINE utilises a CNN within an expensive , gradient-based iterative refinement mechanism . Therefore , we introduce GENErative Scene Inference and Sampling ( GENESIS ) which is , to the best of our knowledge , the first object-centric generative model of rendered 3D scenes capable of both decomposing and generating scenes1 . Compared to previous work , this renders GENESIS significantly more suitable for a wide range of applications in robotics and reinforcement learning . GENESIS achieves this by modelling relationships between scene components with an expressive , autoregressive prior that is learned alongside a sequential , amortised inference network . Importantly , sequential inference is performed in low-dimensional latent space , allowing all convolutional encoders and decoders to be run in parallel to fully exploit modern graphics processing hardware . We conduct experiments on three canonical and publicly available datasets : coloured Multi-dSprites ( Burgess et al. , 2019 ) , the GQN dataset ( Eslami et al. , 2018 ) , and ShapeStacks ( Groth et al. , 2018 ) . The latter two are simulated 3D environments which serve as testing grounds for navigation and object manipulation tasks , respectively . We show both qualitatively and quantitatively that in contrast to prior art , GENESIS is able to generate coherent scenes while also performing well on scene decomposition . Furthermore , we use the scene annotations available for ShapeStacks to show the benefit of utilising general purpose , object-centric latent representations from GENESIS for tasks such as predicting whether a block tower is stable or not . Code and models are available at https : //github.com/applied-ai-lab/genesis . 2 RELATED WORK . Structured Models Several methods leverage structured latent variables to discover objects in images without direct supervision . CST-VAE ( Huang & Murphy , 2015 ) , AIR ( Eslami et al. , 2016 ) , SQAIR ( Kosiorek et al. , 2018 ) , and SPAIR ( Crawford & Pineau , 2019 ) use spatial attention to partition scenes into objects . TAGGER ( Greff et al. , 2016 ) , NEM ( Greff et al. , 2017 ) , and R-NEM ( van Steenkiste et al. , 2018a ) perform unsupervised segmentation by modelling images as spatial mixture models . SCAE ( Kosiorek et al. , 2019 ) discovers geometric relationships between objects and their parts by using an affine-aware decoder . Yet , these approaches have not been shown to work on more complex images , for example visual scenes with 3D spatial structure , occlusion , perspective distortion , and multiple foreground and background components as considered in this work . Moreover , none of them demonstrate the ability to generate novel scenes with relational structure . While Xu et al . ( 2018 ) present an extension of Eslami et al . 
( 2016 ) to generate images , their method only works on binary images with a uniform black background and assumes that object bounding boxes do not overlap . In contrast , we train GENESIS on rendered 3D scenes from Eslami et al . ( 2018 ) and Groth et al . ( 2018 ) which feature complex backgrounds and considerable occlusion to perform both decomposition and generation . Lastly , Xu et al . ( 2019 ) use ground truth pixel-wise flow fields as a cue for segmenting objects or object parts . Similarly , GENESIS could be adapted to also leverage temporal information which is a promising avenue for future research . 1We use the terms “ object ” and “ scene component ” synonymously in this work . MONet & IODINE While this work is most directly related to MONet ( Burgess et al. , 2019 ) and IODINE ( Greff et al. , 2019 ) , it sets itself apart by introducing a generative model that captures relations between scene components with an autoregressive prior , enabling the unconditional generation of coherent , novel scenes . Moreover , MONet relies on a deterministic attention mechanism rather than utilising a proper probabilistic inference procedure . This implies that the training objective is not a valid lower bound on the marginal likelihood and that the model can not perform density estimation without modification . Furthermore , this attention mechanism embeds a CNN in a RNN , posing an issue in terms of scalability . These two considerations do not apply to IODINE , but IODINE employs a gradient-based , iterative refinement mechanism which expensive both in terms of computation and memory , limiting its practicality and utility . Architecturally , GENESIS is more similar to MONet and does not require expensive iterative refinement as IODINE . Unlike MONet , though , the convolutional encoders and decoders in GENESIS can be run in parallel , rendering the model computationally more scalable to inputs with a larger number of scene components . Adversarial Methods A few recent works have proposed to use an adversary for scene segmentation and generation . Chen et al . ( 2019 ) and Bielski & Favaro ( 2019 ) segment a single foreground object per image and Arandjelović & Zisserman ( 2019 ) segment several synthetic objects superimposed on natural images . Azadi et al . ( 2019 ) combine two objects or an object and a background scene in a sensible fashion and van Steenkiste et al . ( 2018b ) can generate scenes with a potentially arbitrary number of components . In comparison , GENESIS performs both inference and generation , does not exhibit the instabilities of adversarial training , and offers a probabilistic formulation which captures uncertainty , e.g . during scene decomposition . Furthermore , the complexity of GENESIS increases with O ( K ) , where K is the number of components , as opposed to the O ( K2 ) complexity of the relational stage in van Steenkiste et al . ( 2018b ) . Inverse Graphics A range of works formulate scene understanding as an inverse graphics problem . These well-engineered methods , however , rely on scene annotations for training and lack probabilistic formulations . For example , Wu et al . ( 2017b ) leverage a graphics renderer to decode a structured scene description which is inferred by a neural network . Romaszko et al . ( 2017 ) pursue a similar approach but instead make use of a differentiable graphics render . Wu et al . ( 2017a ) further employ different physics engines to predict the movement of billiard balls and block towers . 
3 GENESIS : GENERATIVE SCENE INFERENCE AND SAMPLING . In this section , we first describe the generative model of GENESIS and a simplified variant called GENESIS-S . This is followed by the associated inference procedures and two possible learning objectives . GENESIS is illustrated in Figure 1 and Figure 2 shows the graphical model in comparison to alternative methods . An illustration of GENESIS-S is included Appendix B.1 , Figure 5 . Generative model Let x ∈ RH×W×C be an image . We formulate the problem of image generation as a spatial Gaussian mixture model ( GMM ) . That is , every Gaussian component k = 1 , . . . , K represents an image-sized scene component xk ∈ RH×W×C . K ∈ N+ is the maximum number of scene components . The corresponding mixing probabilities πk ∈ [ 0 , 1 ] H×W indicate whether the component is present at a location in the image . The mixing probabilities are normalised across scene components , i.e . ∀i , j ∑ k πi , j , k = 1 , and can be regarded as spatial attention masks . Since there are strong spatial dependencies between components , we formulate an autoregressive prior distribution over mask variables zmk ∈ RDm which encode the mixing probabilities πk , as pθ ( z m 1 : K ) = K∏ k=1 pθ ( zmk | zm1 : k−1 ) = K∏ k=1 pθ ( z m k | uk ) |uk=Rθ ( zmk−1 , uk−1 ) . ( 1 ) The dependence on previous latents zm1 : k−1 is implemented via an RNN Rθ with hidden state uk . Next , we assume that the scene components xk are conditionally independent given their spatial allocation in the scene . The corresponding conditional distribution over component variables zck ∈ RDc which encode the scene components xk factorises as follows , pθ ( z c 1 : K | zm1 : K ) = K∏ k=1 pθ ( z c k | zmk ) . ( 2 ) Now , the image likelihood is given by a mixture model , p ( x | zm1 : K , zc1 : K ) = K∑ k=1 πk pθ ( xk | zck ) , ( 3 ) where the mixing probabilities πk = πθ ( zm1 : k ) are created via a stick-breaking process ( SBP ) adapted from Burgess et al . ( 2019 ) as follows , slightly overloading the π notation , π1 = πθ ( z m 1 ) , πk = 1− k−1∑ j=1 πj πθ ( zmk ) , πK = 1− K−1∑ j=1 πj . ( 4 ) Note that this step is not necessary for our model and instead one could use a softmax to normalise masks as in Greff et al . ( 2019 ) . Finally , omitting subscripts , the full generative model can be written as pθ ( x ) = ∫∫ pθ ( x | zc , zm ) pθ ( zc | zm ) pθ ( zm ) dzm dzc , ( 5 ) where we assume that all conditional distributions are Gaussian . The Gaussian components of the image likelihood have a fixed scalar standard deviation σ2x . We refer to this model as GENESIS . To investigate whether separate latents for masks and component appearances are necessary for decomposition , we consider a simplified model , GENESIS-S , with a single latent variable per component , pθ ( z1 : K ) = K∏ k=1 pθ ( zk | z1 : k−1 ) . ( 6 ) In this case , zk takes the role of zck in Equation ( 3 ) and of z m k in Equation ( 4 ) , while Equation ( 2 ) is no longer necessary . Approximate posterior We amortise inference by using an approximate posterior distribution with parameters φ and a structure similar to the generative model . The full approximate posterior reads as follows , qφ ( z c 1 : K , z m 1 : K | x ) = qφ ( zm1 : K | x ) qφ ( zc1 : K | x , zm1 : K ) , where qφ ( z m 1 : K | x ) = K∏ k=1 qφ ( zmk | x , zm1 : k−1 ) , and qφ ( zc1 : K | x , zm1 : K ) = K∏ k=1 qφ ( z c k | x , zm1 : k ) , ( 7 ) with the dependence on zm1 : k−1 realised by an RNN Rφ . 
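As a concrete illustration of the autoregressive prior in Equation (1), the sketch below rolls an RNN over the K components, emitting the parameters of p(z^m_k | u_k) at each step and sampling the next mask latent. The use of an LSTM cell, the layer sizes, and the zero initial state are assumptions made for the example, not architectural details taken from the paper.

```python
import torch
import torch.nn as nn

class AutoregressiveMaskPrior(nn.Module):
    def __init__(self, latent_dim=64, hidden_dim=256):
        super().__init__()
        self.rnn = nn.LSTMCell(latent_dim, hidden_dim)
        self.to_params = nn.Linear(hidden_dim, 2 * latent_dim)  # mean and log-sigma
        self.latent_dim = latent_dim
        self.hidden_dim = hidden_dim

    def sample(self, batch_size, K):
        z_prev = torch.zeros(batch_size, self.latent_dim)
        h = torch.zeros(batch_size, self.hidden_dim)
        c = torch.zeros(batch_size, self.hidden_dim)
        latents = []
        for _ in range(K):
            h, c = self.rnn(z_prev, (h, c))                # u_k = R_theta(z^m_{k-1}, u_{k-1})
            mean, log_sigma = self.to_params(h).chunk(2, dim=-1)
            z_k = mean + log_sigma.exp() * torch.randn_like(mean)  # z^m_k ~ p(z^m_k | u_k)
            latents.append(z_k)
            z_prev = z_k
        return torch.stack(latents, dim=1)                 # (batch, K, latent_dim)
```

Because the recurrence runs over latent vectors rather than over images, each step is cheap and the image decoders for all K components can be applied in parallel afterwards.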
The RNN could, in principle, be shared with the prior, but we have not investigated this option. All conditional distributions are Gaussian. For GENESIS-S, the approximate posterior takes the form $q_\phi(\mathbf{z}_{1:K} \mid \mathbf{x}) = \prod_{k=1}^{K} q_\phi(\mathbf{z}_k \mid \mathbf{x}, \mathbf{z}_{1:k-1})$.

Learning GENESIS can be trained by maximising the evidence lower bound (ELBO) on the log-marginal likelihood $\log p_\theta(\mathbf{x})$, given by
$$\mathcal{L}_{\text{ELBO}}(\mathbf{x}) = \mathbb{E}_{q_\phi(\mathbf{z}^c, \mathbf{z}^m \mid \mathbf{x})}\left[ \log \frac{p_\theta(\mathbf{x} \mid \mathbf{z}^c, \mathbf{z}^m)\, p_\theta(\mathbf{z}^c \mid \mathbf{z}^m)\, p_\theta(\mathbf{z}^m)}{q_\phi(\mathbf{z}^c \mid \mathbf{z}^m, \mathbf{x})\, q_\phi(\mathbf{z}^m \mid \mathbf{x})} \right] \quad (8)$$
$$= \mathbb{E}_{q_\phi(\mathbf{z}^c, \mathbf{z}^m \mid \mathbf{x})}\left[ \log p_\theta(\mathbf{x} \mid \mathbf{z}^c, \mathbf{z}^m) \right] - \mathrm{KL}\!\left( q_\phi(\mathbf{z}^c, \mathbf{z}^m \mid \mathbf{x}) \,\|\, p_\theta(\mathbf{z}^c, \mathbf{z}^m) \right). \quad (9)$$
However, this often leads to a strong emphasis on the likelihood term, while allowing the marginal approximate posterior $q_\phi(\mathbf{z}) = \mathbb{E}_{p_{\text{data}}(\mathbf{x})}\left[ q_\phi(\mathbf{z} \mid \mathbf{x}) \right]$ to drift away from the prior distribution, hence increasing the KL-divergence. This also decreases the quality of samples drawn from the model. To prevent this behaviour, we use the Generalised ELBO with Constrained Optimisation (GECO) objective from Rezende & Viola (2018) instead, which changes the learning problem to minimising the KL-divergence subject to a reconstruction constraint. Let $C \in \mathbb{R}$ be the minimum allowed reconstruction log-likelihood; GECO then uses Lagrange multipliers to solve the following problem,
$$\theta^\star, \phi^\star = \operatorname*{argmin}_{\theta, \phi} \; \mathrm{KL}\!\left( q_\phi(\mathbf{z}^c, \mathbf{z}^m \mid \mathbf{x}) \,\|\, p_\theta(\mathbf{z}^c, \mathbf{z}^m) \right) \quad \text{such that} \quad \mathbb{E}_{q_\phi(\mathbf{z}^c, \mathbf{z}^m \mid \mathbf{x})}\left[ \log p_\theta(\mathbf{x} \mid \mathbf{z}^c, \mathbf{z}^m) \right] \ge C. \quad (10)$$
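For completeness, the following sketch evaluates the two terms of the ELBO in Equation (9) for a single Gaussian latent: the Gaussian reconstruction log-likelihood with fixed scale, and the analytic KL between the approximate posterior and the (autoregressive) prior. In GENESIS the KL is accumulated over both the mask and component latents across all K steps; the variable names and the default `sigma_x` below are assumptions made for the example.

```python
import torch
from torch.distributions import Normal, kl_divergence

def elbo_terms(x, recon_mean, q_mean, q_logstd, p_mean, p_logstd, sigma_x=0.7):
    """x, recon_mean: image tensors; the remaining arguments parameterise q(z|x) and p(z)."""
    # reconstruction term E_q[log p(x | z)]: Gaussian likelihood with fixed scale sigma_x
    log_lik = Normal(recon_mean, sigma_x).log_prob(x).sum()
    # KL(q(z|x) || p(z)), computed analytically and summed over latent dimensions
    kl = kl_divergence(Normal(q_mean, q_logstd.exp()),
                       Normal(p_mean, p_logstd.exp())).sum()
    elbo = log_lik - kl
    return elbo, log_lik, kl
```

The same two quantities are what the GECO objective of Equation (10) rebalances: the KL term is minimised directly while the reconstruction term is only required to stay above the target C.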
The authors propose a probabilistic generative latent variable model representing a 2D image as a mixture of latent components. It formulates the scene generation problem as a spatial Gaussian mixture model where each Gaussian component comes from the decoding of an object-centric latent variable. The contribution of the proposed method over previous works is the introduction of an autoregressive prior on the component latents. This allows the model to capture dependencies among different components and thus helps generate coherent scenes, which has not been shown in previous work. In the experiments, the authors compare GENESIS with MONet and VAEs qualitatively and quantitatively and show that the model outperforms the baselines in terms of both scene decomposition and generation.
SP:00fed729e27d8c9d2a3d96fdb7e54c3e5cc0a94d
MACER: Attack-free and Scalable Robust Training via Maximizing Certified Radius
1 INTRODUCTION . Modern neural network classifiers are able to achieve very high accuracy on image classification tasks but are sensitive to small , adversarially chosen perturbations to the inputs ( Szegedy et al. , 2013 ; Biggio et al. , 2013 ) . Given an image x that is correctly classified by a neural network , a malicious attacker may find a small adversarial perturbation δ such that the perturbed image x + δ , though visually indistinguishable from the original image , is assigned to a wrong class with high confidence by the network . Such vulnerability creates security concerns in many real-world applications . Researchers have proposed a variety of defense methods to improve the robustness of neural networks . Most of the existing defenses are based on adversarial training ( Szegedy et al. , 2013 ; Madry et al. , 2017 ; Goodfellow et al. , 2015 ; Huang et al. , 2015 ; Athalye et al. , 2018 ; Ding et al. , 2020 ) . During training , these methods first learn on-the-fly adversarial examples of the inputs with multiple attack iterations and then update model parameters using these perturbed samples together with the original labels . However , such approaches depend on a particular ( class of ) attack method . It can not be formally guaranteed whether the resulting model is also robust against other attacks . Moreover , attack iterations are usually quite expensive . As a result , adversarial training runs very slowly . Another line of algorithms trains robust models by maximizing the certified radius provided by robust certification methods ( Weng et al. , 2018 ; Wong & Kolter , 2018 ; Zhang et al. , 2018 ; Mirman et al. , 2018 ; Wang et al. , 2018 ; Gowal et al. , 2018 ; Zhang et al. , 2019c ) . Using linear or convex relaxations of fully connected ReLU networks , a robust certification method computes a “ safe radius ” r for a classifier at a given input such that at any point within the neighboring radius-r ball of the input , the classifier is guaranteed to have unchanged predictions . However , the certification methods are usually computationally expensive and can only handle shallow neural networks with ReLU activations , so these training algorithms have troubles in scaling to modern networks . In this work , we propose an attack-free and scalable method to train robust deep neural networks . We mainly leverage the recent randomized smoothing technique ( Cohen et al. , 2019 ) . A randomized smoothed classifier g for an arbitrary classifier f is defined as g ( x ) = Eηf ( x + η ) , in which η ∼ N ( 0 , σ2I ) . While Cohen et al . ( 2019 ) derived how to analytically compute the certified radius of the randomly smoothed classifier g , they did not show how to maximize that radius to make the classifier g robust . Salman et al . ( 2019 ) proposed SmoothAdv to improve the robustness of g , but it still relies on the expensive attack iterations . Instead of adversarial training , we propose to learn robust models by directly taking the certified radius into the objective . We outline a few challenging desiderata any practical instantiation of this idea would however have to satisfy , and provide approaches to address each of these in turn . A discussion of these desiderata , as well as a detailed implementation of our approach is provided in Section 4 . And as we show both theoretically and empirically , our method is numerically stable and accounts for both classification accuracy and robustness . 
Our contributions are summarized as follows : • We propose an attack-free and scalable robust training algorithm by MAximizing the CErtified Radius ( MACER ) . MACER has the following advantages compared to previous works : – Different from adversarial training , we train robust models by directly maximizing the certified radius without specifying any attack strategies , and the learned model can achieve provable robustness against any possible attack in the certified region . Additionally , by avoiding time-consuming attack iterations , our proposed algorithm runs much faster than adversarial training . – Different from other methods ( Wong & Kolter , 2018 ) that maximize the certified radius but are not scalable to deep neural networks , our method can be applied to architectures of any size . This makes our algorithm more practical in real scenarios . • We empirically evaluate our proposed method through extensive experiments on Cifar-10 , ImageNet , MNIST , and SVHN . On all tasks , MACER achieves better performance than state-of-the-art algorithms . MACER is also exceptionally fast . For example , on ImageNet , MACER uses 39 % less training time than adversarial training but still performs better . 2 RELATED WORK . Neural networks trained by standard SGD are not robust – a small and human imperceptible perturbation can easily change the prediction of a network . In the white-box setting , methods have been proposed to construct adversarial examples with small ` ∞ or ` 2 perturbations ( Goodfellow et al. , 2015 ; Madry et al. , 2017 ; Carlini & Wagner , 2016 ; Moosavi-Dezfooli et al. , 2015 ) . Furthermore , even in the black-box setting where the adversary does not have access to the model structure and parameters , adversarial examples can be found by either transfer attack ( Papernot et al. , 2016 ) or optimization-based approaches ( Chen et al. , 2017 ; Rauber et al. , 2017 ; Cheng et al. , 2019 ) . It is thus important to study how to improve the robustness of neural networks against adversarial examples . Adversarial training So far , adversarial training has been the most successful robust training method according to many recent studies . Adversarial training was first proposed in Szegedy et al . ( 2013 ) and Goodfellow et al . ( 2015 ) , where they showed that adding adversarial examples to the training set can improve the robustness against such attacks . More recently , Madry et al . ( 2017 ) formulated adversarial training as a min-max optimization problem and demonstrated that adversarial training with PGD attack leads to empirical robust models . Zhang et al . ( 2019b ) further decomposed the robust error as the sum of natural error and boundary error for better performance . Finally , Gao et al . ( 2019 ) proved the convergence of adversarial training . Although models obtained by adversarial training empirically achieve good performance , they do not have certified error guarantees . Despite the popularity of PGD-based adversarial training , one major issue is that its speed is too slow . Some recent papers propose methods to accelerate adversarial training . For example , Freem ( Shafahi et al. , 2019 ) replays an adversarial example several times in one iteration , YOPO-m-n ( Zhang et al. , 2019a ) restricts back propagation in PGD within the first layer , and Qin et al . ( 2019 ) estimates the adversary with local linearization . 
Robustness certification and provable defense Many defense algorithms proposed in the past few years were claimed to be effective , but Athalye et al . ( 2018 ) showed that most of them are based on “ gradient masking ” and can be bypassed by more carefully designed attacks . It is thus important to study how to measure the provable robustness of a network . A robustness certification algorithm takes a classifier f and an input point x as inputs , and outputs a “ safe radius ” r such that for any δ subject to ‖δ‖ ≤ r , f ( x ) = f ( x + δ ) . Several algorithms have been proposed recently , including the convex polytope technique ( Wong & Kolter , 2018 ) , abstract interpretation methods ( Singh et al. , 2018 ; Gehr et al. , 2018 ) and the recursive propagation algrithms ( Weng et al. , 2018 ; Zhang et al. , 2018 ) . These methods can provide attack-agnostic robust error lower bounds . Moreover , to achieve networks with nontrivial certified robust error , one can train a network by minimizing the certified robust error computed by the above-mentioned methods , and several algorithms have been proposed in the past year ( Wong & Kolter , 2018 ; Wong et al. , 2018 ; Wang et al. , 2018 ; Gowal et al. , 2018 ; Zhang et al. , 2019c ; Mirman et al. , 2018 ) . Unfortunately , they can only be applied to shallow networks with limited activation and run very slowly . More recently , researchers found a new class of certification methods called randomized smoothing . The idea of randomization has been used for defense in several previous works ( Xie et al. , 2017 ; Liu et al. , 2018 ) but without any certification . Later on , Lecuyer et al . ( 2018 ) first showed that if a Gaussian random noise is added to the input or any intermediate layer . A certified guarantee on small ` 2 perturbation can be computed via differential privacy . Li et al . ( 2018 ) and Cohen et al . ( 2019 ) then provided improved ways to compute the ` 2 certified robust error for Gaussian smoothed models . In this paper , we propose a new algorithm to train on these ` 2 certified error bounds to significantly reduce the certified error and achieve better provable adversarial robustness . 3 PRELIMINARIES . Problem setup Consider a standard classification task with an underlying data distribution pdata over pairs of examples x ∈ X ⊂ Rd and corresponding labels y ∈ Y = { 1 , 2 , · · · , K } . Usually pdata is unknown and we can only access a training set S = { ( x1 , y1 ) , · · · , ( xn , yn ) } in which ( xi , yi ) is i.i.d . drawn from pdata , i = 1 , 2 , · · · , n. The empirical data distribution ( uniform distribution over S ) is denoted by p̂data . Let f ∈ F be the classifier of interest that maps any x ∈ X to Y . Usually f is parameterized by a set of parameters θ , so we also write it as fθ . We call x′ = x + δ an adversarial example of x to classifier fθ if fθ can correctly classify x but assigns a different label to x′ . Following many previous works ( Cohen et al. , 2019 ; Salman et al. , 2019 ) , we focus on the setting where δ satisfies ` 2 norm constraint ‖δ‖2 ≤ . We say that the model fθ is l 2-robust at ( x , y ) if it correctly classifies x as y and for any ‖δ‖2 ≤ , the model classifies x+δ as y . In the problem of robust classification , our ultimate goal is to find a model that is l 2-robust at ( x , y ) with high probability over ( x , y ) ∼ pdata for a given > 0 . Neural network In image classification we often use deep neural networks . 
Let uθ : X → RK be a neural network , whose output at input x is a vector ( u1θ ( x ) , ... , u K θ ( x ) ) . The classifier induced by uθ ( x ) is fθ ( x ) = arg maxc∈Y u c θ ( x ) . In order to train θ by minimizing a loss function such as cross entropy , we always use a softmax layer on uθ to normalize it into a probability distribution . The resulting network is zθ ( · ; β ) : X → P ( K ) 1 , which is given by zcθ ( x ; β ) = e βucθ ( x ) / ∑ c′∈Y e βuc ′ θ ( x ) , ∀c ∈ Y , β is the inverse temperature . For simplicity , we will use zθ ( x ) to refer to zθ ( x ; β ) when the meaning is clear from context . The vector zθ ( x ) = ( z 1 θ ( x ) , · · · , zKθ ( x ) ) is commonly regarded as the “ likelihood vector ” , and zcθ ( x ) measures how likely input x belongs to class c. Robust radius By definition , the l 2-robustness of fθ at a data point ( x , y ) depends on the radius of the largest l2 ball centered at x in which fθ does not change its prediction . This radius is called the robust radius , which is formally defined as R ( fθ ; x , y ) = { inf fθ ( x′ ) 6=fθ ( x ) ‖x′ − x‖2 , when fθ ( x ) = y 0 , when fθ ( x ) 6= y ( 1 ) Recall that our ultimate goal is to train a classifier which is l 2-robust at ( x , y ) with high probability over the sampling of ( x , y ) ∼ pdata . Mathematically the goal can be expressed as to minimize the expectation of the 0/1 robust classification error . The error is defined as l 0/1 −robust ( fθ ; x , y ) : = 1− 1 { R ( fθ ; x , y ) ≥ } , ( 2 ) and the goal is to minimize its expectation over the population L 0/1 −robust ( fθ ) : = E ( x , y ) ∼pdata l 0/1 −robust ( fθ ; x , y ) . ( 3 ) 1The probability simplex in RK . It is thus quite natural to improve model robustness via maximizing the robust radius . Unfortunately , computing the robust radius ( 1 ) of a classifier induced by a deep neural network is very difficult . Weng et al . ( 2018 ) showed that computing the l1 robust radius of a deep neural network is NP-hard . Although there is no result for the l2 radius yet , it is very likely that computing the l2 robust radius is also NP-hard . Certified radius Many previous works proposed certification methods that seek to derive a tight lower bound of R ( fθ ; x , y ) for neural networks ( see Section 2 for related work ) . We call this lower bound certified radius and denote it by CR ( fθ ; x , y ) . The certified radius satisfies 0 ≤ CR ( fθ ; x , y ) ≤ R ( fθ ; x , y ) for any fθ , x , y . The certified radius leads to a guaranteed upper bound of the 0/1 robust classification error , which is called 0/1 certified robust error . The 0/1 certified robust error of classifier fθ on sample ( x , y ) is defined as l 0/1 −certified ( fθ ; x , y ) : = 1− 1 { CR ( fθ ; x , y ) ≥ } ( 4 ) i.e . a sample is counted as correct only if the certified radius reaches . The expectation of certified robust error over ( x , y ) ∼ pdata serves as a performance metric of the provable robustness : L 0/1 −certified ( fθ ) : = E ( x , y ) ∼pdata l 0/1 −certified ( fθ ; x , y ) ( 5 ) Recall that CR ( fθ ; x , y ) is a lower bound of the true robust radius , which immediately implies that L 0/1 −certified ( fθ ) ≥ L 0/1 −robust ( fθ ) . Therefore , a small 0/1 certified robust error leads to a small 0/1 robust classification error . Randomized smoothing In this work , we use the recent randomized smoothing technique ( Cohen et al. , 2019 ) , which is scalable to any architectures , to obtain the certified radius of smoothed deep neural networks . 
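The two quantities defined above, the inverse-temperature softmax z_theta(x; beta) and the 0/1 certified robust error of Equations (4)-(5), can be computed directly; the short NumPy sketch below does so. The default value of `beta` and the function names are illustrative choices made for this example, not values confirmed by the paper.

```python
import numpy as np

def soft_likelihood(logits, beta=16.0):
    """logits: (K,) network outputs u_theta(x); returns the likelihood vector z_theta(x; beta)."""
    scaled = beta * logits
    scaled = scaled - scaled.max()          # shift for numerical stability
    e = np.exp(scaled)
    return e / e.sum()

def certified_robust_error(certified_radii, epsilon):
    """certified_radii: (n,) certified radius per sample (0 for misclassified samples).
    Returns the empirical 0/1 certified robust error at radius epsilon (Eq. 5)."""
    return float(np.mean(certified_radii < epsilon))
```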
The key part of randomized smoothing is to use the smoothed version of $f_\theta$, which is denoted by $g_\theta$, to make predictions. The formulation of $g_\theta$ is defined as follows.

Definition 1. For an arbitrary classifier $f_\theta \in \mathcal{F}$ and $\sigma > 0$, the smoothed classifier $g_\theta$ of $f_\theta$ is defined as
$$g_\theta(x) = \operatorname*{arg\,max}_{c \in \mathcal{Y}} \; \mathbb{P}_{\eta \sim \mathcal{N}(0, \sigma^2 I)}\left( f_\theta(x + \eta) = c \right). \quad (6)$$
In short, the smoothed classifier $g_\theta(x)$ returns the label most likely to be returned by $f_\theta$ when its input is sampled from a Gaussian distribution $\mathcal{N}(x, \sigma^2 I)$ centered at $x$. Cohen et al. (2019) proves the following theorem, which provides an analytic form of certified radius:

Theorem 1. (Cohen et al., 2019) Let $f_\theta \in \mathcal{F}$, and $\eta \sim \mathcal{N}(0, \sigma^2 I)$. Let the smoothed classifier $g_\theta$ be defined as in (6). Let the ground truth of an input $x$ be $y$. If $g_\theta$ classifies $x$ correctly, i.e.
$$\mathbb{P}_\eta\left( f_\theta(x + \eta) = y \right) \ge \max_{y' \ne y} \mathbb{P}_\eta\left( f_\theta(x + \eta) = y' \right), \quad (7)$$
then $g_\theta$ is provably robust at $x$, with the certified radius given by
$$\mathrm{CR}(g_\theta; x, y) = \frac{\sigma}{2}\left[ \Phi^{-1}\!\left( \mathbb{P}_\eta(f_\theta(x + \eta) = y) \right) - \Phi^{-1}\!\left( \max_{y' \ne y} \mathbb{P}_\eta(f_\theta(x + \eta) = y') \right) \right] = \frac{\sigma}{2}\left[ \Phi^{-1}\!\left( \mathbb{E}_\eta \mathbf{1}_{\{f_\theta(x+\eta) = y\}} \right) - \Phi^{-1}\!\left( \max_{y' \ne y} \mathbb{E}_\eta \mathbf{1}_{\{f_\theta(x+\eta) = y'\}} \right) \right], \quad (8)$$
where $\Phi$ is the c.d.f. of the standard Gaussian distribution.
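Theorem 1 suggests a simple Monte Carlo estimate of the certified radius: sample Gaussian noise, record how often the base classifier returns each label, and plug the two leading frequencies into Equation (8). The sketch below does exactly that for illustration; a faithful certification procedure would replace the raw frequencies with statistical confidence bounds as in Cohen et al. (2019), which is omitted here, and the function name and sample count are assumptions.

```python
import numpy as np
from scipy.stats import norm

def estimated_certified_radius(classify, x, y, sigma, n_samples=1000):
    """classify: function mapping an input array to a class id (the base classifier f_theta)."""
    counts = {}
    for _ in range(n_samples):
        c = classify(x + sigma * np.random.randn(*x.shape))
        counts[c] = counts.get(c, 0) + 1
    p_y = counts.get(y, 0) / n_samples                                   # P_eta(f(x+eta) = y)
    p_other = max((v for k, v in counts.items() if k != y), default=0) / n_samples
    if p_y <= p_other:          # the smoothed classifier does not predict y: radius is 0
        return 0.0
    p_y = min(p_y, 1 - 1e-6)    # keep the Gaussian quantile function finite
    p_other = max(p_other, 1e-6)
    return 0.5 * sigma * (norm.ppf(p_y) - norm.ppf(p_other))             # Eq. (8)
```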
This paper improves the robustness of smoothed classifiers by maximizing the certified radius, which is more efficient than adversarially training the smoothed classifier and achieves a higher average robust radius and better certified robustness when the radius is not much larger than the training sigma. It proposes a novel objective derived by decomposing the 0/1 certified loss into the sum of a 0/1 classification error and a 0/1 robustness error. Three conditions are identified to make the optimization tractable. Two surrogate losses (cross-entropy and a hinge loss on the certified radius) for the two 0/1 errors are proposed as upper bounds of the 0/1 loss. The certified radius is derived as a function of the logits of Soft-RS to make the hinge loss differentiable. Numerical stability of the proposed objective is also analyzed by showing that its gradient is bounded.
SP:938f9b4e59217d2e78c405464b452ddc8ba5c459
MACER: Attack-free and Scalable Robust Training via Maximizing Certified Radius
1 INTRODUCTION . Modern neural network classifiers are able to achieve very high accuracy on image classification tasks but are sensitive to small , adversarially chosen perturbations to the inputs ( Szegedy et al. , 2013 ; Biggio et al. , 2013 ) . Given an image x that is correctly classified by a neural network , a malicious attacker may find a small adversarial perturbation δ such that the perturbed image x + δ , though visually indistinguishable from the original image , is assigned to a wrong class with high confidence by the network . Such vulnerability creates security concerns in many real-world applications . Researchers have proposed a variety of defense methods to improve the robustness of neural networks . Most of the existing defenses are based on adversarial training ( Szegedy et al. , 2013 ; Madry et al. , 2017 ; Goodfellow et al. , 2015 ; Huang et al. , 2015 ; Athalye et al. , 2018 ; Ding et al. , 2020 ) . During training , these methods first learn on-the-fly adversarial examples of the inputs with multiple attack iterations and then update model parameters using these perturbed samples together with the original labels . However , such approaches depend on a particular ( class of ) attack method . It can not be formally guaranteed whether the resulting model is also robust against other attacks . Moreover , attack iterations are usually quite expensive . As a result , adversarial training runs very slowly . Another line of algorithms trains robust models by maximizing the certified radius provided by robust certification methods ( Weng et al. , 2018 ; Wong & Kolter , 2018 ; Zhang et al. , 2018 ; Mirman et al. , 2018 ; Wang et al. , 2018 ; Gowal et al. , 2018 ; Zhang et al. , 2019c ) . Using linear or convex relaxations of fully connected ReLU networks , a robust certification method computes a “ safe radius ” r for a classifier at a given input such that at any point within the neighboring radius-r ball of the input , the classifier is guaranteed to have unchanged predictions . However , the certification methods are usually computationally expensive and can only handle shallow neural networks with ReLU activations , so these training algorithms have troubles in scaling to modern networks . In this work , we propose an attack-free and scalable method to train robust deep neural networks . We mainly leverage the recent randomized smoothing technique ( Cohen et al. , 2019 ) . A randomized smoothed classifier g for an arbitrary classifier f is defined as g ( x ) = Eηf ( x + η ) , in which η ∼ N ( 0 , σ2I ) . While Cohen et al . ( 2019 ) derived how to analytically compute the certified radius of the randomly smoothed classifier g , they did not show how to maximize that radius to make the classifier g robust . Salman et al . ( 2019 ) proposed SmoothAdv to improve the robustness of g , but it still relies on the expensive attack iterations . Instead of adversarial training , we propose to learn robust models by directly taking the certified radius into the objective . We outline a few challenging desiderata any practical instantiation of this idea would however have to satisfy , and provide approaches to address each of these in turn . A discussion of these desiderata , as well as a detailed implementation of our approach is provided in Section 4 . And as we show both theoretically and empirically , our method is numerically stable and accounts for both classification accuracy and robustness . 
Our contributions are summarized as follows : • We propose an attack-free and scalable robust training algorithm by MAximizing the CErtified Radius ( MACER ) . MACER has the following advantages compared to previous works : – Different from adversarial training , we train robust models by directly maximizing the certified radius without specifying any attack strategies , and the learned model can achieve provable robustness against any possible attack in the certified region . Additionally , by avoiding time-consuming attack iterations , our proposed algorithm runs much faster than adversarial training . – Different from other methods ( Wong & Kolter , 2018 ) that maximize the certified radius but are not scalable to deep neural networks , our method can be applied to architectures of any size . This makes our algorithm more practical in real scenarios . • We empirically evaluate our proposed method through extensive experiments on Cifar-10 , ImageNet , MNIST , and SVHN . On all tasks , MACER achieves better performance than state-of-the-art algorithms . MACER is also exceptionally fast . For example , on ImageNet , MACER uses 39 % less training time than adversarial training but still performs better . 2 RELATED WORK . Neural networks trained by standard SGD are not robust – a small and human imperceptible perturbation can easily change the prediction of a network . In the white-box setting , methods have been proposed to construct adversarial examples with small ` ∞ or ` 2 perturbations ( Goodfellow et al. , 2015 ; Madry et al. , 2017 ; Carlini & Wagner , 2016 ; Moosavi-Dezfooli et al. , 2015 ) . Furthermore , even in the black-box setting where the adversary does not have access to the model structure and parameters , adversarial examples can be found by either transfer attack ( Papernot et al. , 2016 ) or optimization-based approaches ( Chen et al. , 2017 ; Rauber et al. , 2017 ; Cheng et al. , 2019 ) . It is thus important to study how to improve the robustness of neural networks against adversarial examples . Adversarial training So far , adversarial training has been the most successful robust training method according to many recent studies . Adversarial training was first proposed in Szegedy et al . ( 2013 ) and Goodfellow et al . ( 2015 ) , where they showed that adding adversarial examples to the training set can improve the robustness against such attacks . More recently , Madry et al . ( 2017 ) formulated adversarial training as a min-max optimization problem and demonstrated that adversarial training with PGD attack leads to empirical robust models . Zhang et al . ( 2019b ) further decomposed the robust error as the sum of natural error and boundary error for better performance . Finally , Gao et al . ( 2019 ) proved the convergence of adversarial training . Although models obtained by adversarial training empirically achieve good performance , they do not have certified error guarantees . Despite the popularity of PGD-based adversarial training , one major issue is that its speed is too slow . Some recent papers propose methods to accelerate adversarial training . For example , Freem ( Shafahi et al. , 2019 ) replays an adversarial example several times in one iteration , YOPO-m-n ( Zhang et al. , 2019a ) restricts back propagation in PGD within the first layer , and Qin et al . ( 2019 ) estimates the adversary with local linearization . 
Robustness certification and provable defense Many defense algorithms proposed in the past few years were claimed to be effective , but Athalye et al . ( 2018 ) showed that most of them are based on “ gradient masking ” and can be bypassed by more carefully designed attacks . It is thus important to study how to measure the provable robustness of a network . A robustness certification algorithm takes a classifier f and an input point x as inputs , and outputs a “ safe radius ” r such that for any δ subject to ‖δ‖ ≤ r , f ( x ) = f ( x + δ ) . Several algorithms have been proposed recently , including the convex polytope technique ( Wong & Kolter , 2018 ) , abstract interpretation methods ( Singh et al. , 2018 ; Gehr et al. , 2018 ) and the recursive propagation algrithms ( Weng et al. , 2018 ; Zhang et al. , 2018 ) . These methods can provide attack-agnostic robust error lower bounds . Moreover , to achieve networks with nontrivial certified robust error , one can train a network by minimizing the certified robust error computed by the above-mentioned methods , and several algorithms have been proposed in the past year ( Wong & Kolter , 2018 ; Wong et al. , 2018 ; Wang et al. , 2018 ; Gowal et al. , 2018 ; Zhang et al. , 2019c ; Mirman et al. , 2018 ) . Unfortunately , they can only be applied to shallow networks with limited activation and run very slowly . More recently , researchers found a new class of certification methods called randomized smoothing . The idea of randomization has been used for defense in several previous works ( Xie et al. , 2017 ; Liu et al. , 2018 ) but without any certification . Later on , Lecuyer et al . ( 2018 ) first showed that if a Gaussian random noise is added to the input or any intermediate layer . A certified guarantee on small ` 2 perturbation can be computed via differential privacy . Li et al . ( 2018 ) and Cohen et al . ( 2019 ) then provided improved ways to compute the ` 2 certified robust error for Gaussian smoothed models . In this paper , we propose a new algorithm to train on these ` 2 certified error bounds to significantly reduce the certified error and achieve better provable adversarial robustness . 3 PRELIMINARIES . Problem setup Consider a standard classification task with an underlying data distribution pdata over pairs of examples x ∈ X ⊂ Rd and corresponding labels y ∈ Y = { 1 , 2 , · · · , K } . Usually pdata is unknown and we can only access a training set S = { ( x1 , y1 ) , · · · , ( xn , yn ) } in which ( xi , yi ) is i.i.d . drawn from pdata , i = 1 , 2 , · · · , n. The empirical data distribution ( uniform distribution over S ) is denoted by p̂data . Let f ∈ F be the classifier of interest that maps any x ∈ X to Y . Usually f is parameterized by a set of parameters θ , so we also write it as fθ . We call x′ = x + δ an adversarial example of x to classifier fθ if fθ can correctly classify x but assigns a different label to x′ . Following many previous works ( Cohen et al. , 2019 ; Salman et al. , 2019 ) , we focus on the setting where δ satisfies ` 2 norm constraint ‖δ‖2 ≤ . We say that the model fθ is l 2-robust at ( x , y ) if it correctly classifies x as y and for any ‖δ‖2 ≤ , the model classifies x+δ as y . In the problem of robust classification , our ultimate goal is to find a model that is l 2-robust at ( x , y ) with high probability over ( x , y ) ∼ pdata for a given > 0 . Neural network In image classification we often use deep neural networks . 
Let uθ : X → RK be a neural network , whose output at input x is a vector ( u1θ ( x ) , ... , u K θ ( x ) ) . The classifier induced by uθ ( x ) is fθ ( x ) = arg maxc∈Y u c θ ( x ) . In order to train θ by minimizing a loss function such as cross entropy , we always use a softmax layer on uθ to normalize it into a probability distribution . The resulting network is zθ ( · ; β ) : X → P ( K ) 1 , which is given by zcθ ( x ; β ) = e βucθ ( x ) / ∑ c′∈Y e βuc ′ θ ( x ) , ∀c ∈ Y , β is the inverse temperature . For simplicity , we will use zθ ( x ) to refer to zθ ( x ; β ) when the meaning is clear from context . The vector zθ ( x ) = ( z 1 θ ( x ) , · · · , zKθ ( x ) ) is commonly regarded as the “ likelihood vector ” , and zcθ ( x ) measures how likely input x belongs to class c. Robust radius By definition , the l 2-robustness of fθ at a data point ( x , y ) depends on the radius of the largest l2 ball centered at x in which fθ does not change its prediction . This radius is called the robust radius , which is formally defined as R ( fθ ; x , y ) = { inf fθ ( x′ ) 6=fθ ( x ) ‖x′ − x‖2 , when fθ ( x ) = y 0 , when fθ ( x ) 6= y ( 1 ) Recall that our ultimate goal is to train a classifier which is l 2-robust at ( x , y ) with high probability over the sampling of ( x , y ) ∼ pdata . Mathematically the goal can be expressed as to minimize the expectation of the 0/1 robust classification error . The error is defined as l 0/1 −robust ( fθ ; x , y ) : = 1− 1 { R ( fθ ; x , y ) ≥ } , ( 2 ) and the goal is to minimize its expectation over the population L 0/1 −robust ( fθ ) : = E ( x , y ) ∼pdata l 0/1 −robust ( fθ ; x , y ) . ( 3 ) 1The probability simplex in RK . It is thus quite natural to improve model robustness via maximizing the robust radius . Unfortunately , computing the robust radius ( 1 ) of a classifier induced by a deep neural network is very difficult . Weng et al . ( 2018 ) showed that computing the l1 robust radius of a deep neural network is NP-hard . Although there is no result for the l2 radius yet , it is very likely that computing the l2 robust radius is also NP-hard . Certified radius Many previous works proposed certification methods that seek to derive a tight lower bound of R ( fθ ; x , y ) for neural networks ( see Section 2 for related work ) . We call this lower bound certified radius and denote it by CR ( fθ ; x , y ) . The certified radius satisfies 0 ≤ CR ( fθ ; x , y ) ≤ R ( fθ ; x , y ) for any fθ , x , y . The certified radius leads to a guaranteed upper bound of the 0/1 robust classification error , which is called 0/1 certified robust error . The 0/1 certified robust error of classifier fθ on sample ( x , y ) is defined as l 0/1 −certified ( fθ ; x , y ) : = 1− 1 { CR ( fθ ; x , y ) ≥ } ( 4 ) i.e . a sample is counted as correct only if the certified radius reaches . The expectation of certified robust error over ( x , y ) ∼ pdata serves as a performance metric of the provable robustness : L 0/1 −certified ( fθ ) : = E ( x , y ) ∼pdata l 0/1 −certified ( fθ ; x , y ) ( 5 ) Recall that CR ( fθ ; x , y ) is a lower bound of the true robust radius , which immediately implies that L 0/1 −certified ( fθ ) ≥ L 0/1 −robust ( fθ ) . Therefore , a small 0/1 certified robust error leads to a small 0/1 robust classification error . Randomized smoothing In this work , we use the recent randomized smoothing technique ( Cohen et al. , 2019 ) , which is scalable to any architectures , to obtain the certified radius of smoothed deep neural networks . 
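To connect the preliminaries above to a trainable objective, the sketch below replaces the hard indicators in the certified radius with the expected soft likelihood under Gaussian noise and applies a hinge penalty when the resulting soft radius falls short of a margin. This is a hedged illustration in the spirit of the approach rather than the paper's reference implementation; the constants (`beta`, `gamma`, the clamping range, the number of noise samples) and the treatment of misclassified points are assumptions.

```python
import torch
from torch.distributions import Normal

std_normal = Normal(0.0, 1.0)

def soft_certified_radius(network, x, y, sigma, beta=16.0, n_noise=16):
    """network: maps a batch of inputs to logits; x: a single input; y: its label."""
    noisy = x.unsqueeze(0) + sigma * torch.randn(n_noise, *x.shape)
    z = torch.softmax(beta * network(noisy), dim=-1).mean(dim=0)   # E_eta[z_theta(x + eta)]
    p_y = z[y].clamp(1e-4, 1 - 1e-4)
    p_other = z.clone()
    p_other[y] = 0.0
    p_other = p_other.max().clamp(1e-4, 1 - 1e-4)
    # differentiable analogue of Eq. (8) with indicators replaced by soft likelihoods
    return 0.5 * sigma * (std_normal.icdf(p_y) - std_normal.icdf(p_other))

def robustness_hinge(radius, gamma=8.0):
    # penalise only samples whose soft certified radius falls short of the margin gamma
    return torch.relu(gamma - radius)
```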
The key part of randomized smoothing is to use the smoothed version of $f_\theta$, which is denoted by $g_\theta$, to make predictions. The formulation of $g_\theta$ is defined as follows.

Definition 1. For an arbitrary classifier $f_\theta \in \mathcal{F}$ and $\sigma > 0$, the smoothed classifier $g_\theta$ of $f_\theta$ is defined as
$$g_\theta(x) = \operatorname*{arg\,max}_{c \in \mathcal{Y}} \; \mathbb{P}_{\eta \sim \mathcal{N}(0, \sigma^2 I)}\left( f_\theta(x + \eta) = c \right). \quad (6)$$
In short, the smoothed classifier $g_\theta(x)$ returns the label most likely to be returned by $f_\theta$ when its input is sampled from a Gaussian distribution $\mathcal{N}(x, \sigma^2 I)$ centered at $x$. Cohen et al. (2019) proves the following theorem, which provides an analytic form of certified radius:

Theorem 1. (Cohen et al., 2019) Let $f_\theta \in \mathcal{F}$, and $\eta \sim \mathcal{N}(0, \sigma^2 I)$. Let the smoothed classifier $g_\theta$ be defined as in (6). Let the ground truth of an input $x$ be $y$. If $g_\theta$ classifies $x$ correctly, i.e.
$$\mathbb{P}_\eta\left( f_\theta(x + \eta) = y \right) \ge \max_{y' \ne y} \mathbb{P}_\eta\left( f_\theta(x + \eta) = y' \right), \quad (7)$$
then $g_\theta$ is provably robust at $x$, with the certified radius given by
$$\mathrm{CR}(g_\theta; x, y) = \frac{\sigma}{2}\left[ \Phi^{-1}\!\left( \mathbb{P}_\eta(f_\theta(x + \eta) = y) \right) - \Phi^{-1}\!\left( \max_{y' \ne y} \mathbb{P}_\eta(f_\theta(x + \eta) = y') \right) \right] = \frac{\sigma}{2}\left[ \Phi^{-1}\!\left( \mathbb{E}_\eta \mathbf{1}_{\{f_\theta(x+\eta) = y\}} \right) - \Phi^{-1}\!\left( \max_{y' \ne y} \mathbb{E}_\eta \mathbf{1}_{\{f_\theta(x+\eta) = y'\}} \right) \right], \quad (8)$$
where $\Phi$ is the c.d.f. of the standard Gaussian distribution.
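Prediction with the smoothed classifier of Definition 1 amounts to a majority vote of the base classifier over Gaussian perturbations of the input, as in the short sketch below. A rigorous deployment would add the abstention test and confidence bounds of Cohen et al. (2019); those are omitted here, and the sample count is an illustrative choice.

```python
import numpy as np

def smoothed_predict(classify, x, sigma, n_samples=100):
    """classify: function mapping an input array to a class id (the base classifier f_theta).
    Returns the most frequent label of f_theta under Gaussian input noise (Definition 1)."""
    votes = {}
    for _ in range(n_samples):
        c = classify(x + sigma * np.random.randn(*x.shape))
        votes[c] = votes.get(c, 0) + 1
    return max(votes, key=votes.get)
```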
This paper proposes a new approach to training models robust to perturbations (or 'attacks') within an l_2 radius, by maximizing a surrogate---a soft randomized smoothing loss---for the *certified radius* (a lower bound for the l_2 attack radius) of the classifier. This approach has the advantage of not needing to explicitly train against specific attacks, and is thus much faster and easier to optimize. The authors provide certain theoretical guarantees and also demonstrate strong empirical results relative to two baseline approaches.
SP:938f9b4e59217d2e78c405464b452ddc8ba5c459
Variational Diffusion Autoencoders with Random Walk Sampling
Variational inference ( VI ) methods and especially variational autoencoders ( VAEs ) specify scalable generative models that enjoy an intuitive connection to manifold learning — with many default priors the posterior/likelihood pair q ( z|x ) /p ( x|z ) can be viewed as an approximate homeomorphism ( and its inverse ) between the data manifold and a latent Euclidean space . However , these approximations are well-documented to become degenerate in training . Unless the subjective prior is carefully chosen , the topologies of the prior and data distributions often will not match . Conversely , diffusion maps ( DM ) automatically infer the data topology and enjoy a rigorous connection to manifold learning , but do not scale easily or provide the inverse homeomorphism . In this paper , we propose a ) a principled measure for recognizing the mismatch between data and latent distributions and b ) a method that combines the advantages of variational inference and diffusion maps to learn a homeomorphic generative model . The measure , the locally bi-Lipschitz property , is a sufficient condition for a homeomorphism and easy to compute and interpret . The method , the variational diffusion autoencoder ( VDAE ) , is a novel generative algorithm that first infers the topology of the data distribution , then models a diffusion random walk over the data . To achieve efficient computation in VDAEs , we use stochastic versions of both variational inference and manifold learning optimization . We prove approximation theoretic results for the dimension dependence of VDAEs , and that locally isotropic sampling in the latent space results in a random walk over the reconstructed manifold . Finally , we demonstrate our method on various real and synthetic datasets , and show that it exhibits performance superior to other generative models . 1 INTRODUCTION . Recent developments in generative models such as variational auto-encoders ( VAEs , Kingma & Welling ( 2013 ) ) and generative adversarial networks ( GANs , Goodfellow et al . ( 2014 ) ) have made it possible to sample remarkably realistic points from complex high dimensional distributions at low computational cost . While their methods are very different — one is derived from variational inference and the other from game theory — their ends both involve learning smooth mappings from a user-defined prior distribution to the modeled distribution . These maps are closely tied to manifold learning when the prior is supported over a Euclidean space ( e.g . Gaussian or uniform priors ) and the data lie on a manifold ( also known as the Manifold Hypothesis , see Narayanan & Mitter ( 2010 ) ; Fefferman et al . ( 2016 ) ) . This is because manifolds themselves are defined by sets that have homeomorphisms to such spaces . Learning such maps is beneficial to any machine learning task , and may shed light on the success of VAEs and GANs in modeling complex distributions . Furthermore , the connection to manifold learning may explain why these generative models fail when they do . Known as posterior collapse in VAEs ( Alemi et al. , 2017 ; Zhao et al. , 2017 ; He et al. , 2019 ; Razavi et al. , 2019 ) and mode collapse in GANs ( Goodfellow , 2017 ) , both describe cases where the forward/reverse mapping to/from Euclidean space collapses large parts of the input to a single output . This violates the bijective requirement of the homeomorphic mapping . It also results in degenerate latent spaces and poor generative performance . 
A major cause of such failings is when the geometries of the prior and target data do not agree . We explore this issue of prior mismatch and previous treatments of it in Section 3 . Given their connection to manifold learning , it is natural to look to classical approaches in the field for ways to improve VAEs . One of the most principled methods is spectral learning ( Schölkopf et al. , 1998 ; Roweis & Saul , 2000 ; Belkin & Niyogi , 2002 ) which involves describing data from a manifold X ⊂ MX by the eigenfunctions of a kernel on MX . We focus specifically on DMs , where Coifman & Lafon ( 2006 ) show that normalizations of the kernel approximate a very specific diffusion process , the heat kernel over MX . A crucial property of the heat kernel is that , like its physical analogue , it defines a diffusion process that has a uniform stationary distribution — in other words , drawing from this stationary distribution draws uniformly from the data manifold . Moreover , Jones et al . ( 2008 ) established another crucial property of DMs , namely that distances in local neighborhoods in the eigenfunction space are nearly isometric to corresponding geodesic distances on the manifold . However , despite its strong theoretical guarantees , DMs are poorly equipped for large scale generative modeling as they are not easily scalable and do not provide an inverse mapping from the intrinsic feature space . In this paper we address issues in variational inference and manifold learning by combining ideas from both . Theory in manifold learning allows us to better recognize prior mismatch , whereas variational inference provides a method to learn the difficult to approximate inverse diffusion map . Our contributions : 1 ) We introduce the locally bi-Lipschitz property , a sufficient condition for a homeomorphism , for measuring the stability of a mapping between latent and data distributions . 2 ) We introduce VDAEs , a class of variational autoencoders whose encoder-decoder feedforward pass approximates the diffusion process on the data manifold with respect to a user-defined kernel k. 3 ) We show that deep neural networks are capable of learning such diffusion processes , and 4 ) that networks approximating this process produce random walks that have certain desirable properties , including well defined transition and stationary distributions . 5 ) Finally , we demonstrate the utility of the VDAE framework on a set of real and synthetic datasets , and show that they have superior performance and satisfy the locally bi-Lipschitz property where GANs and VAEs do not . 2 BACKGROUND . Variational inference ( VI , Jordan et al . ( 1999 ) ; Wainwright et al . ( 2008 ) ) is a machine learning method that combines Bayesian statistics and latent variable models to approximate some probability density p ( x ) . VI assumes and exploits a latent variable structure in the assumed data generation process , that the observations x ∼ p ( x ) are conditionally distributed given unobserved latent vari- ables z . By modeling the conditional distribution , then marginalizing over z , as in pθ ( x ) = ∫ z pθ ( x|z ) p ( z ) dz , ( 1 ) we obtain the model evidence , or likelihood that x could have instead been drawn from pθ ( x ) . Maximizing Eq . 1 leads to an algorithm for finding likely approximations of p ( x ) . 
As the cost of computing this integral scales exponentially with the dimension of z , we instead maximize the evidence lower bound ( ELBO ) : log pθ ( x ) ≥ −DKL ( q ( z|x ) ||p ( z ) ) + Ez∼q ( z|x ) [ log pθ ( x|z ) ] , ( 2 ) where q ( z|x ) is usually an approximation of pθ ( z|x ) . Optimizing the ELBO is sped up by taking stochastic gradients ( Hoffman et al. , 2013 ) , and further accelerated by learning a global function approximator qφ in an autoencoding structure ( Kingma & Welling , 2013 ) . Diffusion maps ( DMs , Coifman & Lafon ( 2006 ) ) on the other hand , are a class of kernel methods that perform non-linear dimensionality reduction on a set of observations X ⊆ MX , whereMX is the data manifold . Given a symmetric and positive kernel k , DM considers the induced random walk on the graph of X , where given x , y ∈ X , the transition probabilities p ( y|x ) = p ( x , y ) are row normalized versions of k ( x , y ) . Moreover , the diffusion map ψ embeds the data X ∈ Rm into the Euclidean space RD so that the diffusion distance is approximated by Euclidean distance . This is a powerful property , as it allows the arbitrarily complex random walk induced by k on MX to become an isotropic Gaussian random walk on ψ ( MX ) . SpectralNet is an algorithm introduced by algorithm in Shaham et al . ( 2018b ) to speed up the diffusion map . Until recently , the method ψk could only be computed via the eigendecomposition of K. As a result , DMs were only be tractable for small datasets , or on larger datasets by combining landmark-based estimates and Nystrom approximation techniques . However , Shaham et al . ( 2018b ) propose approximations of the function ψ itself in the case that the kernel k is symmetric . In particular , we will leverage SpectralNet to enforce our diffusion embedding prior . Locally bi-lipschitz coordinates by kernel eigenfunctions . ( Jones et al . ( 2008 ) ) analyzed the construction of local coordinates of Riemannian manifolds by Laplacian eigenfunctions and diffusion map coordinates . They establish , for all x ∈ X , the existence of some neighborhood U ( x ) and d spectral coordinates given U ( x ) that define a bi-Lipschitz mapping from U ( x ) to Rd . With a smooth compact Riemannian manifold , U ( x ) can be chosen to be a geodesic ball with radius a constant multiple of the inradius ( the radius of the largest possible ball around x without intersecting with the manifold boundary ) , where the constant is uniform for all x , but the indices of the d spectral coordinates as well as the local bi-Lipschitz constants may depend on x . Specifically , the Lipschitz constants involve inverse of the inradius at x multiplied again by some global constants . For completeness we give a simplified statement of the Jones et al . ( 2008 ) result in the supplementary material . Using the compactness of the manifold , one can always cover the manifold with m many neighborhoods ( geodesic balls ) on which the bi-Lipschitz property in Jones et al . ( 2008 ) holds . As a result , there are a total of D spectral coordinates , D ≤ md ( in practice D is much smaller than md , since the selected spectral coordinates in the proof of Jones et al . ( 2008 ) tend to be low-frequency ones , and thus the selection on different neighborhoods tend to overlap ) , such that on each of them neighborhoods , there exists a subset of d spectral coordinates out of the D ones which are bi-Lipschitz on the neighborhood , and the Lipschitz constants can be bounded uniformly from below and above . 
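As a concrete reference for the diffusion-map construction described above, the sketch below builds a Gaussian kernel over the data, row-normalises it into the transition matrix of the induced random walk, and embeds each point using the leading non-trivial eigenvectors scaled by their eigenvalues. The bandwidth, embedding dimension, and diffusion time are illustrative choices, and the anisotropic density normalisation of Coifman & Lafon (2006) is omitted for brevity.

```python
import numpy as np

def diffusion_map(X, n_components=2, bandwidth=1.0, t=1):
    """X: (n, d) data matrix. Returns (n, n_components) diffusion coordinates."""
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq_dists / (2 * bandwidth ** 2))           # symmetric kernel k(x, y)
    P = K / K.sum(axis=1, keepdims=True)                   # row-normalised random-walk matrix
    eigvals, eigvecs = np.linalg.eig(P)
    order = np.argsort(-eigvals.real)
    eigvals, eigvecs = eigvals.real[order], eigvecs.real[:, order]
    # drop the trivial constant eigenvector (eigenvalue 1); scale by eigenvalue^t
    coords = eigvecs[:, 1:n_components + 1] * (eigvals[1:n_components + 1] ** t)
    return coords
```

In this embedding, Euclidean distances approximate diffusion distances on the data manifold, which is the property VDAE exploits when it models an isotropic random walk in the latent space.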
3 MOTIVATION AND RELATED WORK . Our proposed measure and model are motivated by degenerate latent spaces and poor generative performance in a variational inference framework arising from prior mismatch : when the topologies of the data and prior distributions do not agree . In real world data , this is usually due to two factors : first , when the dimensionalities of the distributions do not match , and second , when the geometries do not match . It is easy to see that homeomorphisms between the distributions will not exist in either case : pointwise correspondences cannot be established , and thus the bijective condition cannot be met . As a result , the model has poor generative performance — for each point not captured in the pointwise correspondence , the latent or generated distribution loses expressivity . Though the default choice of Gaussian distribution for p ( z ) is mathematically elegant and computationally expedient , there are many datasets , real and synthetic , for which this distribution is ill-suited . It is well known that spherical distributions are superior for modeling directional data ( Fisher et al. , 1993 ; Mardia , 2014 ) , which can be found in fields as diverse as bioinformatics ( Hamelryck et al. , 2006 ) , geology ( Peel et al. , 2001 ) , material science ( Krieger Lassen et al. , 1994 ) , natural image processing ( Bahlmann , 2006 ) , and simply preprocessed datasets . Additionally , observe that no homeomorphism exists between Rk and S1 for any k. For data distributed on more complex manifolds , the literature is sparse due to the difficult nature of such study . However , the manifold hypothesis is well-known and studied ( Narayanan & Mitter , 2010 ; Fefferman et al. , 2016 ) . Previous research on alleviating prior mismatch exists . Davidson et al . ( 2018 ) ; Xu & Durrett ( 2018 ) consider VAEs with the von-Mises Fisher prior , a geometrically hyperspherical prior . Rey et al . ( 2019 ) further model arbitrarily complex manifolds as priors , but require explicit knowledge of the manifold ( i.e . its projection map , scalar curvature , and volume ) . Finally , Tomczak & Welling ( 2017 ) consider mixtures of any pre-existing priors . But while these methods increase the expressivity of the priors available , they do not prescribe a method for choosing the prior itself . That responsibility still lies with the user . Conversely , our method chooses the best prior automatically . To our knowledge , ours is the first method to take a data-driven approach to prior selection . By using some data to inform the prior , we not only guarantee the existence of a homeomorphism between data and prior distributions , we explicitly define it by the learned diffusion map ψ̃ .
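As a rough illustration of how the locally bi-Lipschitz property can be probed empirically, the sketch below estimates, for each point, the spread of the ratios of latent-space to data-space distances over a k-nearest-neighbor ball; a mapping that is far from locally bi-Lipschitz shows ratios spanning many orders of magnitude. The neighborhood size k and the use of plain Euclidean distances are illustrative assumptions rather than the exact estimator used in the paper.

import numpy as np

def local_bilipschitz_scores(X, Z, k=10):
    """For each point i, return max/min of ||z_i - z_j|| / ||x_i - x_j||
    over its k nearest data-space neighbors. Values near 1 indicate a
    locally near-isometric (bi-Lipschitz) mapping from data to latent space."""
    n = X.shape[0]
    dx = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    dz = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
    scores = np.empty(n)
    for i in range(n):
        nbrs = np.argsort(dx[i])[1:k + 1]               # skip the point itself
        ratios = dz[i, nbrs] / np.maximum(dx[i, nbrs], 1e-12)
        scores[i] = ratios.max() / max(ratios.min(), 1e-12)
    return scores

# Usage: X holds data points, Z their latent embeddings (e.g. encoder outputs);
# scores = local_bilipschitz_scores(X, Z, k=10); print(np.median(scores))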
The paper studies the problem of density estimation and learning accurate generative models. The authors start from the observation that this problem has been approached either using variational inference models, which scale very well but whose approximations may lead to degenerate results in practice, or using diffusion maps, which scale poorly but are very effective in capturing the underlying data manifold. From here, the authors propose integrating the notion of a random walk from diffusion maps into VAEs to avoid degenerate conditions. The proposed method is first defined in its generality, then a practical implementation is presented, theoretical guarantees are provided, and empirical evidence of its effectiveness is reported.
SP:820a879346c3ba370348f1086dab5b9c256175e9
Variational Diffusion Autoencoders with Random Walk Sampling
Variational inference ( VI ) methods and especially variational autoencoders ( VAEs ) specify scalable generative models that enjoy an intuitive connection to manifold learning — with many default priors the posterior/likelihood pair q ( z|x ) /p ( x|z ) can be viewed as an approximate homeomorphism ( and its inverse ) between the data manifold and a latent Euclidean space . However , these approximations are well-documented to become degenerate in training . Unless the subjective prior is carefully chosen , the topologies of the prior and data distributions often will not match . Conversely , diffusion maps ( DM ) automatically infer the data topology and enjoy a rigorous connection to manifold learning , but do not scale easily or provide the inverse homeomorphism . In this paper , we propose a ) a principled measure for recognizing the mismatch between data and latent distributions and b ) a method that combines the advantages of variational inference and diffusion maps to learn a homeomorphic generative model . The measure , the locally bi-Lipschitz property , is a sufficient condition for a homeomorphism and easy to compute and interpret . The method , the variational diffusion autoencoder ( VDAE ) , is a novel generative algorithm that first infers the topology of the data distribution , then models a diffusion random walk over the data . To achieve efficient computation in VDAEs , we use stochastic versions of both variational inference and manifold learning optimization . We prove approximation theoretic results for the dimension dependence of VDAEs , and that locally isotropic sampling in the latent space results in a random walk over the reconstructed manifold . Finally , we demonstrate our method on various real and synthetic datasets , and show that it exhibits performance superior to other generative models . 1 INTRODUCTION . Recent developments in generative models such as variational auto-encoders ( VAEs , Kingma & Welling ( 2013 ) ) and generative adversarial networks ( GANs , Goodfellow et al . ( 2014 ) ) have made it possible to sample remarkably realistic points from complex high dimensional distributions at low computational cost . While their methods are very different — one is derived from variational inference and the other from game theory — their ends both involve learning smooth mappings from a user-defined prior distribution to the modeled distribution . These maps are closely tied to manifold learning when the prior is supported over a Euclidean space ( e.g . Gaussian or uniform priors ) and the data lie on a manifold ( also known as the Manifold Hypothesis , see Narayanan & Mitter ( 2010 ) ; Fefferman et al . ( 2016 ) ) . This is because manifolds themselves are defined by sets that have homeomorphisms to such spaces . Learning such maps is beneficial to any machine learning task , and may shed light on the success of VAEs and GANs in modeling complex distributions . Furthermore , the connection to manifold learning may explain why these generative models fail when they do . Known as posterior collapse in VAEs ( Alemi et al. , 2017 ; Zhao et al. , 2017 ; He et al. , 2019 ; Razavi et al. , 2019 ) and mode collapse in GANs ( Goodfellow , 2017 ) , both describe cases where the forward/reverse mapping to/from Euclidean space collapses large parts of the input to a single output . This violates the bijective requirement of the homeomorphic mapping . It also results in degenerate latent spaces and poor generative performance . 
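As a rough sketch of the random-walk sampling idea summarized above (not the authors' exact procedure), one can start from a data point, encode it, take a small isotropic Gaussian step in the latent space, and decode back onto the learned manifold, repeating the loop. The encoder, decoder, and step size below are placeholders for learned components and tuning choices.

import numpy as np

def random_walk_samples(x0, encoder, decoder, n_steps=50, step=0.1, rng=None):
    """Generate samples by alternating encode -> isotropic latent step -> decode.
    encoder maps a data point to a latent vector; decoder maps it back."""
    rng = np.random.default_rng() if rng is None else rng
    samples, x = [], x0
    for _ in range(n_steps):
        z = encoder(x)                                   # approximate diffusion embedding
        z = z + step * rng.standard_normal(z.shape)      # locally isotropic step
        x = decoder(z)                                   # back onto the reconstructed manifold
        samples.append(x)
    return samples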
The paper proposes a new generative model for unsupervised learning, based on a diffusion random walk principle inspired by the manifold learning literature. The basic idea is to (probabilistically) map points to a latent space, perform a random walk in that space, and then map back to the original space again. Learning of the suitable maps is achieved by casting the problem in a variational inference framework.
SP:820a879346c3ba370348f1086dab5b9c256175e9
RNNs Incrementally Evolving on an Equilibrium Manifold: A Panacea for Vanishing and Exploding Gradients?
Recurrent neural networks ( RNNs ) are particularly well-suited for modeling long-term dependencies in sequential data , but are notoriously hard to train because the error backpropagated in time either vanishes or explodes at an exponential rate . While a number of works attempt to mitigate this effect through gated recurrent units , skip-connections , parametric constraints and design choices , we propose a novel incremental RNN ( iRNN ) , where hidden state vectors keep track of incremental changes , and as such approximate state-vector increments of Rosenblatt ' s ( 1962 ) continuous-time RNNs . iRNN exhibits identity gradients and is able to account for long-term dependencies ( LTD ) . We show that our method is computationally efficient , overcoming overheads of many existing methods that attempt to improve RNN training , while suffering no performance degradation . We demonstrate the utility of our approach with extensive experiments and show competitive performance against standard LSTMs on LTD and other non-LTD tasks . 1 INTRODUCTION . Recurrent neural networks ( RNNs ) in each round store a hidden state vector , hm ∈ RD , and upon receiving the input vector , xm+1 ∈ Rd , linearly transform the tuple ( hm , xm+1 ) and pass it through a memoryless non-linearity to update the state over T rounds . Subsequently , RNNs output an affine function of the hidden states as their prediction . The model parameters ( state/input/prediction parameters ) are learnt by minimizing an empirical loss . This seemingly simple update rule has had significant success in learning complex patterns for sequential input data . Nevertheless , that training RNNs can be challenging , and that performance can be uneven on tasks that require long-term-dependency ( LTD ) , was first noted by Hochreiter ( 1991 ) , Bengio et al . ( 1994 ) and later by other researchers . Pascanu et al . ( 2013b ) attributed this to the fact that the error gradient back-propagated in time ( BPTT ) , for the time-step m , is dominated by the product of partials of hidden-state vectors , $\prod_{j=m}^{T-1} \frac{\partial h_{j+1}}{\partial h_j}$ , and these products typically exhibit exponentially vanishing decay or explosion , resulting in incorrect credit assignment during training and test-time . Rosenblatt ( 1962 ) , from whose work we draw inspiration , introduced the continuous-time RNN ( CTRNN ) to mimic activation propagation in neural circuitry . CTRNN dynamics evolve as follows : $\tau \dot{g}(t) = -\alpha g(t) + \phi(Ug(t) + Wx(t) + b), \quad t \geq t_0. \quad (1)$ Here , x ( t ) ∈ Rd is the input signal , g ( t ) ∈ RD is the hidden state vector of D neurons , ġi ( t ) is the rate of change of the i-th state component ; τ , α ∈ R+ , where τ , referred to as the post-synaptic time-constant , impacts the rate of a neuron ' s response to the instantaneous activation φ ( Ug ( t ) +Wx ( t ) + b ) ; and U ∈ RD×D , W ∈ RD×d , b ∈ RD are model parameters . In passing , note that recent RNN works that draw inspiration from ODEs ( Chang et al. , 2019 ) are special cases of CTRNN ( τ = 1 , α = 0 ) . Vanishing Gradients . The qualitative aspects of the CTRNN dynamics are transparent in its integral form : $g(t) = e^{-\alpha \frac{t - t_0}{\tau}} g(t_0) + \frac{1}{\tau} \int_{t_0}^{t} e^{-\alpha \frac{t - s}{\tau}} \phi(Ug(s) + Wx(s) + b)\, ds \quad (2)$ This integral form reveals that the partial of the hidden-state vector with respect to the initial condition , $\frac{\partial g(t)}{\partial g(t_0)}$ , gets attenuated rapidly ( via the first term on the RHS ) , and so we face a vanishing gradient problem .
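The following minimal NumPy sketch simulates the CTRNN dynamics of Eq. 1 with forward-Euler steps and illustrates the attenuation described by Eq. 2: the sensitivity of g(t) to g(t0) shrinks roughly like exp(-alpha (t - t0) / tau). The weight scales, step size, and tanh nonlinearity are illustrative assumptions, not the paper's settings.

import numpy as np

def ctrnn_euler(g0, x_seq, U, W, b, tau=1.0, alpha=1.0, dt=0.1):
    """Forward-Euler simulation of  tau * dg/dt = -alpha*g + phi(U g + W x + b)."""
    g = g0.copy()
    for x in x_seq:
        dg = (-alpha * g + np.tanh(U @ g + W @ x + b)) / tau
        g = g + dt * dg
    return g

# Finite-difference check of how g(T) depends on the initial state g(0):
D, d, T = 8, 3, 200
rng = np.random.default_rng(0)
U = 0.1 * rng.standard_normal((D, D))
W = 0.1 * rng.standard_normal((D, d))
b = np.zeros(D)
xs = [rng.standard_normal(d) for _ in range(T)]
g0 = rng.standard_normal(D)
eps = 1e-4 * rng.standard_normal(D)
diff = ctrnn_euler(g0 + eps, xs, U, W, b) - ctrnn_euler(g0, xs, U, W, b)
print(np.linalg.norm(diff) / np.linalg.norm(eps))  # far below 1: influence of g(0) decays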
We will address this issue later , but we note that this is not an artifact of CTRNN but is exhibited by ODEs that have motivated other RNNs ( see Sec . 2 ) . Shannon-Nyquist Sampling . A key property of CTRNN is that the time-constant τ , together with the first term −g ( t ) , is in effect a low-pass filter with bandwidth $\alpha\tau^{-1}$ , suppressing high-frequency components of the activation signal φ ( Ug ( s ) + Wx ( s ) + b ) . This is good , because , by virtue of the Shannon-Nyquist sampling theorem , we can now maintain fidelity of discrete samples with respect to the continuous-time dynamics , in contrast to conventional ODEs ( α = 0 ) . Additionally , since high frequencies are already suppressed , in effect we may assume that the input signal x ( t ) is slowly varying relative to the post-synaptic time constant τ . Equilibrium . The combination of low-pass filtering and slowly time-varying input has a significant bearing . The state vector as well as the discrete samples evolve close to the equilibrium state , i.e. , g ( t ) ≈ φ ( Ug ( t ) +Wx ( t ) + b ) under general conditions ( Sec . 3 ) . Incremental Updates . Whether or not the system is in equilibrium , the integral form in Eq . 2 points to gradient attenuation as a fundamental issue . To overcome this situation , we store and process increments rather than the cumulative values g ( t ) and propose dynamic evolution in terms of increments . Let us denote the hidden state sequence as hm ∈ RD and the input sequence as xm ∈ Rd . For m = 1 , 2 , . . . , T , and a suitable β > 0 : $\tau \dot{g}(t) = -\alpha (g(t) \pm h_{m-1}) + \phi(U(g(t) \pm h_{m-1}) + Wx_m + b), \quad g(0) = 0, \; t \geq 0 \quad (3)$ with $h_m \triangleq h^{\beta\cdot\tau}_m \triangleq g(\beta \cdot \tau)$ . Intuitively , say the system is in equilibrium and $-\alpha\, \mu(x_m, h_{m-1}) + \phi(U\mu(x_m, h_{m-1}) + Wx_m + b) = 0$ . We note that state transitions are marginal changes from previous states , namely $h_m = \mu(x_m, h_{m-1}) - h_{m-1}$ . Now for a fixed input xm , which equilibrium is reached depends on hm−1 , but the possible equilibria are nevertheless finitely many . So encoding marginal changes as states leads to “ identity ” gradient . Incremental RNN ( iRNN ) achieves Identity Gradient . We propose to discretize Eq . 3 to realize iRNN ( see Sec . 3 , and the short sketch following Section 2 below ) . At time m , it takes the previous state hm−1 ∈ RD and input xm ∈ Rd and outputs hm ∈ RD after simulating the CTRNN evolution in discrete time , for a suitable number of discrete steps . We show that the proposed RNN approximates the continuous dynamics and solves the vanishing/exploding gradient issue by ensuring an identity gradient . In general , we consider two options : SiRNN , whose state is updated with a single CTRNN sample , similar to vanilla RNNs , and iRNN , with many intermediate samples . SiRNN is well-suited for slowly varying inputs . Contributions . To summarize , we list our main contributions : ( A ) iRNN converges to equilibrium for typical activation functions . The partial gradients of hidden-state vectors for iRNNs converge to identity , thus solving the vanishing/exploding gradient problem ! ( B ) iRNN converges rapidly , at an exponential rate in the number of discrete samplings of Eq . 1 . SiRNN , the single-step iRNN , is efficient and can be leveraged for slowly varying input sequences . It exhibits fast training time , has fewer parameters and better accuracy relative to standard LSTMs . ( C ) Extensive experiments on LTD datasets show that we improve upon standard LSTM accuracy as well as other recent proposals that are based on designing transition matrices and/or skip connections .
iRNNs/SiRNNs are robust to time-series distortions such as noise paddings . ( D ) While our method extends directly ( see Appendix A.1 ) to Deep RNNs , we deem these extensions complementary , and focus on single-layer models to highlight our incremental perspective . 2 RELATED WORK . Gated Architectures . Long short-term memory ( LSTM ) ( Hochreiter & Schmidhuber , 1997 ) is widely used in RNNs to model long-term dependency in sequential data . Gated recurrent unit ( GRU ) ( Cho et al. , 2014 ) is another gating mechanism that has been demonstrated to achieve similar performance to LSTM with fewer parameters . Some recent gated RNNs include UGRNN ( Collins et al. , 2016 ) and FastGRNN ( Kusupati et al. , 2018 ) . While mitigating vanishing/exploding gradients , they do not eliminate the problem . Often , these models incur increased inference and training costs , and larger model size . Unitary RNNs . Arjovsky et al . ( 2016 ) ; Jing et al . ( 2017 ) ; Zhang et al . ( 2018 ) ; Mhammedi et al . ( 2016 ) focus on designing well-conditioned state transition matrices , attempting to enforce the unitary property during training . The unitary property does not generally circumvent vanishing gradients ( Pennington et al. , 2017 ) . Also , it limits expressive power and prediction accuracy while also increasing training time . Deep RNNs . These incorporate nonlinear transition functions into RNNs for performance improvement . For instance , Pascanu et al . ( 2013a ) empirically analyzed the problem of how to construct deep RNNs . Zilly et al . ( 2017 ) proposed extending the LSTM architecture to allow step-to-step transition depths larger than one . Mujika et al . ( 2017 ) proposed incorporating the strengths of both multiscale RNNs and deep transition RNNs to learn complex transition functions . While Deep RNNs offer richer representations relative to single-layer models , they are complementary to iRNNs . Residual/Skip Connections . Jaeger et al . ( 2007 ) ; Bengio et al . ( 2013 ) ; Chang et al . ( 2017 ) ; Campos et al . ( 2017 ) ; Kusupati et al . ( 2018 ) feed-forward state vectors to induce skip or residual connections , to serve as a middle ground between feed-forward and recurrent models , and to mitigate gradient decay . Nevertheless , these connections cannot entirely eliminate gradient explosion/decay . For instance , Kusupati et al . ( 2018 ) suggest $h_m = \alpha_m h_{m-1} + \beta_m \phi(Uh_{m-1} + Wx_m + b)$ , and learn parameters so that αm ≈ 1 and βm ≈ 0 . While this setting can lead to an identity gradient , observe that βm ≈ 0 implies little contribution from the inputs and can conflict with good accuracy , as also observed in our experiments . Linear RNNs . These works ( Bradbury et al. , 2016 ; Lei et al. , 2018 ; Balduzzi & Ghifary , 2016 ) focus on speeding up RNNs by replacing recurrent connections , such as hidden-to-hidden interactions , with lightweight linear components . This reduces training time , but results in significantly increased model size . For example , Lei et al . ( 2018 ) requires twice the number of cells for LSTM-level performance . ODE/Dynamical Perspective . A few ODE-inspired architectures attempt to address stability , but do not end up eliminating vanishing/exploding gradients . Talathi & Vartak ( 2015 ) proposed a modified weight initialization strategy , based on a dynamical-systems perspective on the initialization process , to successfully train RNNs composed of ReLUs . Niu et al . ( 2019 ) analyzed RNN architectures using numerical methods for ODEs and propose a family of ODE-RNNs . Chang et al .
( 2019 ) propose Antisymmetric-RNN . Their key idea is to express the transition matrix in Eq . 1 , for the special case α = 0 , τ = 1 , as a difference U = V − V^T , and note that the eigenspectrum is imaginary . Nevertheless , Euler discretization in this context leads to instability , necessitating damping of the system . As such , the vanishing gradient cannot be completely eliminated . Its behavior is analogous to FastRNN ( Kusupati et al. , 2018 ) , in that identity gradient conflicts with high accuracy . In summary , we are the first to propose evolution over the equilibrium manifold , and to demonstrate identity gradients . Neural ODEs ( Chen et al. , 2018 ; Rubanova et al. , 2019 ) have also been proposed for time-series prediction to deal with irregularly sampled inputs . They parameterize the derivative of the hidden state in terms of an autonomous differential equation and let the ODE evolve in continuous time until the next input arrives . This is not our goal : our ODE explicitly depends on the input , and evolves until equilibrium for that input is reached . We introduce incremental updates to bypass vanishing/exploding gradient issues , which is not of specific concern for these works .
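To make the incremental update of Eq. 3 concrete, here is a minimal NumPy sketch (not the authors' implementation) of a single iRNN transition: starting from g(0) = 0, Euler iterations drive g toward the equilibrium of Eq. 3 for the current input, and the resulting g becomes the new incremental hidden state. The tanh nonlinearity, the '+' sign choice in Eq. 3, the step size, and the iteration count are illustrative assumptions.

import numpy as np

def irnn_step(h_prev, x, U, W, b, alpha=1.0, tau=1.0, dt=0.1, n_iters=30):
    """One iRNN transition: evolve Eq. 3 from g(0) = 0 for n_iters Euler steps.
    With the '+' sign convention, the equilibrium satisfies
    alpha * (g + h_prev) = phi(U (g + h_prev) + W x + b)."""
    g = np.zeros_like(h_prev)
    for _ in range(n_iters):
        dg = (-alpha * (g + h_prev) + np.tanh(U @ (g + h_prev) + W @ x + b)) / tau
        g = g + dt * dg
    return g  # the increment itself is used as the next hidden state h_m

# Unrolling over a sequence (h starts at zero, then h = irnn_step(h, x, ...)):
# h = np.zeros(D)
# for x in x_seq:
#     h = irnn_step(h, x, U, W, b)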
The authors present a novel work to address the problem of signal propagation in recurrent neural networks. The idea is to build an attractor system for the signal transition from state h_{k-1} to h_k. If the attractor system converges to an equilibrium, then the hidden-to-hidden gradient is an identity matrix. This idea is elegant. The authors verify the performance of the incremental RNN on long-term-dependency tasks and non-long-term-dependency tasks.
SP:2f460faff6d62d462bd80a4545f3ce435d2ab0f6
RNNs Incrementally Evolving on an Equilibrium Manifold: A Panacea for Vanishing and Exploding Gradients?
In this paper, the authors propose the incremental RNN (iRNN), which is inspired by the continuous-time RNN (CTRNN). Theoretically, the equilibrium point of iRNN exists and is unique. Furthermore, the norm of the Jacobian between two hidden states is always one, provided that the Euler iterations converge. The authors proved this property as well as the exponential convergence rate of the Euler iteration. In theory, these properties avoid the vanishing/exploding gradient problem typical of RNNs with long sequences. Empirically, the proposed method is compared with multiple RNN architectures on various tasks.
SP:2f460faff6d62d462bd80a4545f3ce435d2ab0f6
Self-Supervised Learning of Appliance Usage
1 INTRODUCTION . Learning home appliance usage patterns is useful for understanding user habits and optimizing electricity consumption . For example , knowing when a person uses their microwave , stove , oven , coffee machine or toaster provides information about their eating patterns . Similarly , understanding when they use their TV , air-conditioner , or washer and dryer provides knowledge of their behavior and habits . Such information can be used to encourage energy saving by optimizing appliance usage ( Armel et al. , 2013 ) , to track the wellbeing of elderly living alone ( Donini et al. , 2013 ; Debes et al. , 2016 ) , or to provide users with behavioral analytics ( Zhou & Yang , 2016 ; Zipperer et al. , 2013 ) . This data is also useful for various businesses such as home insurance companies interested in assessing accident risks and utility companies interested in optimizing energy efficiency ( Armel et al. , 2013 ) . The problem can be modeled as event detection – i.e. , given the total energy consumed by the house as a function of time , we want to detect when various appliances are turned on . Past work has looked at analyzing the energy signal from the home utility meter to detect when certain appliances are on ( the utility meter outputs the sum of the energy of all active appliances in a house as a function of time ) . Most solutions , however , assume that the energy pattern for each appliance is unique and known , and use this knowledge to create labeled data for their supervised models ( Kolter et al. , 2010 ; Zhong et al. , 2014 ; 2015 ; Kelly & Knottenbelt , 2015 ; Zhang et al. , 2018 ; Bonfigli et al. , 2018 ) . Unfortunately , such solutions do not generalize well because the energy pattern of an appliance depends on its brand and can differ from one home to another ( Kelly & Knottenbelt , 2015 ; Bonfigli et al. , 2018 ) ; for example , a Samsung dishwasher may have a different energy pattern from that of a Kenmore dishwasher . The literature also contains some unsupervised methods , but they typically have limited accuracy ( Kim et al. , 2011 ; Kolter & Jaakkola , 2012 ; Johnson & Willsky , 2013 ; Parson et al. , 2014 ; Wytock & Kolter , 2014 ; Zhao et al. , 2016 ; Lange & Berges , 2018 ) . Unsupervised event detection in a data stream is intrinsically challenging because we do not know what patterns to look for . In our task , not only may appliance energy patterns be unknown , but also the energy signal may include many background events unrelated to appliance activation , such as the fridge or HVAC power cycling events . One way to address this challenge is to consider the self-supervised paradigm . If a different stream of data also observes the events of interest , we can use this second modality to provide self-supervising signals for event detection . To that end , we leverage the availability of new fine-resolution motion sensors which track the locations of people at home ( Adib et al. , 2015 ; Joshi et al. , 2015 ; Li et al. , 2016 ; Ghourchian et al. , 2017 ; Hsu et al. , 2017b ) . Such sensors operate as a consumer radar , providing decimeter-level location accuracy . They do not require people to wear sensors on their bodies , can operate through walls , and track people ' s locations in different rooms . These location sensors indirectly observe the events of interest . Specifically , they capture the change in user locations as they reach out to an appliance to set it up or turn it on ( e.g . put food in a microwave and turn it on ) .
Hence , the output of such sensors can provide a second modality for self-supervision . But how should one design the model ? We cannot directly use location as a label for appliance activation events . People can be next to an appliance but neither activate it nor interact with it . Moreover , we do not assume appliance locations are known a priori . We also cannot use the two modalities to learn a joint representation of the event in a shared space . This is because location and energy are unrelated most of the time and become related only when the event of interest occurs . Furthermore , there are typically multiple residents in the home , making it hard to tell which of them interacted with the appliance . Our model is based on cross-modal prediction . We train a neural network that , given the home energy at a particular time , predicts the location of the home residents . Our intuition is that appliance activation events have highly predictable locations , typically the location of the appliance . In contrast , background energy events ( e.g . power cycling of the fridge ) do not lead to predictable locations . Thus , our model uses this learned predictability along with the associated location and energy representation to cluster the events in the energy stream . In addition , we use a mixture distribution to disentangle irrelevant location information of other residents in the home . Interestingly , our model not only learns when each appliance is activated but also discovers the location of that appliance in the home , all without any labeled data . We summarize the contributions of this paper as follows : • The paper introduces a new method for self-supervised event detection from weakly related data streams . The method combines neural cross-modal prediction with custom clustering based on the learned predictability and representation . We apply it to the task of detecting appliance usage events using unlabeled data from two sensors in the home : the energy meter , and a location sensor . • To evaluate our design , we have created the first dataset with concurrent streams of home energy and location data , collected from 4 homes over a period of 7 months . For each home , data was collected for 2 to 4 months . Ground truth measurements are provided via smart plugs connected directly to each appliance . • Compared to past work on unsupervised learning of appliance usage and a new baseline that leverages the two modalities , our method achieves significant improvements of 67.3 % and 51.9 % respectively for the average detection F1 score . We will release our code and dataset ( project website : http://sapple.csail.mit.edu ) to encourage future work on multi-modal models for understanding appliance usage patterns and the underlying user behavior . 2 RELATED WORK . Our work is related to past work on energy disaggregation , which refers to the problem of separating appliance-level energy from a home ' s total ( or aggregate ) energy signal . Past work in this domain can be broadly classified into two categories : supervised and unsupervised . Supervised methods assume that the power signatures of individual appliances are available . They use data from individual appliances to obtain models for each appliance power signature , and then use those models to detect appliance events from the aggregate energy signal . Early work learns sparse codes for different appliances ( Kolter et al. , 2010 ) or uses a Factorial HMM ( FHMM ) ( Ghahramani & Jordan , 1996 ) to model each appliance as an HMM ( Zhong et al.
, 2014 ; 2015 ) . Other work uses matrix factorization approaches to estimate monthly energy breakdowns ( Batra et al. , 2017 ; 2018 ) . More recently , neural networks have been used to model appliances ( Kelly & Knottenbelt , 2015 ; Zhang et al. , 2018 ; Jia et al. , 2019 ; Bonfigli et al. , 2018 ) , where extracting appliance-level energy is formulated as a de-noising problem . However , supervised solutions typically do not generalize well to new homes ( Kelly & Knottenbelt , 2015 ; Bonfigli et al. , 2018 ) . This is because two appliances of the same type ( e.g . coffee machine ) in different homes are often manufactured by different brands , and thus have different power signatures . Unsupervised methods do not assume prior knowledge of appliance signatures ; they attempt to learn those signatures from the aggregate energy signal . Early approaches use variants of FHMM , and learn appliance HMMs with Expectation-Maximization ( Kim et al. , 2011 ) , with approximate footprint extraction procedures ( Kolter & Jaakkola , 2012 ) , or by using expert knowledge to configure prior parameters ( Johnson & Willsky , 2013 ; Parson et al. , 2014 ) . Some papers propose using contextual information ( such as temperature , hour of the day , and day of the week ) ( Wytock & Kolter , 2014 ) or event-based signal processing methods to cluster appliances ( Zhao et al. , 2016 ) . More recently , Lange & Berges ( 2018 ) proposed using a recurrent neural network as the variational distribution in learning the FHMM . In contrast , our work leverages people ' s location data as a self-supervising signal . We cluster appliance events through learning the relation between energy events and people ' s locations , and also learn appliance locations as a by-product . Passive location sensing Motivated by new in-home applications and continuous health monitoring , recent years have witnessed an increasing number of indoor location sensing systems ( Adib et al. , 2015 ; Joshi et al. , 2015 ; Li et al. , 2016 ; Ghourchian et al. , 2017 ) . They infer people ' s locations passively by analyzing how people change the surrounding radio signals ( e.g . WiFi ) and do not require people to wear any sensors . These sensors have been used for various applications including activity recognition ( Wang et al. , 2014 ; 2015 ) , sleep monitoring ( Zhao et al. , 2017 ; Hsu et al. , 2017a ) , mobility and behavioral sensing ( Hsu et al. , 2017b ; 2019 ) , and health monitoring ( Kaltiokallio et al. , 2012 ) . In our work , we leverage the availability of such sensors to introduce location data as an additional data modality for learning appliance usage patterns . Self-supervised multi-modal learning Our work is related to a growing body of work on multimodal learning . Most approaches learn to encode the multi-modal data into a shared space ( Gomez et al. , 2017 ; Harwath et al. , 2018 ; Owens & Efros , 2018 ; Zhao et al. , 2018 ; 2019 ) . In contrast , since our two modalities are mostly unrelated and become related only when an activation event happens , we learn to predict one modality conditioned on the other . Our work is also related to cross-modal prediction ( Krishna et al. , 2017 ; Owens et al. , 2016 ; Zhang et al. , 2017 ) but differs from it in an essential way . Past work on cross-modal prediction typically uses the prediction as the target outcome ( e.g . output text for video captioning ) .
In contrast , our objective is to discover the hidden appliance activation events . Thus , we design our method to leverage the learned predictability and cross-modal mapping for clustering activation events . Furthermore , we introduce a mixture prediction design to disentangle unrelated information in our predicted modality ( location measurements unrelated to energy events ) . 3 PROBLEM FORMULATION . Our goal is to learn appliance activation events in an unsupervised way , using two input streams : home aggregate energy and residents ' location data . Figure 1 shows the two data modalities . We describe each of them formally and define appliance “ events ” below . Aggregate energy signal A household ' s total energy consumption is measured regularly by a utility meter . This measures the sum of the energy consumed by all appliances at each point in time . We denote the aggregate energy signal by y = ( y1 , y2 , . . . , yT ) , where yt ∈ R+ . Suppose there are a total of K appliances in a home , and each appliance ' s energy signal is denoted by xk = ( x1 , k , x2 , k , . . . , xT , k ) , where xt , k ∈ R+ . Only the aggregate energy signal is observed : $y_t = \sum_{k=1}^{K} x_{t , k} + \epsilon_t$ , where $\epsilon_t \sim \mathcal{N}(0 , \sigma^2)$ is the background noise . Figure 1a shows one day of an aggregate energy signal . The base power level shifts constantly throughout the day , depending on the background load ( e.g . ceiling lights ) . Added on top of the base level are the various appliance events . Figure 1b zooms in around 20:30 , and shows examples of those events . The stove was turned on around 20:28 , and its power continued to cycle between a few levels . While the stove was on , the microwave was also turned on and ran for a few minutes , and the garbage disposer was turned on briefly . Indoor location data We use a single location sensor similar to that in Hsu et al . ( 2017b ) to measure people ' s indoor locations passively . The sensor sends out radio signals and analyzes the reflections to localize multiple people . Similarly to a regular WiFi router , the sensor has a limited coverage area of up to 40 feet . Suppose there are Pt people in the coverage area at time t. The location data is denoted by lt = ( lt,1 , lt,2 , . . . , lt , Pt ) , where lt , p ∈ R2 is the x-y location of person p at time t. We can represent the location data over multiple time frames as l1 : T = ( l1 , l2 , . . . , lT ) . Figure 1c shows one minute of location data from two people , and Figure 1d shows the data from a top-down view . Appliance activation events When an appliance is turned on , it causes a jump in energy consumption , i.e . a leading edge in the energy signal , as shown in Figure 1b . We call such a pattern an appliance activation event . On the other hand , when an appliance changes its internal state , it can also cause a change in the energy signal as shown in the same figure . We call such a pattern a background event . We are interested in discovering activation events to learn appliance usage patterns . Thus , for each jump in the aggregate signal , we take a time window ( of 25 seconds ) centered around that jump , and analyze it to detect whether it is an activation event and which appliance it corresponds to .
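As a small illustration of the event-window extraction described above (a sketch under assumed parameters, not the paper's exact preprocessing), one can flag candidate events wherever the aggregate power rises by more than a threshold between consecutive samples and cut a 25-second window centered on each jump. The 1 Hz sampling rate and the threshold value are assumptions for illustration.

import numpy as np

def extract_event_windows(y, threshold=100.0, window_sec=25, hz=1):
    """Return (center_index, window) pairs for jumps in the aggregate signal y.
    Assumes y is sampled at hz samples per second; windows are window_sec wide."""
    half = (window_sec * hz) // 2
    jumps = np.where(np.diff(y) > threshold)[0] + 1   # indices where power rises sharply
    windows = []
    for t in jumps:
        if t - half >= 0 and t + half < len(y):
            windows.append((t, y[t - half:t + half + 1]))
    return windows

# Usage: given one day of 1 Hz aggregate power readings y (in watts),
# candidates = extract_event_windows(y, threshold=100.0)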
The authors proposed a multi-modal unsupervised algorithm to uncover the electricity usage of different appliances in a home. The detection of appliances was done using both aggregate electricity consumption data and user location data from sensors. The unit of detection was set to be a 25-second window centered around any electricity usage spike. The authors used an encoder/decoder setup to model two different factors of usage: the type of appliance and the variety within the same appliance. This part of the model was trained by predicting actual consumption. Then only the type of appliance was used to predict the location of people in the house, which was also factored into appliance-related and appliance-unrelated components. Locations are represented as images to avoid complicated modeling of multiple people.
SP:cd9024c0331b487fcb0cc13872f3ddb01f57ce15
Self-Supervised Learning of Appliance Usage
In contrast , our objective is to discover the hidden appliance activation events . Thus , we design our method to leverage the learned predictability and cross-modal mapping for clustering activation events . Furthermore , we introduce a mixture prediction design to disentangle unrelated information in our predicted modality ( location measurements unrelated to energy events ) . 3 PROBLEM FORMULATION . Our goal is to learn appliance activation events in an unsupervised way , using two input streams : home aggregate energy and residents ' location data . Figure 1 shows the two data modalities . We describe each of them formally and define appliance “ events ” below . Aggregate energy signal A household ' s total energy consumption is measured by a utility meter regularly . This measures the sum of the energy consumed by all appliances at each point in time . We denote the aggregate energy signal by y = ( y_1 , y_2 , ... , y_T ) , where y_t ∈ R+ . Suppose there are a total of K appliances in a home , and each appliance ' s energy signal is denoted by x_k = ( x_{1,k} , x_{2,k} , ... , x_{T,k} ) , where x_{t,k} ∈ R+ . Only the aggregate energy signal is observed : y_t = Σ_{k=1}^{K} x_{t,k} + ε_t , where ε_t ∼ N ( 0 , σ² ) is the background noise . Figure 1a shows one day of an aggregate energy signal . The base power level shifts constantly throughout the day , depending on the background load ( e.g . ceiling lights ) . Added on top of the base level are the various appliance events . Figure 1b zooms in around 20:30 , and shows examples of those events . The stove was turned on around 20:28 , and its power continued to cycle between a few levels . While the stove was on , the microwave was also turned on and ran for a few minutes , and the garbage disposer was turned on shortly . Indoor location data We use a single location sensor similar to that in Hsu et al . ( 2017b ) to measure people ' s indoor locations passively . The sensor sends out radio signals and analyzes the reflections to localize multiple people . Similarly to a regular WiFi router , the sensor has a limited coverage area of up to 40 feet . Suppose there are P_t people in the coverage area at time t. The location data is denoted by l_t = ( l_{t,1} , l_{t,2} , ... , l_{t,P_t} ) , where l_{t,p} ∈ R² is the x-y location of person p at time t. We can represent the location data over multiple time frames as l_{1:T} = ( l_1 , l_2 , ... , l_T ) . Figure 1c shows one minute of location data from two people , and Figure 1d shows the data from a top-down view . Appliance activation events When an appliance is turned on , it causes a jump in energy consumption , i.e . a leading edge in the energy signal , as shown in Figure 1b . We call such a pattern an appliance activation event . On the other hand , when an appliance changes its internal state , it can also cause a change in the energy signal as shown in the same figure . We call such a pattern a background event . We are interested in discovering activation events to learn appliance usage patterns . Thus , for each jump in the aggregate signal , we take a time window ( of 25 seconds ) centered around that jump , and analyze it to detect whether it is an activation event and which appliance it corresponds to .
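As a concrete reading of the event definition above, here is a minimal sketch that cuts out candidate event windows (25 samples, i.e. roughly 25 seconds at a 1 Hz meter rate) centered on jumps in the aggregate energy signal. The 1 Hz rate, the jump threshold and the synthetic signal are illustrative assumptions for this sketch, not the paper's exact preprocessing.

```python
import numpy as np

def extract_event_windows(y, jump_threshold=200.0, half_window=12):
    """Cut out windows centered on jumps in an aggregate energy signal.

    y              -- 1-D array of aggregate power readings (watts, assumed 1 Hz)
    jump_threshold -- minimum increase between consecutive samples to count
                      as a candidate event (illustrative value, not tuned)
    half_window    -- 12 samples on each side gives a 25-sample (~25 s) window
    """
    jumps = np.flatnonzero(np.diff(y) > jump_threshold) + 1
    windows, centers = [], []
    for t in jumps:
        if t - half_window >= 0 and t + half_window + 1 <= len(y):
            windows.append(y[t - half_window:t + half_window + 1])
            centers.append(t)
    return np.array(windows), np.array(centers)

# Toy usage: a flat baseline with one appliance turning on at t = 100.
y = np.full(300, 150.0)
y[100:160] += 1200.0                     # simulated activation event
windows, centers = extract_event_windows(y)
print(windows.shape, centers)            # (1, 25) [100]
```

Each extracted window would then be passed to the cross-modal model, which scores how predictable the residents' locations are given that window.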
This paper proposes a learning algorithm to recover appliance usage events, as well as the locations of the appliances in a home, using a smart electricity meter and a motion sensor installed in the home. In the model, the input is a window of electricity consumption and its context, and the output is the location data collected by the motion sensor. The appliance activations, treated as latent variables, are learned using an autoencoder-style architecture.
SP:cd9024c0331b487fcb0cc13872f3ddb01f57ce15
Accelerated Variance Reduced Stochastic Extragradient Method for Sparse Machine Learning Problems
1 INTRODUCTION . In this paper , we mainly consider the following composite convex optimization problem : min_{x∈R^d} { P ( x ) := F ( x ) + R ( x ) = (1/n) Σ_{i=1}^{n} f_i ( x ) + R ( x ) } ( 1 ) where F : R^d → R is the average of smooth convex component functions f_i ( x ) , and R ( x ) is a relatively simple convex function ( but may not be differentiable ) . In this paper , we use ‖·‖ to denote the standard Euclidean norm , and ‖·‖₁ to denote the ℓ1-norm . Moreover , we use P∗ to denote the real optimal value of P ( · ) , and P̂∗ to denote the optimal value obtained by algorithms . This form of optimization problem often appears in machine learning , signal processing , data science , statistics and operations research , and has a wide range of applications such as regularized empirical risk minimization ( ERM ) , sparse coding for image and video recovery , and representation learning for object recognition . Specifically , consider a collection of training examples { ( a_1 , b_1 ) , ... , ( a_n , b_n ) } , where a_i ∈ R^d is a feature vector and b_i ∈ R is the desired response ( i = 1 , 2 , ... , n ) . When f_i ( x ) = (1/2) ( a_i^T x − b_i )² , we obtain the ridge regression problem by setting R ( x ) = (λ/2) ‖x‖² . We also get the Lasso or Elastic-Net problems by setting R ( x ) = λ‖x‖₁ or R ( x ) = (λ₂/2) ‖x‖² + λ₁‖x‖₁ , respectively . Moreover , if we set f_i ( x ) = log ( 1 + exp ( −b_i x^T a_i ) ) , we get the regularized logistic regression problem . 1.1 RECENT RESEARCH PROGRESS . The proximal gradient descent ( PGD ) method is a standard and effective method for Problem ( 1 ) , and can achieve linear convergence for strongly convex problems . Its accelerated variants , e.g. , accelerated proximal gradient ( APG ) ( Tseng ( 2008 ) ; Beck & Teboulle ( 2009 ) ) , attain a convergence rate of O ( 1/T² ) for non-strongly convex problems , where T denotes the number of iterations . In recent years , stochastic gradient descent ( SGD ) has been successfully applied to many large-scale learning problems , such as training deep networks and linear prediction ( Tong ( 2004 ) ) , because of its significantly lower per-iteration complexity than deterministic methods , i.e. , O ( d ) vs. O ( nd ) . Besides , many tricks for SGD have also been proposed , such as Loshchilov & Hutter ( 2016 ) . However , the variance of the stochastic gradient may be large due to random sampling ( Johnson & Tong ( 2013 ) ) , which means that the algorithm requires a gradually decreasing step size and thus converges slowly . Even under strong convexity , SGD only achieves a sub-linear convergence rate O ( 1/T ) . Recently , many SGD methods with variance reduction have been proposed . For the case of R ( x ) = 0 , Roux et al . ( 2012 ) developed a stochastic average gradient descent ( SAG ) method , which is a randomized variant of the incremental aggregated gradient method proposed by Blatt et al . ( 2007 ) . Then the stochastic variance reduced gradient ( SVRG ) method ( Johnson & Tong ( 2013 ) ) was proposed , and has been widely adopted in subsequent optimization algorithms , due to its lower storage cost ( i.e. , O ( d ) ) than that of SAG ( i.e. , O ( nd ) ) . SVRG reduces the variance effectively by changing the estimation of stochastic gradients . The introduction of a snapshot point x̃ mainly has the effect of correcting the direction of gradient descent , and reduces the variance .
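To make the variance-reduction idea concrete, the sketch below implements the SVRG-style estimator ∇f_i(x) − ∇f_i(x̃) + ∇F(x̃) around a snapshot point x̃, assuming least-squares component functions f_i(x) = ½(a_iᵀx − b_i)². The data, dimensions and snapshot choice are illustrative; this is only the gradient estimator, not the full Prox-SVRG or AVR-SExtraGD procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.normal(size=(n, d))             # rows are the feature vectors a_i
b = A @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

def grad_fi(x, i):
    """Gradient of f_i(x) = 0.5 * (a_i^T x - b_i)^2."""
    return (A[i] @ x - b[i]) * A[i]

def full_grad(x):
    """Gradient of F(x) = (1/n) * sum_i f_i(x)."""
    return A.T @ (A @ x - b) / n

x_snapshot = np.zeros(d)                # snapshot point x~
g_snapshot = full_grad(x_snapshot)      # dense gradient stored at the snapshot

def svrg_estimator(x, i):
    """Variance-reduced stochastic gradient at x using component i."""
    return grad_fi(x, i) - grad_fi(x_snapshot, i) + g_snapshot

x = rng.normal(size=d)
i = rng.integers(n)
print(svrg_estimator(x, i))             # unbiased estimate of full_grad(x)
```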
Later , Konečný & Richtárik ( 2013 ) proposed the semi-stochastic gradient descent methods as well as their mini-batch version ( Konečný et al . ( 2014 ) ) , and an asynchronous distributed variant ( Ruiliang et al . ( 2016 ) ) was also proposed later . More recently , Lin & Tong ( 2014 ) proposed the Prox-SVRG method , which introduced the proximal operator and applied the idea of SVRG to solve non-smooth optimization problems . However , Prox-SVRG can only be used to solve strongly convex optimization problems . In order to solve non-strongly convex problems , Zeyuan & Yuan ( 2016 ) proposed the SVRG++ algorithm . Besides , to accelerate the algorithm and reduce its complexity , Nitanda ( 2014 ) combined the main ideas of APG and Prox-SVRG and proposed an accelerated variance reduction proximal stochastic gradient descent ( Acc-Prox-SVRG ) method , which can effectively reduce the complexity compared to the two basic algorithms . Very recently , Zeyuan ( 2017 ) developed the Katyusha algorithm , which introduced the Katyusha momentum to accelerate convergence . With the development of parallel and distributed computing , which can effectively reduce computing time and improve performance , Ryu & Wotao ( 2017 ) came up with an algorithm called Proximal Proximal Gradient , which combined the proximal gradient method and ADMM ( Gabay & Mercier ( 1976 ) ) ; furthermore , it is easy to implement in parallel and distributed environments because of its innovative algorithm structure . 1.2 OUR MAIN CONTRIBUTIONS . We find that due to the introduction of the proximal operator , there is a gap between P̂∗ and P∗ ; its theoretical derivation can be seen in Appendix A . To address this issue , Nguyen et al . ( 2017 ) proposed the idea of the extragradient , which can be seen as a guide during the optimization process , and introduced it into this class of optimization problems . Intuitively , this additional iteration allows us to examine the geometry of the problem and consider its curvature information , which is one of the most important bottlenecks for first-order methods . By using the idea of the extragradient , we can get a better result in each inner iteration . Therefore , the idea of the extragradient is our main motivation . In this paper , we propose a novel algorithm for solving non-smooth optimization problems . The main contributions of this paper are summarized as follows . • In order to reduce the gap between P̂∗ and P∗ and achieve fast convergence , we propose a novel algorithm that combines the idea of the extragradient , Prox-SVRG and the trick of momentum acceleration , called accelerated variance reduced stochastic extragradient descent ( AVR-SExtraGD ) . • We provide the convergence analysis of our algorithm , which shows that AVR-SExtraGD achieves linear convergence for strongly convex problems ; the convergence condition in the non-strongly convex case is also given . According to the convergence rate , AVR-SExtraGD matches the best-known algorithms , such as Katyusha . • Finally , we show by experiments that the performance of AVR-SExtraGD ( as well as VR-SExtraGD , which is the basic algorithm underlying AVR-SExtraGD ) is clearly better than that of the popular Prox-SVRG algorithm , which confirms the advantage of the extragradient . Compared with the widely used accelerated algorithm Katyusha , our algorithm still improves performance . 2 RELATED WORK . 2.1 BASIC ASSUMPTIONS .
We first make the following assumptions to solve the problem ( 1 ) : Assumption 1 ( Smoothness ) . The convex function F ( · ) is L-smooth , i.e. , there exists a constant L > 0 such that for any x , y∈Rd , ‖∇F ( x ) −∇F ( y ) ‖≤L‖x−y‖ . Assumption 2 ( Lower Semi-continuity ) . The regularization function R ( · ) is a lower semicontinuous function , i.e. , ∀x0∈Rd , lim inf x→x0 R ( x ) ≥R ( x0 ) . But it is not necessarily differentiable or continuous .. Assumption 3 ( Strong Convexity ) . In Problem ( 1 ) , the functionR ( · ) is µ-strongly convex , i.e. , there exists a constant µ > 0 such that for all x , y∈Rd , it holds that R ( x ) ≥ R ( y ) + 〈G , x− y〉+ µ 2 ‖x− y‖2 , ( 2 ) where G∈∂R ( y ) which is the set of sub-gradient of R ( · ) at y . 2.2 PROX-SVRG AND EXTRAGRADIENT DESCENT METHODS . An effective method for solving Problem ( 1 ) is Prox-SVRG which improved Prox-FG ( Lions & Mercier ( 1979 ) ) and Prox-SG ( Langford et al . ( 2009 ) ) by introducing the stochastic gradient and combining the idea of SVRG , respectively . For strongly convex problems , Prox-SVRG can reach linear convergence with a constant step size , and its main update rules are ∇̃fik ( xk−1 ) = ∇fik ( xk−1 ) −∇fik ( x̃ ) +∇F ( x̃ ) ; xk = ProxRη ( xk−1 − η∇̃fik ( xk−1 ) ) , ( 3 ) where x̃ is the snapshot point used in SVRG , ∇̃fik ( xk−1 ) is the variance reduced stochastic gradient estimator , and ProxRη ( · ) is the proximal operator . Although Prox-SVRG can converge fast , because of proximal operator , the final solution has the deviation , which makes the solution inaccurate , thus Prox-SVRG still needs to be further improved , which is our important motivation . The extragradient method was first proposed by Korpelevič ( 1976 ) . It is a classical method for solving variational inequality problems , and it generates an estimation sequence by using two projection gradients in each iteration . By combining this idea with some first-order descent methods , Nguyen et al . ( 2017 ) proposed an extended extragradient method ( EEG ) which can effectively solve the problem ( 1 ) , and can also solve relatively more general problems as follows : min x∈Rd { P ( x ) def = F ( x ) +R ( x ) } where F ( x ) is not necessarily composed by multiple functions fi ( x ) . Unlike the classical extragradient method , EEG uses proximal gradient instead of orthogonal projection in each iteration . The main update rules of EEG are yk = ProxRsk ( xk − sk∇F ( xk ) ) ; xk+1 = Prox R αk ( xk − αk∇F ( yk ) ) , where sk and αk are two step sizes . From the update rules of EEG , we can see that in each iteration , EEG needs to calculate two gradients , which will definitely slow down the algorithm . Therefore , the algorithm needs to be further accelerated by an efficient technique . 2.3 MOMENTUM ACCELERATION AND MIG . Firstly , we introduce the momentum acceleration technique whose main update rules are vdwt = βvdwt−1 + ( 1− β ) dwt ; wt = wt−1 − αvdwt , where dw is the gradient of the objective function at w , β is a parameter , and α is a step size . The update rules take not only the gradient of the current position , but also the gradient of the past position into account , which makes the final descent direction of wt after using momentum Algorithm 1 AVR-SExtraGD Input : Initial vector x0 , the number of epochs S , the number of iterations m per epoch , the step sizes η1 , η2 , momentum parameter β , and the set K. Initialize : x̃0 = x10 = x0 , ρ = 1+ηµ . 1 : for s = 1 , 2 , . . . 
, S do 2 : Compute∇F ( x̃s−1 ) ; 3 : βs = β ( SC ) or βs = 2s+4 ( non-SC ) ; 4 : for k = 1 , 2 , . . . , m do 5 : Pick ik uniformly at random from { 1 , ... , n } ; 6 : if k ∈ K then 7 : xsk−1/2 = Prox R η1 ( xsk−1 − η1∇̃fik ( βsxsk−1 + ( 1−βs ) x̃s−1 ) ) ; 8 : xsk = Prox R η2 ( xsk−1/2 − η2∇̃fik ( βsx s k−1/2 + ( 1−βs ) x̃ s−1 ) ) ; 9 : else 10 : xsk = Prox R η1 ( x s k−1 − η1∇̃fik ( βsxsk−1 + ( 1−βs ) x̃s−1 ) ) ; 11 : end if 12 : end for 13 : x̃s = βs ( ∑m k=1 ρ k−1 ) −1 ∑m k=1 ρ k−1 x s k−1/2+x s k 2 + ( 1−βs ) x̃ s−1 ( SC ) or x̃s = βsm ∑m k=1 xsk−1/2+x s k 2 + ( 1−βs ) x̃ s−1 ( non-SC ) ; 14 : xs+10 = x s m ; 15 : end for Output : x̃S . reduce the oscillation of descent , thus this method can effectively accelerate the convergence of the algorithm . According to the Nesterov ’ s momentum , lots of accelerated algorithms were proposed , such as APG and Acc-Prox-SVRG . Later , Zeyuan ( 2017 ) proposed Katyusha to further accelerate the algorithm , and MiG ( Kaiwen et al . ( 2018 ) ) was proposed to simplify the structure of Katyusha , and the momentum acceleration of MiG is embodied in each iteration as follows : ysk−1 = βsx s k−1 + ( 1− βs ) x̃s−1 . Moreover , it is easy to get that the oracle complexity of MiG is less than that of Prox-SVRG and APG , which means that MiG can effectively accelerate the original Prox-SVRG algorithm . Therefore , we can also use this acceleration technique to accelerate our algorithm and address the issue of slow convergence due to the calculations of two different gradients .
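As a concrete reading of the update rules above, the following sketch implements the ℓ1 proximal operator (soft-thresholding) and one extragradient-style double proximal-gradient step, y = prox(x − s∇F(x)) followed by x⁺ = prox(x − α∇F(y)), on an assumed Lasso instance with a least-squares F. The data and step sizes are illustrative; momentum, epochs and the set K from Algorithm 1 are omitted, so this is not the full AVR-SExtraGD algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, lam = 100, 20, 0.1
A = rng.normal(size=(n, d))
b = A @ (rng.normal(size=d) * (rng.random(d) < 0.3)) + 0.05 * rng.normal(size=n)

def grad_F(x):
    """Gradient of F(x) = (1/(2n)) * ||Ax - b||^2."""
    return A.T @ (A @ x - b) / n

def prox_l1(z, t):
    """Proximal operator of t * lam * ||.||_1 (soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)

def extragradient_step(x, s, alpha):
    """One extragradient-style double proximal-gradient update."""
    y = prox_l1(x - s * grad_F(x), s)                 # exploratory (mid) point
    return prox_l1(x - alpha * grad_F(y), alpha)      # corrected update from x

L = np.linalg.norm(A, 2) ** 2 / n                     # smoothness constant of F
s = alpha = 1.0 / L                                   # illustrative step sizes
x = np.zeros(d)
for _ in range(200):
    x = extragradient_step(x, s, alpha)
print(np.count_nonzero(np.abs(x) > 1e-8), "nonzero coordinates in the Lasso solution")
```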
The paper proposes an optimization method for solving unconstrained convex optimization problems where the objective function consists of a sum of several smooth components f_i and a (not necessarily smooth) convex function R. The proposed method AVR-SExtraGD is a stochastic descent method building on the previous algorithms Prox-SVRG (Lin 2014) and Katyusha (Zeyuan 2017). The previous Prox-SVRG method using a proximal operator is explained to converge fast but leads to inaccurate final solutions, while the Katyusha method is an algorithm based on momentum acceleration. The current paper builds on these two approaches and applies the momentum acceleration technique in a stochastic extragradient descent framework to achieve fast convergence.
SP:99d5b859a30f1825f5a21fb62fdf7a918b838b95
Accelerated Variance Reduced Stochastic Extragradient Method for Sparse Machine Learning Problems
This is an optimization algorithm paper, using the idea of the "extragradient" and proposing to combine acceleration with proximal gradient descent-type algorithms (Prox-SVRG). Their proposed algorithm, i.e., accelerated variance reduced stochastic extragradient descent, combines the advantages of Prox-SVRG and momentum acceleration techniques. The authors prove the convergence rate and oracle complexity of their algorithm for strongly convex and non-strongly convex problems. Their experiments on face recognition show improvement on top of Prox-SVRG as well as Katyusha. They also propose an asynchronous variant of their algorithm and show that it outperforms other asynchronous baselines.
SP:99d5b859a30f1825f5a21fb62fdf7a918b838b95
HighRes-net: Multi-Frame Super-Resolution by Recursive Fusion
1 INTRODUCTION . Multiple low-resolution images collectively contain more information than any individual lowresolution image , due to minor geometric displacements , e.g . shifts , rotations , atmospheric turbulence , and instrument noise . Multi-Frame Super-Resolution ( MFSR ) ( Tsai , 1984 ) aims to reconstruct hidden high-resolution details from multiple low-resolution views of the same scene . Single Image Super-Resolution ( SISR ) , as a special case of MFSR , has attracted much attention in the computer vision , machine learning and deep learning communities in the last 5 years , with neural networks learning complex image priors to upsample and interpolate images ( Xu et al. , 2014 ; Srivastava et al. , 2015 ; He et al. , 2016 ) . However , in the meantime not much work has explored the learning of representations for the more general problem of MFSR to address the additional challenges of co-registration and fusion of multiple low-resolution images . This paper explores how Multi-Frame Super-Resolution ( MFSR ) can benefit from recent advances in learning representations with neural networks . To the best of our knowledge , this work is the first to introduce a deep-learning approach that solves the co-registration , fusion and registration-at-theloss problems in an end-to-end learning framework . Prompting this line of research is the increasing drive towards planetary-scale Earth observation to monitor the environment and human rights violations . Such observation can be used to inform policy , achieve accountability and direct on-the-ground action , e.g . within the framework of the Sustainable Development Goals ( Jensen & Campbell , 2019 ) . Nomenclature Registration is the problem of estimating the relative geometric differences between two images ( e.g . due to shifts , rotations , deformations ) . Fusion , in the MFSR context , is the problem of mapping multiple low-res representations into a single representation . By coregistration , we mean the problem of registering all low-resolution views to improve their fusion . By registration-at-the-loss , we mean the problem of registering the super-resolved reconstruction to the high-resolution ground-truth prior to computing the loss . This gives rise to the notion of a registered loss . Co-registration of multiple images is required for longitudinal studies of land change and environmental degradation . The fusion of multiple images is key to exploiting cheap , high-revisit-frequency satellite imagery , but of low-resolution , moving away from the analysis of infrequent and expensive high-resolution images . Finally , beyond fusion itself , super-resolved generation is required throughout the technical stack : both for labeling , but also for human oversight ( Drexler , 2019 ) demanded by legal context ( Harris et al. , 2018 ) . Summary of contributions • HighRes-net : We propose a deep architecture that learns to fuse an arbitrary number of lowresolution frames with implicit co-registration through a reference-frame channel . • ShiftNet : Inspired by HomographyNet ( DeTone et al. , 2016 ) , we define a model that learns to register and align the super-resolved output of HighRes-net , using ground-truth high-resolution frames as supervision . This registration-at-the-loss mechanism enables more accurate feedback from the loss function into the fusion model , when comparing a super-resolved output to a ground truth high resolution image . 
Otherwise , a MFSR model would naturally yield blurry outputs to compensate for the lack of registration , to correct for sub-pixel shifts and account for misalignments in the loss . • By combining the two components above , we contribute the first architecture to learn fusion and registration end-to-end . • We test and compare our approach to several baselines on real-world imagery from the PROBA-V satellite of ESA . Our performance has topped the Kelvins competition on MFSR , organized by the Advanced Concepts Team of ESA ( Märtens et al. , 2019 ) ( see section 5 ) . The rest of the paper is divided as follows : in Section 2 , we discuss related work on SISR and MFSR ; Section 3 outlines HighRes-net and section 4 presents ShiftNet , a differentiable registration component that drives our registered loss mechanism during end-to-end training . We present our results in section 5 , and in Section 6 we discuss some opportunities for and limitations and risks of super-resolution . 2 BACKGROUND . 2.1 MULTI-FRAME SUPER-RESOLUTION . How much detail can we resolve in the digital sample of some natural phenomenon ? Nyquist ( 1928 ) observed that it depends on the instrument ' s sampling rate and the oscillation frequency of the underlying natural signal . Shannon ( 1949 ) built a sampling theory that explained Nyquist ' s observations when the sampling rate is constant ( uniform sampling ) and determined the conditions of aliasing in a sample . Figure 2 illustrates this phenomenon . Figure 2 : Top : A chirp harmonic oscillator sin ( 2πω ( t ) t ) , with instantaneous frequency ω ( t ) . Left : The shape of the high-resolution sample resembles the underlying chirp signal . Right : Close to t = 1 , the apparent frequency of the low-resolution sample does not match that of the chirp . This is an example of aliasing ( shown with red at its most extreme ) , and it happens when the sampling rate falls below the Nyquist rate , s_N = 2 · s_B , where s_B is the highest non-zero frequency of the signal . Sampling at high-resolution ( left ) maintains the frequency of the chirp signal ( top ) . Sampling at a lower resolution ( right ) , this apparent chirped frequency is lost due to aliasing , which means that the lower-resolution sample has a fundamentally smaller capacity for resolving the information of the natural signal , and a higher sampling rate can resolve more information . Shannon ' s sampling theory has since been generalized for multiple interleaved sampling frames ( Papoulis , 1977 ; Marks , 2012 ) .
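The aliasing effect described in the Figure 2 caption can be reproduced in a few lines. The sketch below samples a fixed-frequency tone (a simplification of the chirp) at a high rate and at a rate below twice the tone frequency, and reports the dominant frequency recovered from each sample; the specific frequencies and rates are illustrative assumptions, not the exact settings of the paper's figure.

```python
import numpy as np

def dominant_frequency(signal, rate):
    """Return the strongest nonnegative frequency (Hz) in a uniform sample."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / rate)
    return freqs[np.argmax(spectrum)]

f0 = 200.0                        # tone at 200 Hz
hi_rate, lo_rate = 1000.0, 250.0  # low rate's Nyquist limit is 125 Hz < 200 Hz

t_hi = np.arange(0.0, 1.0, 1.0 / hi_rate)
t_lo = np.arange(0.0, 1.0, 1.0 / lo_rate)
x_hi = np.sin(2 * np.pi * f0 * t_hi)
x_lo = np.sin(2 * np.pi * f0 * t_lo)

print(dominant_frequency(x_hi, hi_rate))   # ~200 Hz: faithfully resolved
print(dominant_frequency(x_lo, lo_rate))   # ~50 Hz: aliased, since 200 > 250 / 2
```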
One result of the generalized sampling theory is that we can go beyond the Nyquist limit of any individual uniform sample by interleaving several uniform samples taken concurrently . When an image is down-sampled to a lower resolution , its high-frequency details are lost permanently and cannot be recovered from any image in isolation . However , by combining multiple low-resolution images , it becomes possible to recover the original scene at a higher resolution . Moreover , different low-resolution samples may be sampled at different phases , such that the same high-resolution frequency information will be packed with a phase shift . As a consequence , when multiple low-resolution samples are available , the fundamental challenge of MFSR is de-aliasing , i.e . disentangling the high-frequency components ( Tsai , 1984 ) . The first work on MFSR ( Tsai , 1984 ) considered the reconstruction of a high-resolution image as a fusion of co-registered low-resolution images in the Fourier domain . With proper registration and fusion ( Irani & Peleg , 1991 ; Fitzpatrick et al. , 2000 ; Capel & Zisserman , 2001 ) , a composite super-resolved image can reveal some of the original high-frequency detail that would not have been accessible from a single low-resolution image . In this work , we introduce HighRes-net , which aims to provide an end-to-end deep learning framework for MFSR settings . Relation to Video and Stereo Super-Resolution While there are obvious similarities to Video SR ( Tao et al. , 2017 ; Sajjadi et al. , 2018 ; Yan et al. , 2019 ; Wang et al. , 2019b ) and Stereo SR ( Wang et al. , 2019d ; Wang et al. , 2019a ) , the setting of this work differs in several ways : HighRes-net learns to super-resolve sets and not sequences of low-res views . Video SR relies on motion estimation from a sequence of observations , and prediction at time t = T relies on predictions at t < T ( an autoregressive approach ) . In our case , by contrast , we predict a single image from an unordered set of low-res inputs , and the low-res views are multi-temporal ( taken at different times ) . Video SR methods assume that the input is a temporal sequence of frames , so motion or optical flow can be estimated to super-resolve the sequences of frames . In this work , we do not assume low-res inputs to be ordered in time . Our training input is a set of low-res views with unknown timestamps and our target output is a single image — not another sequence . 2.2 A PROBABILISTIC APPROACH . In addition to aliasing , MFSR deals with random processes like noise , blur , geometric distortions – all contributing to random low-resolution images . Traditionally , MFSR methods assume a-priori knowledge of the data-generating motion model , blur kernel , noise level and degradation process ; see , for example , Pickup et al . ( 2006 ) . Given multiple low-resolution images , the challenge of MFSR is to reconstruct a plausible higher-resolution image that could have generated the observed low-resolution images . Optimization methods aim to improve an initial guess by minimizing an error between simulated and observed low-resolution images . These methods traditionally model the additive noise and prior knowledge about natural images explicitly , to constrain the parameter search space and derive objective functions , using e.g . Total Variation ( Chan & Wong , 1998 ; Farsiu et al. , 2004 ) , Tikhonov regularization ( Nguyen et al. , 2001 ) or the Huber potential ( Pickup et al. , 2006 ) to define appropriate constraints on images .
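The classical register-then-fuse pipeline referenced above (e.g. Irani & Peleg, 1991) can be illustrated with a naive shift-and-add baseline: each low-resolution view is placed on an upsampled grid according to a known integer offset and overlapping contributions are averaged. The known offsets and the 2x factor are simplifying assumptions for illustration; HighRes-net instead learns co-registration and fusion end-to-end rather than relying on such a procedure.

```python
import numpy as np

def shift_and_add(lr_frames, offsets, scale=2):
    """Naive fusion of low-res frames with known integer offsets.

    lr_frames -- list of (h, w) arrays, all views of the same scene
    offsets   -- list of (dy, dx) offsets in high-res pixels, assumed to
                 keep every sample inside the high-res grid
    scale     -- upscaling factor (assumed 2 here)
    """
    h, w = lr_frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    weight = np.zeros_like(acc)
    for frame, (dy, dx) in zip(lr_frames, offsets):
        ys = np.arange(h) * scale + dy          # rows where LR pixels land
        xs = np.arange(w) * scale + dx          # columns where LR pixels land
        acc[np.ix_(ys, xs)] += frame
        weight[np.ix_(ys, xs)] += 1.0
    return acc / np.maximum(weight, 1.0)        # average where observed

# Toy usage: four 8x8 views sampling the four sub-pixel phases of a 16x16 scene.
rng = np.random.default_rng(0)
hr = rng.random((16, 16))
offsets = [(0, 0), (0, 1), (1, 0), (1, 1)]
views = [hr[dy::2, dx::2] for dy, dx in offsets]
print(np.abs(shift_and_add(views, offsets) - hr).max())   # ~0.0: scene recovered
```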
In some situations , the image degradation process is complex or not available , motivating the development of nonparametric strategies . Patch-based methods learn to form high-resolution images directly from low-resolution patches , e.g . with k-nearest neighbor search ( Freeman et al. , 2002 ; Chang et al. , 2004 ) , sparse coding and sparse dictionary methods ( Yang et al. , 2010 ; Zeyde et al. , 2010 ; Kim & Kwon , 2010 ) . The latter represent images in an over-complete basis and allow for sharing a prior across multiple sites . In this work , we are particularly interested in super-resolving satellite imagery . Much of the recent work in super-resolution has focused on SISR for natural images . For instance , Dong et al . ( 2014 ) showed that training a CNN for super-resolution is equivalent to sparse coding and dictionary-based approaches . Kim et al . ( 2016 ) proposed an approach to SISR using recursion to increase the receptive field of a model while maintaining capacity by sharing weights . Many more networks and learning strategies have recently been introduced for SISR and image deblurring . Benchmarks for SISR ( Timofte et al. , 2018 ) differ mainly in their upscaling method , network design , learning strategies , etc . We refer the reader to ( Wang et al. , 2019d ) for a more comprehensive review . Few deep-learning approaches have considered the more general MFSR setting and attempted to address it in an end-to-end learning framework . Recently , Kawulok et al . ( 2019 ) proposed a shift-and-add method and suggested “ including image registration ” in the learning process as future work . In the following sections , we describe our approach to solving both aspects of the registration problem – co-registration and registration-at-the-loss – in a memory-efficient manner .
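To illustrate the registration-at-the-loss idea in the simplest possible setting, the sketch below compensates an integer translation between a super-resolved output and the high-resolution ground truth before computing the reconstruction error, using a brute-force search over shifts. ShiftNet instead predicts the shift with a learned, differentiable network trained jointly with the fusion model; the integer shifts and exhaustive search here are simplifying assumptions.

```python
import numpy as np

def registered_mse(sr, hr, max_shift=3):
    """MSE after compensating the best integer (dy, dx) shift.

    sr, hr    -- 2-D arrays of the same shape
    max_shift -- search window half-width in pixels (illustrative)
    """
    best = np.inf
    m = max_shift
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(sr, (dy, dx), axis=(0, 1))
            # crop borders so wrapped-around pixels do not pollute the error
            err = np.mean((shifted[m:-m, m:-m] - hr[m:-m, m:-m]) ** 2)
            best = min(best, err)
    return best

rng = np.random.default_rng(0)
hr = rng.random((32, 32))
sr = np.roll(hr, (2, -1), axis=(0, 1))   # a "reconstruction" that is only misaligned
print(registered_mse(sr, hr))            # ~0.0 once the shift is compensated
print(np.mean((sr - hr) ** 2))           # large: a naive loss penalizes the misalignment
```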
This paper proposes an end-to-end multi-frame super-resolution algorithm that relies on pair-wise co-registration and fusion blocks (convolutional residual blocks), embedded in an encoder-decoder network 'HighRes-net' that estimates the super-resolved image. Because the ground-truth high-resolution image is typically misaligned with the estimated SR image, the authors propose to learn the shift with a neural network 'ShiftNet' in a cooperative setting with HighRes-net. The experiments were performed on the ESA challenge on satellite images, showing good results.
SP:8357fc2c4234854bf476afd5305b1191ef56c11a
HighRes-net: Multi-Frame Super-Resolution by Recursive Fusion
1 INTRODUCTION . Multiple low-resolution images collectively contain more information than any individual lowresolution image , due to minor geometric displacements , e.g . shifts , rotations , atmospheric turbulence , and instrument noise . Multi-Frame Super-Resolution ( MFSR ) ( Tsai , 1984 ) aims to reconstruct hidden high-resolution details from multiple low-resolution views of the same scene . Single Image Super-Resolution ( SISR ) , as a special case of MFSR , has attracted much attention in the computer vision , machine learning and deep learning communities in the last 5 years , with neural networks learning complex image priors to upsample and interpolate images ( Xu et al. , 2014 ; Srivastava et al. , 2015 ; He et al. , 2016 ) . However , in the meantime not much work has explored the learning of representations for the more general problem of MFSR to address the additional challenges of co-registration and fusion of multiple low-resolution images . This paper explores how Multi-Frame Super-Resolution ( MFSR ) can benefit from recent advances in learning representations with neural networks . To the best of our knowledge , this work is the first to introduce a deep-learning approach that solves the co-registration , fusion and registration-at-theloss problems in an end-to-end learning framework . Prompting this line of research is the increasing drive towards planetary-scale Earth observation to monitor the environment and human rights violations . Such observation can be used to inform policy , achieve accountability and direct on-the-ground action , e.g . within the framework of the Sustainable Development Goals ( Jensen & Campbell , 2019 ) . Nomenclature Registration is the problem of estimating the relative geometric differences between two images ( e.g . due to shifts , rotations , deformations ) . Fusion , in the MFSR context , is the problem of mapping multiple low-res representations into a single representation . By coregistration , we mean the problem of registering all low-resolution views to improve their fusion . By registration-at-the-loss , we mean the problem of registering the super-resolved reconstruction to the high-resolution ground-truth prior to computing the loss . This gives rise to the notion of a registered loss . Co-registration of multiple images is required for longitudinal studies of land change and environmental degradation . The fusion of multiple images is key to exploiting cheap , high-revisit-frequency satellite imagery , but of low-resolution , moving away from the analysis of infrequent and expensive high-resolution images . Finally , beyond fusion itself , super-resolved generation is required throughout the technical stack : both for labeling , but also for human oversight ( Drexler , 2019 ) demanded by legal context ( Harris et al. , 2018 ) . Summary of contributions • HighRes-net : We propose a deep architecture that learns to fuse an arbitrary number of lowresolution frames with implicit co-registration through a reference-frame channel . • ShiftNet : Inspired by HomographyNet ( DeTone et al. , 2016 ) , we define a model that learns to register and align the super-resolved output of HighRes-net , using ground-truth high-resolution frames as supervision . This registration-at-the-loss mechanism enables more accurate feedback from the loss function into the fusion model , when comparing a super-resolved output to a ground truth high resolution image . 
Otherwise , a MFSR model would naturally yield blurry outputs to compensate for the lack of registration , to correct for sub-pixel shifts and account for misalignments in the loss . • By combining the two components above , we contribute the first architecture to learn fusion and registration end-to-end . • We test and compare our approach to several baselines on real-world imagery from the PROBA-V satellite of ESA . Our performance has topped the Kelvins competition on MFSR , organized by the Advanced Concepts Team of ESA ( Märtens et al. , 2019 ) ( see section 5 ) . The rest of the paper is divided as follows : in Section 2 , we discuss related work on SISR and MFSR ; Section 3 outlines HighRes-net and section 4 presents ShiftNet , a differentiable registration component that drives our registered loss mechanism during end-to-end training . We present our results in section 5 , and in Section 6 we discuss some opportunities for and limitations and risks of super-resolution . 2 BACKGROUND . 2.1 MULTI-FRAME SUPER-RESOLUTION . How much detail can we resolve in the digital sample of some natural phenomenon ? Nyquist ( 1928 ) observed that it depends on the instrument ’ s sampling rate and the oscillation frequency of the underlying natural signal . Shannon ( 1949 ) built a sampling theory that explained Nyquist ’ s observations when the sampling rate is constant ( uniform sampling ) and determined the conditions of aliasing in a sample . Figure 2 illustrates this phenomenon . Under review as a conference paper at ICLR 2020 0.0 0.2 0.4 0.6 0.8 1.0t ( sec ) chirp signal sin ( 2πω ( t ) t ) 0.0 0.2 0.4 0.6 0.8 1.0t ( sec ) 0 250 ( Hz ) frequency ω ( t ) 128 t sufficient ( Nyquist ) sampling rate 0.0 0.2 0.4 0.6 0.8 1.0t ( sec ) high-resolution ( HR ) sampling frequency 1 kHzfHR −400 −200 0 200 400 frequency ( Hz ) DFT magnitudeF fHR 0.0 0.2 0.4 0.6 0.8 1.0t ( sec ) chirp signal sin ( ω ( t ) t ) 0.0 0.2 0.4 0.6 0.8 1.0t ( sec ) 0 250 ( Hz ) frequency ω ( t ) /2π128 t sufficient ( Nyquist ) sampling rate 0.0 0.2 0.4 0.6 0.8 1.0t ( sec ) high-resolution ( HR ) sampling frequency 1 kHzfHR −400 −200 0 200 400 frequency ( Hz ) DFT magnitudeF fHR 0.0 0.2 0.4 0.6 0.8 1.0t ( sec ) chirp signal sin ( ω ( t ) t ) 0.0 0.2 0.4 0.6 0.8 1.0t ( sec ) 0 250 ( Hz ) frequency ω ( t ) /2π 128 t sufficient ( Nyquist ) sampling rate fLR sampling rate 0.0 0.2 0.4 0.6 0.8 1.0t ( sec ) ALIASINGlow-resolution ( LR ) sampling frequency 250 HzfLR −100 −50 0 50 100 frequency ( Hz ) DFT magnitudeF fLRFigure 2 : Top : A chirp harmonic oscillator sin ( 2πω ( t ) t ) , with instantaneous frequency ω ( t ) . Left : The shape of the high-resolution sample resembles the underlying chirp signal . Right : Close to t = 1 , the apparent frequency of the low-resolution sample does not match that of the chirp . This is an example of aliasing ( shown with red at its most extreme ) , and it happens when the sampling rate falls below the Nyquist rate , sN = 2 · sB , where sB is the highest non-zero frequency of the signal . Sampling at high-resolution ( left ) maintains the frequency of the chirp signal ( top ) . Sampling at a lower resolution ( right ) , this apparent chirped frequency is lost due to aliasing , which means that the lower-resolution sample has a fundamentally smaller capacity for resolving the information of the natural signal , and a higher sampling rate can resolve more information . Shannon ’ s sampling theory has since been generalized for multiple interleaved sampling frames ( Papoulis , 1977 ; Marks , 2012 ) . 
One result of the generalized sampling theory is that we can go beyond the Nyquist limit of any individual uniform sample by interleaving several uniform samples taken concurrently . When an image is down-sampled to a lower resolution , its high-frequency details are lost permanently and can not be recovered from any image in isolation . However , by combining multiple low-resolution images , it becomes possible to recover the original scene at a higher resolution . Moreover , different low-resolution samples may be sampled at different phases , such that the same high-resolution frequency information will be packed with a phase shift . As a consequence , when multiple low-resolution samples are available , the fundamental challenge of MFSR is de-aliasing , i.e . disentangling the high-frequency components ( Tsai , 1984 ) . The first work on MSFR ( Tsai , 1984 ) considered the reconstruction of a high-resolution image as a fusion of co-registered low-resolution images in the Fourier domain . With proper registration and fusion ( Irani & Peleg , 1991 ; Fitzpatrick et al. , 2000 ; Capel & Zisserman , 2001 ) , a composite super-resolved image can reveal some of the original high-frequency detail that would not have been accessible from single low-resolution image . In this work , we introduce HighRes-Net , which aims to provide an end-to-end deep learning framework for MFSR settings . Relation to Video and Stereo Super-Resolution While there are obvious similarities to Video SR ( Tao et al. , 2017 ; Sajjadi et al. , 2018 ; Yan et al. , 2019 ; Wang et al. , 2019b ) and Stereo SR ( Wang et al. , 2019d ) ( Wang et al. , 2019a ) , the setting of this work differs in several ways : HighResnet learns to super-resolve sets and not sequences of low-res views . Video SR relies on motion estimation from a sequence of observations . Also , prediction at time t = T relies on predictions at t < T ( autoregressive approach ) . Whereas in our case , we predict a single image from an unordered set of low-res inputs . Also , the low-res views are multi-temporal ( taken at different times ) . Video SR methods assume that the input is a temporal sequence of frames . Motion or optical flow can be estimated to super-resolve the sequences of frames . In this work , we do not assume low-res inputs to be ordered in time . Our training input is a set of low-res views with unknown timestamps and our target output is a single image — not another sequence . 2.2 A PROBABILISTIC APPROACH . In addition to aliasing , MFSR deals with random processes like noise , blur , geometric distortions – all contributing to random low-resolution images . Traditionally , MFSR methods assume a-priori knowledge of the data generating motion model , blur kernel , noise level and degradation process ; see for example , Pickup et al . ( 2006 ) . Given multiple low-resolution images , the challenge of MFSR is to reconstruct a plausible image of higher-resolution that could have generated the observed lowresolution images . Optimization methods aim to improve an initial guess by minimizing an error between simulated and observed low-resolution images . These methods traditionally model the additive noise and prior knowledge about natural images explicitly , to constrain the parameter search space and derive objective functions , using e.g . Total Variation ( Chan & Wong , 1998 ; Farsiu et al. , 2004 ) , Tikhonov regularization ( Nguyen et al. , 2001 ) or Huber potential ( Pickup et al. , 2006 ) to define appropriate constraints on images . 
In some situations , the image degradation process is complex or not available , motivating the development of nonparametric strategies . Patch-based methods learn to form high-resolution images directly from low-resolution patches , e.g . with k-nearest neighbor search ( Freeman et al. , 2002 ; Chang et al. , 2004 ) , sparse coding and sparse dictionary methods ( Yang et al. , 2010 ; Zeyde et al. , 2010 ; Kim & Kwon , 2010 ) . The latter represent images in an over-complete basis and allow for sharing a prior across multiple sites . In this work , we are particularly interested in super-resolving satellite imagery . Much of the recent work in Super-Resolution has focused on SISR for natural images . For instance , Dong et al . ( 2014 ) showed that training a CNN for super-resolution is equivalent to sparse coding and dictionary-based approaches . Kim et al . ( 2016 ) proposed an approach to SISR using recursion to increase the receptive field of a model while maintaining capacity by sharing weights . Many more networks and learning strategies have recently been introduced for SISR and image deblurring . Benchmarks for SISR ( Timofte et al. , 2018 ) differ mainly in their upscaling method , network design , learning strategies , etc . We refer the reader to ( Wang et al. , 2019d ) for a more comprehensive review . Few deep-learning approaches have considered the more general MFSR setting and attempted to address it in an end-to-end learning framework . Recently , Kawulok et al . ( 2019 ) proposed a shift-and-add method and suggested “ including image registration ” in the learning process as future work . In the following sections , we describe our approach to solving both aspects of the registration problem – co-registration and registration-at-the-loss – in a memory-efficient manner .
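The paper ’ s registration-at-the-loss is driven by ShiftNet , a learned sub-pixel registration module ( Section 4 ) . As a rough , non-equivalent illustration of why registering at the loss matters , the following PyTorch sketch scores a super-resolved image against the ground truth under the best small integer translation ; the function name , the shift range and the border handling are our own choices , not the paper ’ s method .

```python
import torch
import torch.nn.functional as F

def registered_mse(sr, hr, max_shift=3):
    """MSE between sr and hr under the best integer (dy, dx) translation of sr.

    sr, hr: tensors of shape (B, C, H, W). A crude stand-in for learned
    sub-pixel registration; integer shifts only, wrapped borders cropped away.
    """
    best = None
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = torch.roll(sr, shifts=(dy, dx), dims=(-2, -1))
            m = max_shift
            err = F.mse_loss(shifted[..., m:-m, m:-m], hr[..., m:-m, m:-m])
            best = err if best is None else torch.minimum(best, err)
    return best

# usage: loss = registered_mse(model(lr_views), hr_target)
```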
This paper presents a multi-frame super-resolution method applied to satellite imagery. It first estimates a reference image for the multiple input LR images by median filtering. It then encodes the reference image pairwise with each of the input images and recursively fuses the corresponding feature maps with residual blocks and bottleneck layers, until a single feature map for the entire set of images is obtained. In other words, the LR images are fused into a single global encoding. A standard upsampling network is then applied to obtain the super-resolved image; this image is fed into a network that estimates only a translational shift, and the image is finally resampled with the estimated translation parameters.
SP:8357fc2c4234854bf476afd5305b1191ef56c11a
Improving End-to-End Object Tracking Using Relational Reasoning
1 INTRODUCTION . Real-world environments can be rich and contain countless types of interacting objects . Intelligent autonomous agents need to understand both the objects and interactions between them if they are to operate in those environments . This motivates the need for class-agnostic algorithms for tracking multiple objects—a capability that is not supported by the popular tracking-by-detection paradigm . In tracking-by-detection , objects are detected in each frame independently , e. g. , by a pre-trained deep convolutional neural network ( CNN ) such as YOLO ( Redmon et al . ( 2016 ) ) , and then linked across frames . Algorithms from this family can achieve high accuracy , provided sufficient labelled data to train the object detector , and given that all encountered objects can be associated with known classes , but fail when faced with objects from previously unseen categories . Hierarchical attentive recurrent tracking ( HART ) is a recently-proposed , alternative method for single-object tracking ( SOT ) , which can track arbitrary objects indicated by the user ( Kosiorek et al . ( 2017 ) ) . This is done by providing an initial bounding-box , which may be placed over any part of the image , regardless of whether it contains an object or what class the object is . HART efficiently processes just the relevant part of an image using spatial attention ; it also integrates object detection , feature extraction , and motion modelling into one network , which is trained fully end-to-end . Contrary to tracking-by-detection , where only one video frame is typically processed at any given time to generate bounding box proposals , end-to-end learning in HART allows for discovering complex visual and spatio-temporal patterns in videos , which is conducive to inferring what an object is and how it moves . In the original formulation , HART is limited to the single-object modality—as are other existing end-to-end trackers ( Kahou et al . ( 2017 ) ; Rasouli Danesh et al . ( 2019 ) ; Gordon et al . ( 2018 ) ) . In this work , we present MOHART , a class-agnostic tracker with complex relational reasoning capabilities provided by a multi-headed self-attention module ( Vaswani et al . ( 2017 ) ; Lee et al . ( 2019 ) ) . MOHART infers the latent state of every tracked object in parallel , and uses self-attention to inform per-object states about other tracked objects . This helps to avoid performance loss under self-occlusions of tracked objects or strong camera motion . Moreover , since the model is trained end-to-end , it is able to learn how to manage faulty or missing sensor inputs . See fig . 1 for a high-level illustration of MOHART . In order to track objects , MOHART estimates their states , which can be naturally used to predict future trajectories over short temporal horizons , which is especially useful for planning in the context of autonomous agents . MOHART can be trained for object tracking and trajectory prediction at the same time , thereby increasing the statistical efficiency of learning . In contrast to prior art , where trajectory prediction and object tracking are usually addressed as separate problems with unrelated solutions , our work shows that trajectory prediction and object tracking are best addressed jointly . Section 2 describes prior art in tracking-by-detection , end-to-end tracking and pedestrian trajectory prediction .
In Section 3 , we describe our approach , which uses a permutation-invariant self-attention module to enable tracking multiple objects end-to-end with relational reasoning . Section 4 contrasts our approach with multi-object trackers which do not explicitly enforce permutation invariance but have the capacity to learn it , simpler permutation-invariant architectures , as well as multiple single-object trackers running in parallel . We show that multi-headed self-attention significantly outperforms other approaches . Finally , in Section 5 , we apply MOHART to real world datasets and show that permutation-invariant relational reasoning leads to consistent performance improvement compared to HART both in tracking and trajectory prediction . 2 RELATED WORK . Tracking-by-Detection Vision-based tracking approaches typically follow a tracking-by-detection paradigm : objects are first detected in each frame independently , and then a tracking algorithm links the detections from different frames to propose a coherent trajectory ( Zhang et al . ( 2008 ) ; Milan et al . ( 2014 ) ; Bae and Yoon ( 2017 ) ; Keuper et al . ( 2018 ) ) . Motion models and appearance are often used to improve the association between detected bounding-boxes in a postprocessing step . Tracking-by-detection algorithms currently provide the state-of-the-art in multi-object tracking on common benchmark suites , and we fully acknowledge that MOHART is not competitive at this stage in scenarios where high-quality detections are available for each frame . MOHART can in principle be equipped with the ability to use bounding boxes provided by an object detector , but this is beyond the scope of this project . End-to-End Tracking A newly established and much less explored stream of work approaches tracking in an end-to-end fashion . A key difficulty here is that extracting an image crop ( according to bounding-boxes provided by a detector ) , is non-differentiable and results in high-variance gradient estimators . Kahou et al . ( 2017 ) propose an end-to-end tracker with soft spatial-attention using a 2D grid of Gaussians instead of a hard bounding-box . HART draws inspiration from this idea , employs an additional attention mechanism , and shows promising performance on the real-world KITTI dataset ( Kosiorek et al . ( 2017 ) ) . HART forms the foundation of this work . It has also been extended to incorporate depth information from RGBD cameras ( Rasouli Danesh et al . ( 2019 ) ) . Gordon et al . ( 2018 ) propose an approach in which the crop corresponds to the scaled up previous bounding-box . This simplifies the approach , but does not allow the model to learn where to look— i. e. , no gradient is backpropagated through crop coordinates . To the best of our knowledge , there are no successful implementations of any such end-to-end approaches for multi-object tracking beyond SQAIR ( Kosiorek et al . ( 2018 ) ) , which works only on datasets with static backgrounds . On real-world data , the only end-to-end approaches correspond to applying multiple single-object trackers in parallel—a method which does not leverage the potential of scene context or inter-object interactions . Pedestrian trajectory prediction Predicting pedestrian trajectories has a long history in computer vision and robotics . Initial research modelled social forces using hand-crafted features ( Lerner et al . ( 2007 ) ; Pellegrini et al . ( 2009 ) ; Trautman and Krause ( 2010 ) ; Yamaguchi et al . 
( 2011 ) ) or MDP-based motion transition models ( Rudenko et al . ( 2018 ) ) , while more recent approaches learn from context information , e. g. , positions of other pedestrians or landmarks in the environment . Social-LSTM ( Alahi et al . ( 2016 ) ) employs a long short-term memory ( LSTM ) to predict pedestrian trajectories and uses max-pooling to model global social context . Attention mechanisms have been employed to query the most relevant information , such as neighbouring pedestrians , in a learnable fashion ( Su et al . ( 2016 ) ; Fernando et al . ( 2018 ) ; Sadeghian et al . ( 2019 ) ) . Apart from relational learning , context ( Varshneya and Srinivasaraghavan ( 2017 ) ) , periodical time information ( Sun et al . ( 2018 ) ) , and constant motion priors ( Schöller et al . ( 2019 ) ) have proven effective in predicting long-term trajectories . Our work stands apart from this prior art by not relying on ground truth tracklets . It addresses the more challenging task of working directly with visual input , performing tracking , modelling interactions , and , depending on the application scenario , simultaneously predicting future motions . As such , it can also be compared to Visual Interaction Networks ( VIN ) ( Watters et al . ( 2017 ) ) , which use a CNN to encode three consecutive frames into state vectors—one per object—and feed these into a recurrent neural network ( RNN ) , which has an Interaction Network ( Battaglia et al . ( 2016 ) ) at its core . More recently , Relational Neural Expectation Maximization ( R-NEM ) has been proposed as an unsupervised approach which combines scene segmentation and relational reasoning ( van Steenkiste et al . ( 2018 ) ) . Both VINs and R-NEM make accurate predictions in physical scenarios , but , to the best of our knowledge , have not been applied to real world data . 3 RECURRENT MULTI-OBJECT TRACKING WITH SELF-ATTENTION . This section describes the model architecture in fig . 1 . We start by describing the hierarchical attentive recurrent tracking ( HART ) algorithm ( Kosiorek et al . ( 2017 ) ) , and then follow with an extension of HART to tracking multiple objects , where multiple instances of HART communicate with each other using multiheaded attention to facilitate relational reasoning . We also explain how this method can be extended to trajectory prediction instead of just tracking . 3.1 HIERARCHICAL ATTENTIVE RECURRENT TRACKING ( HART ) . HART is an attention-based recurrent algorithm , which can efficiently track single objects in a video . It uses a spatial attention mechanism to extract a glimpse gt , which corresponds to a small crop of the image xt at time-step t , containing the object of interest . This allows it to dispense with the processing of the whole image and can significantly decrease the amount of computation required . HART uses a CNN to convert the glimpse gt into features ft , which then update the hidden state ht of a LSTM core . The hidden state is used to estimate the current bounding-box bt , spatial attention parameters for the next time-step at+1 , as well as object appearance . Importantly , the recurrent core can learn to predict complicated motion conditioned on the past history of the tracked object , which leads to relatively small attention glimpses—contrary to CNN-based approaches ( Held et al . ( 2016 ) ; Valmadre et al . ( 2017 ) ) , HART does not need to analyse large regions-of-interest to search for tracked objects . 
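The single-object HART step described above ( glimpse → CNN features → LSTM → bounding-box , attention and appearance heads ) can be summarized with the following schematic PyTorch sketch . It is our own simplification for illustration : the spatial attention is reduced here to a differentiable crop via an affine grid and bilinear sampling rather than the hierarchical attention of the original paper , the CNN is a toy stack , and all layer sizes are placeholders .

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HARTStepSketch(nn.Module):
    """One recurrent step: attend -> extract features -> update LSTM -> predict."""
    def __init__(self, glimpse_size=32, feat_dim=128, hidden=256):
        super().__init__()
        self.glimpse_size = glimpse_size
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, feat_dim))
        self.lstm = nn.LSTMCell(feat_dim, hidden)
        self.bbox_head = nn.Linear(hidden, 4)        # b_t: (cx, cy, w, h), normalized
        self.attn_head = nn.Linear(hidden, 4)        # attention box a_{t+1}
        self.appearance_head = nn.Linear(hidden, feat_dim)

    def glimpse(self, image, box):
        # box: (B, 4) with normalized (cx, cy, w, h) in [-1, 1] coordinates
        cx, cy, w, h = box.unbind(-1)
        theta = torch.stack([
            torch.stack([w, torch.zeros_like(w), cx], -1),
            torch.stack([torch.zeros_like(h), h, cy], -1)], 1)
        grid = F.affine_grid(theta, (image.size(0), 3, self.glimpse_size,
                                     self.glimpse_size), align_corners=False)
        return F.grid_sample(image, grid, align_corners=False)

    def forward(self, image, attn_box, state=None):
        g = self.glimpse(image, attn_box)            # g_t
        f = self.cnn(g)                              # f_t
        h, c = self.lstm(f, state)                   # update hidden state h_t
        return (self.bbox_head(h), torch.tanh(self.attn_head(h)),
                self.appearance_head(h), (h, c))

# usage: bbox, next_attn, appearance, state = step(frame, prev_attn_box, state)
```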
In the original paper , HART processes the glimpse with an additional ventral and dorsal stream on top of the feature extractor . Early experiments have shown that this does not improve performance on the MOTChallenge dataset , presumably due to the often small objects and the overall small amount of training data . Further details are provided in Appendix B . The algorithm is initialised with a bounding-box1 b1 for the first time-step , and operates on a sequence of raw images x1 : T . For time-steps t ≥ 2 , it recursively outputs bounding-box estimates for the current time-step and predicted attention parameters for the next time-step . The performance is measured as intersection-over-union ( IoU ) averaged over all time steps in which an object is present , excluding the first time step . Although HART can track arbitrary objects , it is limited to tracking one object at a time . While it can be deployed on several objects in parallel , different HART instances have no means of communication . This results in performance loss , as it is more difficult to identify occlusions , ego-motion and object interactions . Below , we propose an extension of HART which remedies these shortcomings . 3.2 MULTI-OBJECT HIERARCHICAL ATTENTIVE RECURRENT TRACKING ( MOHART ) . Multi-object support in HART requires the following modifications . Firstly , in order to handle a dynamically changing number of objects , we apply HART to multiple objects in parallel , where all parameters between HART instances are shared . We refer to each HART instance as a tracker . Secondly , we introduce a presence variable pt , m for object m. It is used to mark whether an object should interact with other objects , as well as to mask the loss function ( described in ( Kosiorek et al . ( 2017 ) ) ) for the given object when it is not present . In this setup , parallel trackers cannot exchange information and are conceptually still single-object trackers , which we use as a baseline , referred to as HART ( despite it being an extension of the original algorithm ) . Finally , to enable communication between trackers , we augment HART with an additional step between feature extraction and the LSTM . ( Footnote 1 : We can use either a ground-truth bounding-box or one provided by an external detector ; the only requirement is that it contains the object of interest . ) For each object , a glimpse is extracted and processed by a CNN ( see fig . 1 ) . Furthermore , spatial attention parameters are linearly projected onto a vector of the same size and added to this representation , acting as a positional encoding . This is then concatenated with the hidden state of the recurrent module of the respective object ( see fig . 2 ) . Let ft , m denote the resulting feature vector corresponding to the mth object , and let ft,1 : M be the set of such features for all objects . Since different objects can interact with each other , it is necessary to use a method that can inform each object about the effects of their interactions with other objects . Moreover , since features extracted from different objects comprise a set , this method should be permutation-equivariant , i. e. , the results should not depend on the order in which object features are processed . Therefore , we use the multi-head self-attention block ( SAB , Lee et al . ( 2019 ) ) , which is able to account for higher-order interactions between set elements when computing their representations .
Intuitively , in our case , SAB allows any of the trackers to query other trackers about attributes of their respective objects , e. g. , distance between objects , their direction of movement , or their relation to the camera . This is implemented as follows , Q = Wq f1:M + bq , K = Wk f1:M + bk , V = Wv f1:M + bv , ( 1 ) Oi = softmax ( Qi Ki^T ) Vi , i = 1 , . . . , H , ( 2 ) o1:M = O = concat ( O1 , . . . , OH ) , ( 3 ) where om is the output of the relational reasoning module for object m. Time-step subscripts are dropped to decrease clutter . In Eq . 1 , each of the extracted features ft , m is linearly projected into a triplet of key kt , m , query qt , m and value vt , m vectors . Together , they comprise K , Q and V matrices with M rows and dq , dk , dk columns , respectively . K , Q and V are then split up into multiple heads H ∈ N+ , which allows querying different attributes by comparing and aggregating different projections of the features . Multiplying Qi Ki^T in Eq . 2 allows comparing every query vector qt , m , i to all key vectors kt,1:M , i , where the value of the corresponding dot-products represents the degree of similarity . Similarities are then normalised via a softmax operation and used to aggregate the values V . Finally , outputs of different attention heads are concatenated in Eq . 3 . SAB produces M output vectors , one for each input , which are then concatenated with corresponding inputs and fed into separate LSTMs for further processing , as in HART—see fig . 1 . MOHART is trained fully end-to-end , contrary to other tracking approaches . It maintains a hidden state , which can contain information about the object ’ s motion . One benefit is that in order to predict future trajectories , one can simply feed black frames into the model . Our experiments show that the model learns to fall back on the motion model captured by the LSTM in this case .
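A compact PyTorch sketch of the self-attention step in Eqs . ( 1 ) – ( 3 ) is given below . It is our own illustration : the dimensions , the absence of dot-product scaling , and the way heads are split follow the equations as written above , not necessarily the authors ’ released code .

```python
import torch
import torch.nn as nn

class SelfAttentionSketch(nn.Module):
    """Multi-head self-attention over a set of M per-object features, Eqs. (1)-(3)."""
    def __init__(self, d_in, d_k, n_heads):
        super().__init__()
        assert d_k % n_heads == 0
        self.h = n_heads
        self.q = nn.Linear(d_in, d_k)   # Q = Wq f + bq
        self.k = nn.Linear(d_in, d_k)   # K = Wk f + bk
        self.v = nn.Linear(d_in, d_k)   # V = Wv f + bv

    def forward(self, f):
        # f: (M, d_in) -- unordered set of object features at one time step
        M = f.size(0)
        def split(x):                   # (M, d_k) -> (heads, M, d_k / heads)
            return x.view(M, self.h, -1).transpose(0, 1)
        Q, K, V = split(self.q(f)), split(self.k(f)), split(self.v(f))
        A = torch.softmax(Q @ K.transpose(-2, -1), dim=-1)   # (heads, M, M)
        O = A @ V                                            # Eq. (2), per head
        return O.transpose(0, 1).reshape(M, -1)              # Eq. (3): concat heads

# usage (illustrative sizes): sab = SelfAttentionSketch(d_in=128, d_k=128, n_heads=4)
# o = sab(torch.randn(5, 128))   # one output vector per tracked object
```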
The paper presents a class-agnostic method for tracking multiple moving objects (MOHART) that extends an existing single-object tracking method (Hierarchical Attentive Recurrent Tracking, HART). Similarly to HART, MOHART utilizes an attention mechanism and LSTM units. The extension from HART to MOHART is done in two main steps: HART is applied to multiple objects in parallel, with a presence variable attached to each object, and a permutation-invariant network is added that learns the interactions between the objects.
SP:8719843b0fa8359a27642c1ffe94e17b748a0a60
This paper deals with the problem of multi-object tracking and trajectory prediction over multiple video frames. The main focus is adding a relational-reasoning building block to the original HART framework. With multiple objects, the key is to learn a permutation-invariant representation over potentially changing and dynamic object trajectories. The paper also uses toy examples to show that the proposed relational-reasoning block is not necessarily beneficial when object trajectories are less random and more static. Finally, experiments on real data demonstrate that the proposed method, which accounts for relational reasoning, is helpful, though by a limited margin.
SP:8719843b0fa8359a27642c1ffe94e17b748a0a60
A Deep Recurrent Neural Network via Unfolding Reweighted l1-l1 Minimization
1 INTRODUCTION . The problem of reconstructing sequential signals from low-dimensional measurements across time is of great importance for a number of applications such as time-series data analysis , future-frame prediction , and compressive video sensing . Specifically , we consider the problem of reconstructing a sequence of signals st ∈ Rn0 , t = 1 , 2 , . . . , T , from low-dimensional measurements xt = Ast , where A ∈ Rn×n0 ( n ≪ n0 ) is a sensing matrix . We assume that st has a sparse representation ht ∈ Rh in a dictionary D ∈ Rn0×h , that is , st = Dht . At each time step t , the signal st can be independently reconstructed using the measurements xt by solving ( Donoho , 2006 ) : min ht { ( 1/2 ) ‖xt − ADht‖22 + λ‖ht‖1 } , ( 1 ) where ‖ · ‖p is the ` p-norm and λ is a regularization parameter . The iterative shrinkage-thresholding algorithm ( ISTA ) ( Daubechies et al. , 2004 ) solves ( 1 ) by iterating over h ( l ) t = φλ/c ( h ( l−1 ) t − ( 1/c ) DTAT ( ADh ( l−1 ) t − xt ) ) , where l is the iteration counter , φγ ( u ) = sign ( u ) max ( 0 , |u| − γ ) is the soft-thresholding operator , γ = λ/c , and c is an upper bound on the Lipschitz constant of the gradient of ( 1/2 ) ‖xt − ADht‖22 . Under the assumption that sequential signal instances are correlated , we consider the following sequential signal reconstruction problem : min ht { ( 1/2 ) ‖xt − ADht‖22 + λ1‖ht‖1 + λ2 R ( ht , ht−1 ) } , ( 2 ) where λ1 , λ2 > 0 are regularization parameters and R ( ht , ht−1 ) is an added regularization term that expresses the similarity of the representations ht and ht−1 of two consecutive signals . Wisdom et al . ( 2017 ) proposed an RNN design ( coined Sista-RNN ) by unfolding the sequential version of ISTA . That study assumed that two consecutive signals are close in the ` 2-norm sense , formally , R ( ht , ht−1 ) = ( 1/2 ) ‖Dht − FDht−1‖22 , where F ∈ Rn0×n0 is a correlation matrix between st and st−1 . More recently , the study by Le et al . ( 2019 ) designed the ` 1- ` 1-RNN , which stems from unfolding an algorithm that solves the ` 1- ` 1 minimization problem ( Mota et al. , 2017 ; 2016 ) . This is a version of Problem ( 2 ) with R ( ht , ht−1 ) = ‖ht − Ght−1‖1 , where G ∈ Rh×h is an affine transformation that promotes the correlation between ht and ht−1 . Both studies ( Wisdom et al. , 2017 ; Le et al. , 2019 ) have shown that carefully-designed deep RNN models outperform the generic RNN model and ISTA ( Daubechies et al. , 2004 ) in the task of sequential frame reconstruction . Deep neural networks ( DNN ) have achieved state-of-the-art performance in solving ( 1 ) for individual signals , both in terms of accuracy and inference speed ( Mousavi et al. , 2015 ) . However , these models are often criticized for their lack of interpretability and theoretical guarantees ( Lucas et al. , 2018 ) . Motivated by this , several studies focus on designing DNNs that incorporate domain knowledge , namely , signal priors . These include deep unfolding methods which design neural networks to learn approximations of iterative optimization algorithms . Examples of this approach are LISTA ( Gregor & LeCun , 2010 ) and its variants , including ADMM-Net ( Sun et al. , 2016 ) , AMP ( Borgerding et al. , 2017 ) , and an unfolded version of the iterative hard thresholding algorithm ( Xin et al. , 2016 ) .
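As a concrete reference point for the ISTA recursion above , here is a minimal numpy sketch for a single time step ; the problem sizes and random data are placeholders , and the dictionary is folded into the effective operator B = AD as in Problem ( 1 ) .

```python
import numpy as np

def soft_threshold(u, gamma):
    """phi_gamma(u) = sign(u) * max(0, |u| - gamma)."""
    return np.sign(u) * np.maximum(np.abs(u) - gamma, 0.0)

def ista(x, B, lam, n_iters=200):
    """Minimize (1/2)||x - B h||_2^2 + lam * ||h||_1 over h (one time step)."""
    c = np.linalg.norm(B, 2) ** 2          # upper bound on the gradient's Lipschitz constant
    h = np.zeros(B.shape[1])
    for _ in range(n_iters):
        h = soft_threshold(h - (1.0 / c) * B.T @ (B @ h - x), lam / c)
    return h

# toy usage with placeholder sizes: n = 50 measurements, code dimension 200, 10-sparse code
rng = np.random.default_rng(0)
B = rng.normal(size=(50, 200)) / np.sqrt(50)     # stands in for A @ D
h_true = np.zeros(200)
h_true[rng.choice(200, 10, replace=False)] = 1.0
x = B @ h_true
print(np.linalg.norm(ista(x, B, lam=0.05) - h_true))   # small residual
```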
LISTA ( Gregor & LeCun , 2010 ) unrolls the iterations of ISTA into a feed-forward neural network with weights , where each layer implements an iteration : h ( l ) t = φγ ( l ) ( W ( l ) h ( l−1 ) t + U ( l ) xt ) , with W ( l ) = I − ( 1/c ) DTATAD , U ( l ) = ( 1/c ) DTAT , and γ ( l ) being learned from data . It has been shown ( Gregor & LeCun , 2010 ; Sprechmann et al. , 2015 ) that a d-layer LISTA network with trainable parameters Θ = { W ( l ) , U ( l ) , γ ( l ) } dl=1 achieves the same performance as the original ISTA but with far fewer iterations ( i.e. , number of layers ) . Recent studies ( Chen et al. , 2018 ; Liu et al. , 2019 ) have found that exploiting dependencies between W ( l ) and U ( l ) leads to reducing the number of trainable parameters while retaining the performance of LISTA . These works provided theoretical insights into the convergence conditions of LISTA . However , the problem of designing deep unfolding methods for dealing with sequential signals is significantly less explored . In this work , we will consider a deep RNN for solving Problem ( 2 ) that outputs a sequence , ŝ1 , . . . , ŝT , from an input measurement sequence , x1 , . . . , xT , as follows : ht = φγ ( Wht−1 + Uxt ) , ŝt = Dht . ( 3 ) It has been shown that reweighted algorithms—such as the reweighted ` 1 minimization method by Candès et al . ( 2008 ) and the reweighted ` 1- ` 1 minimization by Luong et al . ( 2018 ) —outperform their non-reweighted counterparts . Driven by this observation , this paper proposes a novel deep RNN architecture by unfolding a reweighted- ` 1- ` 1 minimization algorithm . Due to the reweighting , our network has higher expressivity than existing RNN models , leading to better data representations , especially when depth increases . This is in line with recent studies ( He et al. , 2016 ; Cortes et al . ; Huang et al. , 2017 ) , which have shown that better performance can be achieved by highly overparameterized networks , i.e. , networks with far more parameters than the number of training samples . While the most recent studies ( related to over-parameterized DNNs ) consider fully-connected networks applied on classification problems ( Neyshabur et al. , 2019 ) , our approach focuses on deep-unfolding architectures and aims to understand how the networks learn a low-complexity representation for sequential signal reconstruction , which is a regression problem across time . Furthermore , while there have been efforts to build deep RNNs ( Pascanu et al. , 2014 ; Li et al. , 2018 ; Luo et al. , 2017 ; Wisdom et al. , 2017 ) , examining the generalization property of such deep RNN models on unseen sequential data still remains elusive . In this work , we derive the generalization error bound of the proposed design and further compare it with existing RNN bounds ( Zhang et al. , 2018 ; Kusupati et al. , 2018 ) . Contributions . The contributions of this work are as follows : • We propose a principled deep RNN model for sequential signal reconstruction by unfolding a reweighted ` 1- ` 1 minimization method . Our reweighted-RNN model employs different soft-thresholding functions that are adaptively learned per hidden unit . Furthermore , the proposed model is over-parameterized , has high expressivity and can be efficiently stacked . • We derive the generalization error bound of the proposed model ( and deep RNNs ) by measuring Rademacher complexity and show that the over-parameterization of our RNN ensures good generalization .
To the best of our knowledge , this is the first generalization error bound for deep RNNs ; moreover , our bound is tighter than existing bounds derived for shallow RNNs ( Zhang et al. , 2018 ; Kusupati et al. , 2018 ) . • We provide experiments in the task of reconstructing video sequences from low-dimensional measurements . We show significant gains when using our model compared to several state-of-the-art RNNs ( including unfolding architectures ) , especially when the depth of RNNs increases . 2 A DEEP RNN VIA UNFOLDING REWEIGHTED- ` 1- ` 1 MINIMIZATION . In this section , we describe a reweighted ` 1- ` 1 minimization problem for sequential signal reconstruction and propose an iterative algorithm based on the proximal method . We then design a deep RNN architecture by unfolding this algorithm . The proposed reweighted ` 1- ` 1 minimization . We introduce the following problem : min ht { ( 1/2 ) ‖xt − ADZht‖22 + λ1‖g ◦ Zht‖1 + λ2‖g ◦ ( Zht − Ght−1 ) ‖1 } , ( 4 ) where “ ◦ ” denotes element-wise multiplication , g ∈ Rh is a vector of positive weights , Z ∈ Rh×h is a reweighting matrix , and G ∈ Rh×h is an affine transformation that promotes the correlation between ht−1 and ht . Intuitively , Z is adopted to transform ht to Zht ∈ Rh , producing a reweighted version of it . Thereafter , g aims to reweight each transformed component of Zht and Zht − Ght−1 in the ` 1-norm regularization terms . Because of applying reweighting ( Candès et al. , 2008 ) , the solution of Problem ( 4 ) is a more accurate sparse representation compared to the solution of the ` 1- ` 1 minimization problem in Le et al . ( 2019 ) ( where Z = I and g = I ) . Furthermore , the use of the reweighting matrix Z to transform ht to Zht differentiates Problem ( 4 ) from the reweighted ` 1- ` 1 minimization problem in Luong et al . ( 2018 ) , where Z = I . The objective function in ( 4 ) consists of the differentiable fidelity term f ( Zht ) = ( 1/2 ) ‖xt − ADZht‖22 and the non-smooth term g ( Zht ) = λ1‖g ◦ Zht‖1 + λ2‖g ◦ ( Zht − Ght−1 ) ‖1 . We use a proximal gradient method ( Beck & Teboulle , 2009 ) to solve ( 4 ) : at iteration l , we first update h ( l−1 ) t —after being multiplied by Zl—with a gradient descent step on the fidelity term as u = Zl h ( l−1 ) t − ( 1/c ) Zl ∇f ( h ( l−1 ) t ) , where ∇f ( h ( l−1 ) t ) = DTAT ( ADh ( l−1 ) t − xt ) . Then , h ( l ) t is updated as h ( l ) t = Φ ( λ1/c ) gl , ( λ2/c ) gl , Ght−1 ( Zl h ( l−1 ) t − ( 1/c ) Zl ∇f ( h ( l−1 ) t ) ) , ( 5 ) where the proximal operator Φ ( λ1/c ) gl , ( λ2/c ) gl , Ght−1 ( u ) is defined as Φ ( λ1/c ) gl , ( λ2/c ) gl , ~ ( u ) = arg min v∈Rh { ( 1/c ) g ( v ) + ( 1/2 ) ‖v − u‖22 } , ( 6 ) with ~ = Ght−1 . Since the minimization problem is separable , we can minimize ( 6 ) independently for each of the elements gl , ~ , u of the corresponding gl , ~ , u vectors . After solving ( 6 ) , we obtain Φ ( λ1/c ) gl , ( λ2/c ) gl , ~ ( u ) [ for solving ( 6 ) , we refer to Proposition B.1 in Appendix B ] .
For ~ ≥ 0 : Φ ( λ1/c ) gl , ( λ2/c ) gl , ~≥0 ( u ) = u − ( λ1/c ) gl − ( λ2/c ) gl if ~ + ( λ1/c ) gl + ( λ2/c ) gl < u < ∞ ; = ~ if ~ + ( λ1/c ) gl − ( λ2/c ) gl ≤ u ≤ ~ + ( λ1/c ) gl + ( λ2/c ) gl ; = u − ( λ1/c ) gl + ( λ2/c ) gl if ( λ1/c ) gl − ( λ2/c ) gl < u < ~ + ( λ1/c ) gl − ( λ2/c ) gl ; = 0 if − ( λ1/c ) gl − ( λ2/c ) gl ≤ u ≤ ( λ1/c ) gl − ( λ2/c ) gl ; = u + ( λ1/c ) gl + ( λ2/c ) gl if −∞ < u < − ( λ1/c ) gl − ( λ2/c ) gl , ( 7 ) and for ~ < 0 : Φ ( λ1/c ) gl , ( λ2/c ) gl , ~ < 0 ( u ) = u − ( λ1/c ) gl − ( λ2/c ) gl if ( λ1/c ) gl + ( λ2/c ) gl < u < ∞ ; = 0 if − ( λ1/c ) gl + ( λ2/c ) gl ≤ u ≤ ( λ1/c ) gl + ( λ2/c ) gl ; = u + ( λ1/c ) gl − ( λ2/c ) gl if ~ − ( λ1/c ) gl + ( λ2/c ) gl < u < − ( λ1/c ) gl + ( λ2/c ) gl ; = ~ if ~ − ( λ1/c ) gl − ( λ2/c ) gl ≤ u ≤ ~ − ( λ1/c ) gl + ( λ2/c ) gl ; = u + ( λ1/c ) gl + ( λ2/c ) gl if −∞ < u < ~ − ( λ1/c ) gl − ( λ2/c ) gl . ( 8 ) Algorithm 1 : The proposed algorithm for sequential signal reconstruction . 1 : Input : Measurements x1 , . . . , xT , measurement matrix A , dictionary D , affine transform G , initial h ( d ) 0 ≡ h0 , reweighting matrices Z1 , . . . , Zd and vectors g1 , . . . , gd , c , λ1 , λ2 . 2 : Output : Sequence of sparse codes h1 , . . . , hT . 3 : for t = 1 , . . . , T do 4 : h ( 0 ) t = Gh ( d ) t−1 5 : for l = 1 to d do 6 : u = [ Zl − ( 1/c ) Zl DTATAD ] h ( l−1 ) t + ( 1/c ) Zl DTAT xt 7 : h ( l ) t = Φ ( λ1/c ) gl , ( λ2/c ) gl , Gh ( d ) t−1 ( u ) 8 : end 9 : end 10 : return h ( d ) 1 , . . . , h ( d ) T . Fig . 1 depicts the proximal operators for ~ ≥ 0 and ~ < 0 . Observe that different values of gl lead to different shapes of the proximal functions Φ ( λ1/c ) gl , ( λ2/c ) gl , ~ ( u ) for each element u of u . Our iterative algorithm is given in Algorithm 1 . We reconstruct a sequence h1 , . . . , hT from a sequence of measurements x1 , . . . , xT . For each time step t , Step 6 applies a gradient descent update for f ( Zht−1 ) and Step 7 applies the proximal operator Φ ( λ1/c ) gl , ( λ2/c ) gl , Gh ( d ) t−1 element-wise to the result . Let us compare the proposed method against the algorithm in Le et al . ( 2019 ) —which resulted in the ` 1- ` 1-RNN—that solves the ` 1- ` 1 minimization in Mota et al . ( 2016 ) ( where Zl = I and gl = I ) . In that algorithm , the update terms in Step 6 , namely I − ( 1/c ) DTATAD and ( 1/c ) DTAT , and the proximal operator in Step 7 are the same for all iterations of l. In contrast , Algorithm 1 uses a different Zl matrix per iteration to reparameterize the update terms ( Step 6 ) and , through updating gl , it applies a different proximal operator to each element u ( in Step 7 ) per iteration l. The proposed reweighted-RNN . We now describe the proposed architecture for sequential signal recovery , designed by unrolling the steps of Algorithm 1 across the iterations l = 1 , . . . , d ( yielding the hidden layers ) and time steps t = 1 , . . . , T . Specifically , the l-th hidden layer is given by h ( l ) t = Φ ( λ1/c ) g1 , ( λ2/c ) g1 , Gh ( d ) t−1 ( W1 h ( d ) t−1 + U1 xt ) if l = 1 , and h ( l ) t = Φ ( λ1/c ) gl , ( λ2/c ) gl , Gh ( d ) t−1 ( Wl h ( l−1 ) t + Ul xt ) if l > 1 , ( 9 ) and the reconstructed signal at time step t is given by ŝt = Dh ( d ) t ; where Ul and Wl are defined as Ul = ( 1/c ) Zl DTAT , ∀l , ( 10 ) W1 = Z1G − ( 1/c ) Z1 DTATADG , ( 11 ) Wl = Zl − ( 1/c ) Zl DTATAD , l > 1 . ( 12 ) The activation function is the proximal operator Φ ( λ1/c ) gl , ( λ2/c ) gl , ~ ( u ) with learnable parameters λ1 , λ2 , c , gl ( see Fig . 1 for the shapes of the activation functions ) .
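The element-wise proximal operator and one time step of Algorithm 1 can be sketched in a few lines of numpy . This is our own illustration : instead of branching through the cases of Eqs . ( 7 ) – ( 8 ) , the scalar minimizer of the separable objective behind Eq . ( 6 ) is found by evaluating a small closed-form candidate set ( kinks and per-region stationary points ) , which computes the same operator ; all names and sizes are placeholders .

```python
import numpy as np

def prox_l1_l1(u, hbar, g1, g2):
    """Element-wise minimizer of g1*|v| + g2*|v - hbar| + 0.5*(v - u)^2.

    g1 = (lam1/c)*g_l and g2 = (lam2/c)*g_l; all arguments are arrays of one shape.
    The minimizer is a kink (0 or hbar) or a stationary point of one linear region,
    so we evaluate every candidate and keep the best per coordinate.
    """
    cands = np.stack([np.zeros_like(u), hbar,
                      u - g1 - g2, u - g1 + g2, u + g1 - g2, u + g1 + g2])
    obj = g1 * np.abs(cands) + g2 * np.abs(cands - hbar) + 0.5 * (cands - u) ** 2
    return np.take_along_axis(cands, obj.argmin(0)[None], axis=0)[0]

def reweighted_l1_l1_step(h_prev_top, x, A, D, G, Zs, gs, c, lam1, lam2):
    """One time step of Algorithm 1 (steps 4-8) for given per-layer Zl and gl."""
    B = A @ D
    hbar = G @ h_prev_top                   # G h_{t-1}^{(d)}
    h = hbar.copy()                         # step 4: h_t^{(0)} = G h_{t-1}^{(d)}
    for Z, g in zip(Zs, gs):                # steps 5-8, l = 1..d
        u = Z @ h - (1.0 / c) * Z @ (B.T @ (B @ h - x))          # step 6
        h = prox_l1_l1(u, hbar, (lam1 / c) * g, (lam2 / c) * g)  # step 7
    return h                                # h_t^{(d)}
```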
Figure 2 : The proposed ( a ) reweighted-RNN vs. ( b ) ` 1- ` 1-RNN and ( c ) Stacked RNN with d layers . Fig . 2 ( a ) depicts the architecture of the proposed reweighted-RNN . Input vectors st , t = 1 , . . . , T are compressed by a linear measurement layer A , resulting in compressive measurements xt . The reconstructed vectors ŝt , t = 1 , . . . , T , are obtained by multiplying linearly the hidden representation h ( d ) t with the dictionary D. We train our network in an end-to-end fashion . During training , we minimize the loss function L ( Θ ) = E s1 , ··· , sT [ ∑T t=1 ‖st − ŝt‖22 ] using stochastic gradient descent on mini-batches , where the trainable parameters are Θ = { A , D , G , h0 , Z1 , . . . , Zd , g1 , . . . , gd , c , λ1 , λ2 } . We now compare the proposed reweighted-RNN [ Fig . 2 ( a ) ] against the recent ` 1- ` 1-RNN ( Le et al. , 2019 ) [ Fig . 2 ( b ) ] . The l-th hidden layer in ` 1- ` 1-RNN is given by h ( l ) t = Φλ1/c , λ2/c , Gh ( d ) t−1 ( W1 h ( d ) t−1 + U1 xt ) if l = 1 , and h ( l ) t = Φλ1/c , λ2/c , Gh ( d ) t−1 ( W2 h ( l−1 ) t + U1 xt ) if l > 1 . ( 13 ) The proposed model has the following advantages over ` 1- ` 1-RNN . Firstly , ` 1- ` 1-RNN uses the proximal operator Φλ1/c , λ2/c , ~ ( u ) as activation function , whose learnable parameters λ1 , λ2 are fixed across the network . Conversely , the corresponding parameters ( λ1/c ) gl and ( λ2/c ) gl [ see ( 7 ) , ( 8 ) , and Fig . 1 ] in our proximal operator , Φ ( λ1/c ) gl , ( λ2/c ) gl , ~ ( u ) , are learned for each hidden layer due to the reweighting vector gl ; hence , the proposed model has a different activation function for each unit per layer .
The second difference comes from the set of parameters { Wl , Ul } in ( 13 ) and ( 9 ) . The ` 1- ` 1-RNN model uses the same { W2 , U1 } for the second and higher layers . In contrast , our reweighted-RNN has different sets of { Wl , Ul } per hidden layer due to the reweighting matrix Zl . These two aspects [ which are schematically highlighted in blue fonts in Fig . 2 ( a ) ] can lead to an increase in the learning capability of the proposed reweighted-RNN , especially when the depth of the model increases . In comparison to a generic stacked RNN ( Pascanu et al. , 2014 ) [ Fig . 2 ( c ) ] , reweighted-RNN promotes the inherent data structure , that is , each vector st has a sparse representation ht and consecutive ht ’ s are correlated . This design characteristic of the reweighted-RNN leads to residual connections which reduce the risk of vanishing gradients during training [ the same idea has been shown in several works ( He et al. , 2016 ; Huang et al. , 2017 ) in deep neural network literature ] . Furthermore , in ( 10 ) and ( 12 ) , we see a weight coupling of Wl and Ul ( due to the shared components of A , D and Z ) . This coupling satisfies the necessary condition of the convergence in Chen et al . ( 2018 ) ( Theorem 1 ) . Using Theorem 2 in Chen et al . ( 2018 ) , it can be shown that reweighted-RNN , in theory , needs a smaller number of iterations ( i.e. , d in Algorithm 1 ) to reach convergence , compared to ISTA ( Daubechies et al. , 2004 ) and FISTA ( Beck & Teboulle , 2009 ) .
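To make the parameter sharing of Eqs . ( 9 ) – ( 12 ) and the weight coupling discussed above concrete , here is a schematic PyTorch sketch of the unrolled network . It is our own illustration : a plain soft-thresholding activation stands in for the proximal operator of Eqs . ( 7 ) – ( 8 ) , initializations are untrained placeholders , and vectors are unbatched for brevity .

```python
import torch
import torch.nn as nn

class ReweightedRNNSketch(nn.Module):
    """Schematic d-layer unrolling of Eqs. (9)-(12); simplified activation."""
    def __init__(self, n, n0, h_dim, d, c=1.0):
        super().__init__()
        self.A = nn.Parameter(torch.randn(n, n0) / n ** 0.5)    # measurement matrix
        self.D = nn.Parameter(torch.randn(n0, h_dim) / n0 ** 0.5)  # dictionary
        self.G = nn.Parameter(torch.eye(h_dim))                 # correlation map
        self.Z = nn.ParameterList([nn.Parameter(torch.eye(h_dim)) for _ in range(d)])
        self.g = nn.ParameterList([nn.Parameter(torch.ones(h_dim)) for _ in range(d)])
        self.lam1 = nn.Parameter(torch.tensor(0.1))
        self.c, self.d = c, d

    def layer_weights(self, l):
        B = self.A @ self.D                                     # A D
        U = (1.0 / self.c) * self.Z[l] @ B.T                    # Eq. (10)
        if l == 0:                                              # Eq. (11)
            W = self.Z[l] @ self.G - (1.0 / self.c) * self.Z[l] @ B.T @ B @ self.G
        else:                                                   # Eq. (12)
            W = self.Z[l] - (1.0 / self.c) * self.Z[l] @ B.T @ B
        return W, U

    def forward(self, x_seq, h0):
        h_prev, outputs = h0, []
        for x in x_seq:                                         # time steps t = 1..T
            h = h_prev
            for l in range(self.d):                             # hidden layers l = 1..d
                W, U = self.layer_weights(l)
                pre = W @ (h_prev if l == 0 else h) + U @ x
                # soft-thresholding stand-in for the per-unit proximal activation
                h = torch.sign(pre) * torch.relu(pre.abs() - self.lam1 * self.g[l] / self.c)
            outputs.append(self.D @ h)                          # ŝ_t = D h_t^{(d)}
            h_prev = h
        return torch.stack(outputs)

# usage: net = ReweightedRNNSketch(n=40, n0=100, h_dim=120, d=3)
#        s_hat = net([torch.randn(40) for _ in range(5)], torch.zeros(120))
```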
This paper proposes a new reweighted-RNN by unfolding a reweighted L1-L1 minimization problem. It develops an iterative algorithm to solve the reweighted L1-L1 minimization problem, where the soft-thresholding functions can be adaptively learned. This paper provides the generalization error bound for deep RNNs and shows that the proposed reweighted-RNN has a lower generalization error bound. In addition, the paper shows that the proposed algorithm can be applied to video-frame reconstruction and achieves favorable results against state-of-the-art methods. The paper is well organized, and the motivation is clear.
SP:4605e601a717bc05833778d0916a393ffdf8c331
A Deep Recurrent Neural Network via Unfolding Reweighted l1-l1 Minimization
1 INTRODUCTION . The problem of reconstructing sequential signals from low-dimensional measurements across time is of great importance for a number of applications such as time-series data analysis , future-frame prediction , and compressive video sensing . Specifically , we consider the problem of reconstructing a sequence of signals st ∈ Rn0 , t = 1 , 2 , . . . , T , from low-dimensional measurements xt = Ast , where A ∈ Rn×n0 ( n n0 ) is a sensing matrix . We assume that st has a sparse representation ht ∈ Rh in a dictionary D ∈ Rn0×h , that is , st = Dht . At each time step t , the signal st can be independently reconstructed using the measurements xt by solving ( Donoho , 2006 ) : min ht { 1 2 ‖xt −ADht‖22 + λ‖ht‖1 } , ( 1 ) where ‖ · ‖p is the ` p-norm and λ is a regularization parameter . The iterative shrinkage-thresholding algorithm ( ISTA ) ( Daubechies et al. , 2004 ) solves ( 1 ) by iterating over h ( l ) t = φλ c ( h ( l−1 ) t − 1 cD TAT ( ADh ( l−1 ) t − xt ) ) , where l is the iteration counter , φγ ( u ) = sign ( u ) [ 0 , |u| − γ ] + is the soft-thresholding operator , γ = λc , and c is an upper bound on the Lipschitz constant of the gradient of 12‖xt −ADht‖ 2 2 . Under the assumption that sequential signal instances are correlated , we consider the following sequential signal reconstruction problem : min ht { 1 2 ‖xt −ADht‖22 + λ1‖ht‖1 + λ2R ( ht , ht−1 ) } , ( 2 ) where λ1 , λ2 > 0 are regularization parameters and R ( ht , ht−1 ) is an added regularization term that expresses the similarity of the representations ht and ht−1 of two consecutive signals . Wisdom et al . ( 2017 ) proposed an RNN design ( coined Sista-RNN ) by unfolding the sequential version of ISTA . That study assumed that two consecutive signals are close in the ` 2-norm sense , formally , R ( ht , ht−1 ) = 1 2‖Dht − FDht−1‖ 2 2 , where F ∈ Rn0×n0 is a correlation matrix between st and st−1 . More recently , the study by Le et al . ( 2019 ) designed the ` 1- ` 1-RNN , which stems from unfolding an algorithm that solves the ` 1- ` 1 minimization problem ( Mota et al. , 2017 ; 2016 ) . This is a version of Problem ( 2 ) with R ( ht , ht−1 ) = ‖ht −Ght−1‖1 , where G ∈ Rh×h is an affine transformation that promotes the correlation between ht and ht−1 . Both studies ( Wisdom et al. , 2017 ; Le et al. , 2019 ) have shown that carefully-designed deep RNN models outperform the generic RNN model and ISTA ( Daubechies et al. , 2004 ) in the task of sequential frame reconstruction . Deep neural networks ( DNN ) have achieved state-of-the-art performance in solving ( 1 ) for individual signals , both in terms of accuracy and inference speed ( Mousavi et al. , 2015 ) . However , these models are often criticized for their lack of interpretability and theoretical guarantees ( Lucas et al. , 2018 ) . Motivated by this , several studies focus on designing DNNs that incorporate domain knowledge , namely , signal priors . These include deep unfolding methods which design neural networks to learn approximations of iterative optimization algorithms . Examples of this approach are LISTA ( Gregor & LeCun , 2010 ) and its variants , including ADMM-Net ( Sun et al. , 2016 ) , AMP ( Borgerding et al. , 2017 ) , and an unfolded version of the iterative hard thresholding algorithm ( Xin et al. , 2016 ) . 
LISTA (Gregor & LeCun, 2010) unrolls the iterations of ISTA into a feed-forward neural network with learnable weights, where each layer implements one iteration: $h_t^{(l)} = \phi_{\gamma^{(l)}} ( W^{(l)} h_t^{(l-1)} + U^{(l)} x_t )$, with $W^{(l)} = I - \frac{1}{c} D^T A^T A D$, $U^{(l)} = \frac{1}{c} D^T A^T$, and $\gamma^{(l)}$ learned from data. It has been shown (Gregor & LeCun, 2010; Sprechmann et al., 2015) that a $d$-layer LISTA network with trainable parameters $\Theta = \{ W^{(l)}, U^{(l)}, \gamma^{(l)} \}_{l=1}^{d}$ achieves the same performance as the original ISTA but with far fewer iterations (i.e., layers). Recent studies (Chen et al., 2018; Liu et al., 2019) have found that exploiting dependencies between $W^{(l)}$ and $U^{(l)}$ reduces the number of trainable parameters while retaining the performance of LISTA. These works provided theoretical insights into the convergence conditions of LISTA. However, the problem of designing deep unfolding methods for sequential signals is significantly less explored. In this work, we consider a deep RNN for solving Problem (2) that outputs a sequence $\hat{s}_1, \dots, \hat{s}_T$ from an input measurement sequence $x_1, \dots, x_T$, as follows: $h_t = \phi_\gamma ( W h_{t-1} + U x_t )$, $\hat{s}_t = D h_t$. (3) It has been shown that reweighted algorithms, such as the reweighted $\ell_1$ minimization method by Candès et al. (2008) and the reweighted $\ell_1$-$\ell_1$ minimization by Luong et al. (2018), outperform their non-reweighted counterparts. Driven by this observation, this paper proposes a novel deep RNN architecture by unfolding a reweighted $\ell_1$-$\ell_1$ minimization algorithm. Due to the reweighting, our network has higher expressivity than existing RNN models, leading to better data representations, especially as the depth increases. This is in line with recent studies (He et al., 2016; Cortes et al.; Huang et al., 2017), which have shown that better performance can be achieved by highly over-parameterized networks, i.e., networks with far more parameters than training samples. While the most recent studies of over-parameterized DNNs consider fully-connected networks applied to classification problems (Neyshabur et al., 2019), our approach focuses on deep-unfolding architectures and aims to understand how such networks learn a low-complexity representation for sequential signal reconstruction, which is a regression problem across time. Furthermore, while there have been efforts to build deep RNNs (Pascanu et al., 2014; Li et al., 2018; Luo et al., 2017; Wisdom et al., 2017), examining the generalization properties of such deep RNN models on unseen sequential data remains elusive. In this work, we derive the generalization error bound of the proposed design and compare it with existing RNN bounds (Zhang et al., 2018; Kusupati et al., 2018). Contributions. The contributions of this work are as follows: • We propose a principled deep RNN model for sequential signal reconstruction by unfolding a reweighted $\ell_1$-$\ell_1$ minimization method. Our reweighted-RNN model employs different soft-thresholding functions that are adaptively learned per hidden unit. Furthermore, the proposed model is over-parameterized, has high expressivity, and can be efficiently stacked. • We derive the generalization error bound of the proposed model (and of deep RNNs) by measuring Rademacher complexity and show that the over-parameterization of our RNN ensures good generalization.
To the best of our knowledge, this is the first generalization error bound for deep RNNs; moreover, our bound is tighter than existing bounds derived for shallow RNNs (Zhang et al., 2018; Kusupati et al., 2018). • We provide experiments on the task of reconstructing video sequences from low-dimensional measurements. We show significant gains when using our model compared to several state-of-the-art RNNs (including unfolding architectures), especially when the depth of the RNNs increases. 2 A DEEP RNN VIA UNFOLDING REWEIGHTED-$\ell_1$-$\ell_1$ MINIMIZATION. In this section, we describe a reweighted $\ell_1$-$\ell_1$ minimization problem for sequential signal reconstruction and propose an iterative algorithm based on the proximal method. We then design a deep RNN architecture by unfolding this algorithm. The proposed reweighted $\ell_1$-$\ell_1$ minimization. We introduce the following problem: $\min_{h_t} \{ \frac{1}{2} \| x_t - A D Z h_t \|_2^2 + \lambda_1 \| g \circ Z h_t \|_1 + \lambda_2 \| g \circ ( Z h_t - G h_{t-1} ) \|_1 \}$, (4) where "$\circ$" denotes element-wise multiplication, $g \in \mathbb{R}^{h}$ is a vector of positive weights, $Z \in \mathbb{R}^{h \times h}$ is a reweighting matrix, and $G \in \mathbb{R}^{h \times h}$ is an affine transformation that promotes the correlation between $h_{t-1}$ and $h_t$. Intuitively, $Z$ is adopted to transform $h_t$ to $Z h_t \in \mathbb{R}^{h}$, producing a reweighted version of it. Thereafter, $g$ aims to reweight each transformed component of $Z h_t$ and $Z h_t - G h_{t-1}$ in the $\ell_1$-norm regularization terms. Because of applying reweighting (Candès et al., 2008), the solution of Problem (4) is a more accurate sparse representation compared to the solution of the $\ell_1$-$\ell_1$ minimization problem in Le et al. (2019) (where $Z = I$ and $g = I$). Furthermore, the use of the reweighting matrix $Z$ to transform $h_t$ to $Z h_t$ differentiates Problem (4) from the reweighted $\ell_1$-$\ell_1$ minimization problem in Luong et al. (2018), where $Z = I$. The objective function in (4) consists of the differentiable fidelity term $f(Z h_t) = \frac{1}{2} \| x_t - A D Z h_t \|_2^2$ and the non-smooth term $g(Z h_t) = \lambda_1 \| g \circ Z h_t \|_1 + \lambda_2 \| g \circ ( Z h_t - G h_{t-1} ) \|_1$. We use a proximal gradient method (Beck & Teboulle, 2009) to solve (4): at iteration $l$, we first update $h_t^{(l-1)}$, after multiplying it by $Z_l$, with a gradient descent step on the fidelity term as $u = Z_l h_t^{(l-1)} - \frac{1}{c} Z_l \nabla f ( h_t^{(l-1)} )$, where $\nabla f ( h_t^{(l-1)} ) = D^T A^T ( A D h_t^{(l-1)} - x_t )$. Then, $h_t^{(l)}$ is updated as $h_t^{(l)} = \Phi_{\frac{\lambda_1}{c} g_l, \frac{\lambda_2}{c} g_l, G h_{t-1}} \big( Z_l h_t^{(l-1)} - \frac{1}{c} Z_l \nabla f ( h_t^{(l-1)} ) \big)$, (5) where the proximal operator $\Phi_{\frac{\lambda_1}{c} g_l, \frac{\lambda_2}{c} g_l, G h_{t-1}}(u)$ is defined as $\Phi_{\frac{\lambda_1}{c} g_l, \frac{\lambda_2}{c} g_l, \hbar}(u) = \arg\min_{v \in \mathbb{R}^{h}} \{ \frac{1}{c} g(v) + \frac{1}{2} \| v - u \|_2^2 \}$, (6) with $\hbar = G h_{t-1}$. Since the minimization problem is separable, we can minimize (6) independently for each element of the corresponding vectors $g_l$, $\hbar$, and $u$. After solving (6), we obtain $\Phi_{\frac{\lambda_1}{c} g_l, \frac{\lambda_2}{c} g_l, \hbar}(u)$ in closed form [for the derivation, we refer to Proposition B.1 in Appendix B].
For $\hbar \geq 0$:
$$\Phi_{\frac{\lambda_1}{c} g_l, \frac{\lambda_2}{c} g_l, \hbar \geq 0}(u) = \begin{cases} u - \frac{\lambda_1}{c} g_l - \frac{\lambda_2}{c} g_l, & \hbar + \frac{\lambda_1}{c} g_l + \frac{\lambda_2}{c} g_l < u < \infty \\ \hbar, & \hbar + \frac{\lambda_1}{c} g_l - \frac{\lambda_2}{c} g_l \leq u \leq \hbar + \frac{\lambda_1}{c} g_l + \frac{\lambda_2}{c} g_l \\ u - \frac{\lambda_1}{c} g_l + \frac{\lambda_2}{c} g_l, & \frac{\lambda_1}{c} g_l - \frac{\lambda_2}{c} g_l < u < \hbar + \frac{\lambda_1}{c} g_l - \frac{\lambda_2}{c} g_l \\ 0, & -\frac{\lambda_1}{c} g_l - \frac{\lambda_2}{c} g_l \leq u \leq \frac{\lambda_1}{c} g_l - \frac{\lambda_2}{c} g_l \\ u + \frac{\lambda_1}{c} g_l + \frac{\lambda_2}{c} g_l, & -\infty < u < -\frac{\lambda_1}{c} g_l - \frac{\lambda_2}{c} g_l \end{cases} \quad (7)$$
and for $\hbar < 0$:
$$\Phi_{\frac{\lambda_1}{c} g_l, \frac{\lambda_2}{c} g_l, \hbar < 0}(u) = \begin{cases} u - \frac{\lambda_1}{c} g_l - \frac{\lambda_2}{c} g_l, & \frac{\lambda_1}{c} g_l + \frac{\lambda_2}{c} g_l < u < \infty \\ 0, & -\frac{\lambda_1}{c} g_l + \frac{\lambda_2}{c} g_l \leq u \leq \frac{\lambda_1}{c} g_l + \frac{\lambda_2}{c} g_l \\ u + \frac{\lambda_1}{c} g_l - \frac{\lambda_2}{c} g_l, & \hbar - \frac{\lambda_1}{c} g_l + \frac{\lambda_2}{c} g_l < u < -\frac{\lambda_1}{c} g_l + \frac{\lambda_2}{c} g_l \\ \hbar, & \hbar - \frac{\lambda_1}{c} g_l - \frac{\lambda_2}{c} g_l \leq u \leq \hbar - \frac{\lambda_1}{c} g_l + \frac{\lambda_2}{c} g_l \\ u + \frac{\lambda_1}{c} g_l + \frac{\lambda_2}{c} g_l, & -\infty < u < \hbar - \frac{\lambda_1}{c} g_l - \frac{\lambda_2}{c} g_l \end{cases} \quad (8)$$
Fig. 1 depicts the proximal operators for $\hbar \geq 0$ and $\hbar < 0$. Observe that different values of $g_l$ lead to different shapes of the proximal functions $\Phi_{\frac{\lambda_1}{c} g_l, \frac{\lambda_2}{c} g_l, \hbar}(u)$ for each element $u$ of $u$. Our iterative algorithm is given in Algorithm 1. We reconstruct a sequence $h_1, \dots, h_T$ from a sequence of measurements $x_1, \dots, x_T$.
Algorithm 1: The proposed algorithm for sequential signal reconstruction.
1. Input: measurements $x_1, \dots, x_T$, measurement matrix $A$, dictionary $D$, affine transform $G$, initial $h_0^{(d)} \equiv h_0$, reweighting matrices $Z_1, \dots, Z_d$ and vectors $g_1, \dots, g_d$, and $c, \lambda_1, \lambda_2$.
2. Output: sequence of sparse codes $h_1, \dots, h_T$.
3. for $t = 1, \dots, T$ do
4.   $h_t^{(0)} = G h_{t-1}^{(d)}$
5.   for $l = 1$ to $d$ do
6.     $u = [ Z_l - \frac{1}{c} Z_l D^T A^T A D ] h_t^{(l-1)} + \frac{1}{c} Z_l D^T A^T x_t$
7.     $h_t^{(l)} = \Phi_{\frac{\lambda_1}{c} g_l, \frac{\lambda_2}{c} g_l, G h_{t-1}^{(d)}}(u)$
8.   end
9. end
10. return $h_1^{(d)}, \dots, h_T^{(d)}$
For each time step $t$, Step 6 applies a gradient descent update on the fidelity term and Step 7 applies the proximal operator $\Phi_{\frac{\lambda_1}{c} g_l, \frac{\lambda_2}{c} g_l, G h_{t-1}^{(d)}}$ element-wise to the result. Let us compare the proposed method against the algorithm in Le et al. (2019), which resulted in the $\ell_1$-$\ell_1$-RNN and solves the $\ell_1$-$\ell_1$ minimization in Mota et al. (2016) (where $Z_l = I$ and $g_l = I$). In that algorithm, the update terms in Step 6, namely $I - \frac{1}{c} D^T A^T A D$ and $\frac{1}{c} D^T A^T$, and the proximal operator in Step 7 are the same for all iterations $l$. In contrast, Algorithm 1 uses a different $Z_l$ matrix per iteration to reparameterize the update terms (Step 6) and, through updating $g_l$, it applies a different proximal operator to each element $u$ (in Step 7) per iteration $l$. The proposed reweighted-RNN. We now describe the proposed architecture for sequential signal recovery, designed by unrolling the steps of Algorithm 1 across the iterations $l = 1, \dots, d$ (yielding the hidden layers) and time steps $t = 1, \dots, T$. Specifically, the $l$-th hidden layer is given by
$$h_t^{(l)} = \begin{cases} \Phi_{\frac{\lambda_1}{c} g_1, \frac{\lambda_2}{c} g_1, G h_{t-1}^{(d)}} ( W_1 h_{t-1}^{(d)} + U_1 x_t ), & \text{if } l = 1, \\ \Phi_{\frac{\lambda_1}{c} g_l, \frac{\lambda_2}{c} g_l, G h_{t-1}^{(d)}} ( W_l h_t^{(l-1)} + U_l x_t ), & \text{if } l > 1, \end{cases} \quad (9)$$
and the reconstructed signal at time step $t$ is given by $\hat{s}_t = D h_t^{(d)}$, where $U_l$ and $W_l$ are defined as $U_l = \frac{1}{c} Z_l D^T A^T, \; \forall l$, (10) $W_1 = Z_1 G - \frac{1}{c} Z_1 D^T A^T A D G$, (11) $W_l = Z_l - \frac{1}{c} Z_l D^T A^T A D, \; l > 1$. (12) The activation function is the proximal operator $\Phi_{\frac{\lambda_1}{c} g_l, \frac{\lambda_2}{c} g_l, \hbar}(u)$ with learnable parameters $\lambda_1, \lambda_2, c, g_l$ (see Fig. 1 for the shapes of the activation functions).
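The element-wise proximal operator of Eqs. (7)-(8) and the loop of Algorithm 1 translate fairly directly into code. The NumPy sketch below is one possible implementation, assuming non-negative per-element thresholds (positive weights $g_l$); it is illustrative and not the authors' reference implementation.

```python
import numpy as np

def prox_reweighted_l1_l1(u, a, b, hbar):
    """Element-wise proximal operator of Eqs. (7)-(8), with
    a = (lambda1/c)*g_l and b = (lambda2/c)*g_l (arrays broadcastable to u)."""
    pos = hbar >= 0
    # case hbar >= 0 (Eq. (7)); entries not matched yet default to 0
    out = np.where(pos & (u > hbar + a + b), u - a - b, 0.0)
    out = np.where(pos & (u >= hbar + a - b) & (u <= hbar + a + b), hbar, out)
    out = np.where(pos & (u > a - b) & (u < hbar + a - b), u - a + b, out)
    out = np.where(pos & (u < -a - b), u + a + b, out)
    # case hbar < 0 (Eq. (8))
    neg = ~pos
    out = np.where(neg & (u > a + b), u - a - b, out)
    out = np.where(neg & (u > hbar - a + b) & (u < -a + b), u + a - b, out)
    out = np.where(neg & (u >= hbar - a - b) & (u <= hbar - a + b), hbar, out)
    out = np.where(neg & (u < hbar - a - b), u + a + b, out)
    return out

def reconstruct_sequence(xs, A, D, G, Zs, gs, h0, lam1, lam2, c):
    """A sketch of Algorithm 1: proximal-gradient reconstruction of h_1..h_T."""
    AD = A @ D
    h_prev, out = h0, []
    for x in xs:                       # time steps t = 1..T
        hbar = G @ h_prev              # anchor G h_{t-1}^(d), fixed across layers
        h = hbar                       # warm start: h_t^(0) = G h_{t-1}^(d)
        for Z, g in zip(Zs, gs):       # iterations / layers l = 1..d
            u = Z @ h - (Z @ (AD.T @ (AD @ h - x))) / c        # Step 6
            h = prox_reweighted_l1_l1(u, lam1 * g / c, lam2 * g / c, hbar)  # Step 7
        out.append(h)
        h_prev = h
    return out                         # reconstructions are then D @ h per step
```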
[Figure 2: The proposed (a) reweighted-RNN vs. (b) $\ell_1$-$\ell_1$-RNN and (c) stacked RNN, each with $d$ layers.] Fig. 2(a) depicts the architecture of the proposed reweighted-RNN. Input vectors $s_t$, $t = 1, \dots, T$, are compressed by a linear measurement layer $A$, resulting in compressive measurements $x_t$. The reconstructed vectors $\hat{s}_t$, $t = 1, \dots, T$, are obtained by linearly multiplying the hidden representation $h_t^{(d)}$ with the dictionary $D$. We train our network in an end-to-end fashion. During training, we minimize the loss function $L(\Theta) = \mathbb{E}_{s_1, \dots, s_T} [ \sum_{t=1}^{T} \| s_t - \hat{s}_t \|_2^2 ]$ using stochastic gradient descent on mini-batches, where the trainable parameters are $\Theta = \{ A, D, G, h_0, Z_1, \dots, Z_d, g_1, \dots, g_d, c, \lambda_1, \lambda_2 \}$. We now compare the proposed reweighted-RNN [Fig. 2(a)] against the recent $\ell_1$-$\ell_1$-RNN (Le et al., 2019) [Fig. 2(b)]. The $l$-th hidden layer in $\ell_1$-$\ell_1$-RNN is given by
$$h_t^{(l)} = \begin{cases} \Phi_{\frac{\lambda_1}{c}, \frac{\lambda_2}{c}, G h_{t-1}^{(d)}} ( W_1 h_{t-1}^{(d)} + U_1 x_t ), & \text{if } l = 1, \\ \Phi_{\frac{\lambda_1}{c}, \frac{\lambda_2}{c}, G h_{t-1}^{(d)}} ( W_2 h_t^{(l-1)} + U_1 x_t ), & \text{if } l > 1. \end{cases} \quad (13)$$
The proposed model has the following advantages over $\ell_1$-$\ell_1$-RNN. Firstly, $\ell_1$-$\ell_1$-RNN uses the proximal operator $\Phi_{\frac{\lambda_1}{c}, \frac{\lambda_2}{c}, \hbar}(u)$ as its activation function, whose learnable parameters $\lambda_1, \lambda_2$ are fixed across the network. Conversely, the corresponding parameters $\frac{\lambda_1}{c} g_l$ and $\frac{\lambda_2}{c} g_l$ [see (7), (8), and Fig. 1] in our proximal operator $\Phi_{\frac{\lambda_1}{c} g_l, \frac{\lambda_2}{c} g_l, \hbar}(u)$ are learned for each hidden layer via the reweighting vector $g_l$; hence, the proposed model has a different activation function for each unit per layer.
The second difference comes from the sets of parameters $\{ W_l, U_l \}$ in (13) and (9). The $\ell_1$-$\ell_1$-RNN model uses the same $\{ W_2, U_1 \}$ for the second and higher layers. In contrast, our reweighted-RNN has a different set $\{ W_l, U_l \}$ per hidden layer due to the reweighting matrix $Z_l$. These two aspects [which are schematically highlighted in blue in Fig. 2(a)] can lead to an increase in the learning capability of the proposed reweighted-RNN, especially when the depth of the model increases. In comparison to a generic stacked RNN (Pascanu et al., 2014) [Fig. 2(c)], reweighted-RNN promotes the inherent data structure, that is, each vector $s_t$ has a sparse representation $h_t$ and consecutive $h_t$'s are correlated. This design characteristic of the reweighted-RNN leads to residual connections which reduce the risk of vanishing gradients during training [the same idea has been shown in several works (He et al., 2016; Huang et al., 2017) in the deep neural network literature]. Furthermore, in (10) and (12), we see a weight coupling of $W_l$ and $U_l$ (due to the shared components $A$, $D$ and $Z_l$). This coupling satisfies the necessary condition for convergence in Chen et al. (2018) (Theorem 1). Using Theorem 2 in Chen et al. (2018), it can be shown that reweighted-RNN, in theory, needs a smaller number of iterations (i.e., $d$ in Algorithm 1) to reach convergence, compared to ISTA (Daubechies et al., 2004) and FISTA (Beck & Teboulle, 2009).
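As a rough illustration of the weight coupling in Eqs. (10)-(12) and of the layer update in Eq. (9), the sketch below builds the per-layer matrices from the shared $A$, $D$, $G$ and $Z_l$, and runs one time step of the unrolled network. It reuses the `prox_reweighted_l1_l1` helper from the earlier sketch and is a schematic of the forward pass only, not the authors' training code.

```python
import numpy as np
# prox_reweighted_l1_l1 as defined in the earlier sketch of Eqs. (7)-(8)

def build_coupled_weights(A, D, G, Zs, c):
    """Weight coupling of Eqs. (10)-(12): every U_l and W_l shares A, D and Z_l."""
    AD = A @ D
    Us, Ws = [], []
    for l, Z in enumerate(Zs):
        U = (Z @ D.T @ A.T) / c                          # Eq. (10)
        if l == 0:
            W = Z @ G - (Z @ D.T @ A.T @ AD @ G) / c     # Eq. (11)
        else:
            W = Z - (Z @ D.T @ A.T @ AD) / c             # Eq. (12)
        Us.append(U)
        Ws.append(W)
    return Ws, Us

def reweighted_rnn_step(x_t, h_prev, Ws, Us, gs, G, lam1, lam2, c):
    """One time step of the unrolled reweighted-RNN (Eq. (9)); returns h_t^(d)."""
    hbar = G @ h_prev            # anchor G h_{t-1}^(d) used by every layer's activation
    h = h_prev                   # layer 1 consumes h_{t-1}^(d); W_1 already contains G
    for W, U, g in zip(Ws, Us, gs):
        h = prox_reweighted_l1_l1(W @ h + U @ x_t, lam1 * g / c, lam2 * g / c, hbar)
    return h
```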
This paper proposes a novel method to solve the sequential signal reconstruction problem. The method is based on deep unfolding and incorporates a reweighting mechanism. Additionally, the authors derive a generalization error bound and show how their over-parameterized reweighted-RNN ensures good generalization. Lastly, the experiments on the task of video-sequence reconstruction suggest the superior performance of the proposed method.
SP:4605e601a717bc05833778d0916a393ffdf8c331
Gaussian MRF Covariance Modeling for Efficient Black-Box Adversarial Attacks
1 INTRODUCTION . Most methods for adversarial attacks on deep learning models operate in the so-called white-box setting ( Goodfellow et al. , 2014 ) , where the model being attacked , and its gradients , are assumed to be fully known . Recently , however there has also been considerable attention given to the black-box setting as well , where the model is unknown and can only be queried by a user , and which much better captures the “ typical ” state by which an attack can interact with a model ( Chen et al. , 2017 ; Tu et al. , 2019 ; Ilyas et al. , 2019 ; Moon et al. , 2019 ) . And several past methods in this area have conclusively demonstrated that , given sufficient number of queries , it is possible to achieve similarly effective attacks in the black-box setting akin to the white-box setting . However , as has also been demonstrated by past work Ilyas et al . ( 2019 ; 2018 ) , the efficiency of these black-box attacks ( the number of queries need to find an adversarial example ) is fundamentally limited unless they can exploit the spatial correlation structure inherent in the model ’ s gradients . Yet , at the same time , most previous methods have used rather ad-hoc methods of modeling such correlation structure , such as using “ tiling ” bases and priors over time Ilyas et al . ( 2019 ) that require attack vectors be constant over large regions , or by other means such as using smoothly-varying perturbations Ilyas et al . ( 2018 ) to estimate these gradients . In this work , we present a new , more rigorous approach to model the correlation structure of the gradients within the black-box adversarial setting . In particular , we propose to model the gradient of the model loss function with respect to the input image using a Gaussian Markov Random Field ( GMRF ) . This approach offers a number of advantages over prior methods : 1 ) it naturally captures the spatial correlation observed empirically in most deep learning models ; 2 ) using the model , we are able to compute exact posterior estimates over the true gradient given observed data , while also fitting the parameters of the GMRF itself via an expectation maximization ( EM ) approach ; and 3 ) the method provides a natural alternative to uniformly sampling perturbations , based upon the eigenvectors of the covariance matrix . Although representing the joint covariance over the entire input image may seem intractable for large-scale images , we can efficiently compute necessary terms for very general forms of grid-based GMRFs using the Fast Fourier Transform ( FFT ) . We evaluate our approach by attempting to find adversarial examples , over multiple different data sets and model architectures , using the GMRF combined with a very simple greedy zeroth order search technique ; the method effectively forms a “ black-box ” version of the fast gradient sign method ( FGSM ) , by constructing an estimate of the gradient at the input image itself , then taking a single signed step in this direction . Despite its simplicity , we show that owing to the correlation structure provided by the GMRF model , the approach outperforms more complex approaches such as the BANDITS-TD ( Ilyas et al. , 2019 ) or PARSIMONIOUS ( Moon et al. , 2019 ) methods ( the current state of the art in black-box attacks ) , especially for small query budgets . 2 RELATED WORK . 
Black-box adversarial attacks can be broadly categorized across a few different dimensions : optimization-based versus transfer-based attacks , and score-based versus decision-based attacks . In the optimization-based adversarial setting , the adversarial attack is formulated as the problem of maximizing some loss function ( e.g. , the accuracy of the classifier or some continuous variant ) using a zeroth order oracle , i.e. , by making queries to the classifier . And within this optimization setting , there is an important distinction between score-based attacks , which directly observe a traditional model loss , class probability , or other continuous output of the classifier on a given a example , versus decision-based attacks , which only observe the hard label predicted by the classifier . Decision based attacks have been studied by Brendel et al . ( 2017 ) ; Chen et al . ( 2019 ; 2017 ) , and ( not surprisingly ) typically require more queries to the classifier than the score-based setting . In the regime of score-based attacks , the first such iterative attack on a class of binary classifiers was first studied by Nelson et al . ( 2012 ) . A real-world application of black-box attacks to fool a PDF malware classifier was demonstrated by ( Xu et al. , 2016 ) , for which a genetic algorithm was used . Narodytska & Kasiviswanathan ( 2017 ) demonstrated the first black-box attack on deep neural networks . Subsequently black-box attacks based on zeroth order optimization schemes , using techniques such as KWSA ( Kiefer et al. , 1952 ) and RDSA ( Nesterov & Spokoiny , 2017 ) were developed in Chen et al . ( 2017 ) ; Ilyas et al . ( 2018 ) . Though Chen et al . ( 2017 ) generated successful attacks attaining high attack accuracy , the method was found to be extremely query hungry which was then remedied to an extent by ( Ilyas et al. , 2018 ) . In Ilyas et al . ( 2019 ) , the authors exploit correlation of gradients across iterations by setting a prior and use a piece wise constant perturbation , i.e. , tiling to develop a query efficient black-box method . Recently Moon et al . ( 2019 ) used a combinatorial optimization perspective to address the black-box adversarial attack problem . A concurrent line of work Papernot et al . ( 2017 ) has considered the transfer-based setting , rather than the optimization setting . These approaches create adversarial attacks by training a surrogate network with the aim to mimic the target model ’ s decisions , which are then obtained through blackbox queries . With the substitute model in place , the attack method then uses white-box attack strategies in order to transfer the attacks to the original target model . However , substitute network based attack strategies have been found to have a higher query complexity than those based on gradient estimation . The exploitation of the structure of the input data space so as to append a regularizer has been recently found to be effective for robust learning . In particular , in Lin et al . ( 2019 ) showed that by using Wasserstein-2 geometry to capture semantically meaningful neighborhoods in the space of images helps to learn discriminative models that are robust to in-class variations of the input data . Setting of this work In this paper , we are specifically focused on the optimization-based , scorebased setting , following most directly upon the work of ( Chen et al. , 2017 ; Ilyas et al. , 2018 ; 2019 ; Moon et al. , 2019 ) . 
However, our contribution is also largely orthogonal to the methods presented in these prior works. Specifically, we show that by modeling the covariance structure of the gradient using a Gaussian MRF, a very simple approach (which largely mirrors the simple black-box search from Ilyas et al. (2018)) achieves performance that is competitive with the best current methods, especially when using relatively few queries. We further emphasize that, while we focus on this simple search strategy here, nothing would prevent this GMRF approach from being applied to other black-box search strategies as well. 3 ADVERSARIAL ATTACKS. In the context of classifiers, adversarial examples are carefully crafted inputs to the classifier which have been perturbed by an additive perturbation so as to cause the classifier to misclassify the input. In particular, to ensure minimal visual distortion, the perturbation is subjected to a constraint on its magnitude, typically pre-specified as an $\ell_p$-norm, for some fixed $p$, being less than some budget $\epsilon_p$. Furthermore, in the context of classifiers, attacks can be further classified into targeted or untargeted attacks. For simplicity and brevity, in this paper we restrict our attention to untargeted attacks. Formally, define a classifier $C : \mathcal{X} \mapsto \mathcal{Y}$ with a corresponding classification loss function $L(x, y)$, where $x \in \mathcal{X}$ is the input to the classifier, $y \in \mathcal{Y}$, $\mathcal{X}$ is the set of inputs and $\mathcal{Y}$ is the set of labels. Technically speaking, the objective of generating a misclassified example can be posed as an optimization problem. In particular, the aim is to generate an adversarial example $x'$ for a given input $x$ which maximizes $L(x', y)$ but still remains $\epsilon_p$-close, in terms of the specified metric, to the original input. Thus, the generation of an adversarial attack can be formalized as a constrained optimization problem: $x' = \arg\max_{x' : \| x' - x \|_p \leq \epsilon_p} L(x', y)$. (1) We give a brief overview of adversarial attacks categorized in terms of access to information, namely white-box and black-box attacks. 3.1 WHITE-BOX ADVERSARIAL ATTACKS. White-box settings assume access to the entire classifier and the analytical form of the possibly non-convex classifier loss function. White-box methods can be further categorized into single-iteration and multiple-iteration methods. In the class of single-iteration white-box methods, the Fast Gradient Sign Method (FGSM) has been very successful; it computes the adversarial input as follows: $x' = x + \epsilon_p \, \mathrm{sign}(\nabla L(x, y))$. (2) FGSM is, however, limited to the generation of $\ell_\infty$-bounded adversarial inputs. Given a constrained optimization problem with access to a first-order oracle, the most effective method is projected gradient descent (PGD). This multi-iteration method generates the adversarial input $x_k$ by performing $k$ iterations with $x_0 = x$, where $k$ is specified a priori. In particular, at the $l$-th iteration, PGD generates the perturbed input $x_l$ as follows: $x_l = \Pi_{B_p(x, \epsilon)} ( x_{l-1} + \eta s_l )$ with $s_l = \Pi_{\partial B_p(0, 1)} ( \nabla_x L(x_{l-1}, y) )$, (3) where $\Pi_S$ denotes the projection onto the set $S$, $B_p(x', \epsilon')$ is the $\ell_p$ ball of radius $\epsilon'$ centered at $x'$, $\eta$ denotes the step size, and $\partial U$ is the boundary of a set $U$. By making $s_l$ the projection of the gradient $\nabla_x L(x_{l-1}, y)$ at $x_{l-1}$ onto the unit $\ell_p$ ball, it is ensured that $s_l$ is the unit $\ell_p$-norm vector that has the largest inner product with $\nabla_x L(x_{l-1}, y)$.
When $p = 2$, the projection corresponds to the normalized gradient, while for $p = \infty$, the projection corresponds to the sign of the gradient. Moreover, due to the projection at each iteration, the adversarial input generated at every iteration conforms to the specified constraint. However, in most real-world deployments it is impractical to assume complete access to the classifier and the analytic form of the corresponding loss function, which makes black-box settings more realistic. 3.2 BLACK-BOX ADVERSARIAL ATTACKS. In a typical black-box setting, the adversary only has access to a zeroth-order oracle, which, when queried with an input $(x, y)$, yields the value of the loss function $L(x, y)$. In spite of the information constraints and typically high-dimensional inputs, black-box attacks have been shown to be quite effective (Ilyas et al., 2019; 2018; Moon et al., 2019). The main building block of black-box methods is finite-difference schemes for estimating gradients. Two of the most widely used finite-difference schemes are the Kiefer-Wolfowitz Stochastic Approximation (KWSA) (Kiefer et al., 1952) and Random Directions Stochastic Approximation (RDSA) (Nesterov & Spokoiny, 2017). KWSA operates as follows: $\hat{\nabla}_x L(x, y) = \sum_{k=1}^{d} e_k \frac{L(x + \delta e_k, y) - L(x, y)}{\delta} \approx \sum_{k=1}^{d} e_k \, e_k^\top \nabla_x L(x, y)$, (4) where $e_1, \dots, e_d$ are canonical basis vectors. The estimator can be further extended to higher-order finite-difference operators, but in the face of possibly non-smooth loss functions these do not improve the accuracy of the estimator and come at the cost of additional queries. Hence, first- or second-order finite-difference operators have proven to be extremely effective. Though KWSA yields reasonably accurate gradient estimates, it is prohibitively query hungry. For instance, for the Inception-v3 classifier on ImageNet, every gradient estimate with KWSA would require 299 × 299 × 3 queries. RDSA provides a better alternative, which operates as follows: $\hat{\nabla}_x L(x, y) = \frac{1}{m} \sum_{k=1}^{m} z_k \frac{L(x + \delta z_k, y) - L(x, y)}{\delta}$, (5) where the $z_k$'s are usually drawn from a normal distribution. While RDSA is more query efficient than KWSA, it still needs many queries to obtain a reasonably accurate estimate, and that number typically scales with dimension. The step size $\delta > 0$ in RDSA and KWSA is a key parameter of choice; a higher $\delta$ could lead to extremely biased estimates, while a lower $\delta$ can lead to an unstable estimator. In light of the two aforementioned gradient estimation schemes, the PGD attack (cf. Equation 3) can be suitably modified to suit black-box attacks as follows: $x_l = \Pi_{B_p(x, \epsilon)} ( x_{l-1} + \eta \hat{s}_l )$ with $\hat{s}_l = \Pi_{\partial B_p(0, 1)} ( \hat{\nabla}_x L(x_{l-1}, y) )$. (6) However, owing to the biased gradient estimates, a PGD-based black-box attack, though successful, turns out to be query hungry. In particular, in order to ensure sufficient increase of the objective at each iteration, the query complexity scales with dimension and hence is prohibitively large. Nevertheless, most if not all successful black-box adversarial attacks tend to be multi-iteration methods. In the sequel, we develop a query-efficient single-step black-box adversarial attack.
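As a concrete illustration of Eq. (5) and of the single-step idea mentioned above, the sketch below estimates the gradient with RDSA and then takes one signed ℓ∞ step, i.e., a black-box analogue of FGSM. It assumes only a zeroth-order oracle `loss_fn(x, y)` and pixel values in [0, 1]; this is a generic baseline, not the GMRF-based attack proposed in this paper.

```python
import numpy as np

def rdsa_gradient(loss_fn, x, y, m=50, delta=0.01, rng=None):
    """RDSA estimate of grad_x L(x, y) from Eq. (5), using m random directions."""
    rng = np.random.default_rng() if rng is None else rng
    base = loss_fn(x, y)                       # one query for the reference value
    g = np.zeros_like(x)
    for _ in range(m):                         # m additional queries
        z = rng.standard_normal(x.shape)
        g += z * (loss_fn(x + delta * z, y) - base) / delta
    return g / m

def single_step_sign_attack(loss_fn, x, y, eps=8 / 255, m=200, delta=0.01):
    """Black-box FGSM analogue: estimate the gradient, take one signed step of size eps."""
    g_hat = rdsa_gradient(loss_fn, x, y, m=m, delta=delta)
    x_adv = np.clip(x + eps * np.sign(g_hat), 0.0, 1.0)   # keep pixels in [0, 1]
    return x_adv
```

The total query budget of this baseline is simply m + 1, which is what makes the quality of the gradient estimate, and hence any exploitable correlation structure, so important.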
In this paper, the authors propose a method for black-box adversarial image generation. The idea is to learn a parameterization of a precision matrix so that gradients of a network's loss are assumed to be drawn from the corresponding Gaussian. The parameters of this model are fit efficiently by exploiting the spectral structure that their particular parameterization of the precision matrix admits. Gradient estimation is then viewed as a Gaussian conditioning problem given observations (see the last equation on page 5).
SP:896e4cb1fcd0dbbb9ecaa510dd5052721d46c68f
This paper deals with the problem of finding adversarial examples when only the output of a model can be evaluated, but not its gradient. The key idea of the paper is to build a Gaussian MRF (a Gaussian with a sparse inverse covariance matrix with a special band structure) that maintains a model of the gradients for predicting search directions. The approach is sensible and uses the FFT trick for diagonalizing covariance matrices with circulant structure.
SP:896e4cb1fcd0dbbb9ecaa510dd5052721d46c68f
DeepPCM: Predicting Protein-Ligand Binding using Unsupervised Learned Representations
1 INTRODUCTION . A main goal of cheminformatics in the area of drug discovery is to model the interaction of small molecules with proteins in-silico . The ability to accurately predict the binding affinity of a ligand towards a biological target without the need to conduct expensive in-vitro experiments has the potential to accelerate the drug development process by enabling early prioritization of promising drug candidates ( Cortés-Ciriano et al. , 2015 ) . A common approach is to train a machine learning algorithm to predict the binding affinity of ligands towards a certain biological target using a training set of compounds that have been experimentally measured on this target . This modality is commonly referred to as a quantitative structure-activity-relationship ( QSAR ) model ( van Westen et al. , 2011 ) . QSAR models can be broadly classified into two types : single-task QSAR models and multi-task QSAR models ( Figure 1 ) . In single-task QSAR modeling , a model is trained separately for each protein to predict a binary or continuous outcome ( binding vs not-binding or the binding affinity ) given a compound input . The machine learning model used could be anything from logistic regression to deep neural networks ( Lenselink et al. , 2017 ; Cherkasov et al. , 2014 ) .In multi-task modeling , a single model is trained to predict binding across multiple proteins simultaneously , allowing the model to take advantage of the correlations in binding activity between compounds on different targets ( Caruana , 1997 ; Yuan et al. , 2016 ) . This is done , for example , by using a neural network with multiple output nodes where each output node corresponds to a different protein . Thus , multiple outputs are predicted given a compound input ( Simões et al. , 2018 ; Dahl et al. , 2014 ) . While these methods have been employed on various protein targets , there are methodological concerns to their use ( Lima et al. , 2016 ; Mitchell , 2014 ) . Both single-task and multi-task models must be retrained from scratch if one wishes to incorporate binding data for a new protein , and both can not be used at all to make predictions on new protein targets for which experimental data is absent ( van Westen et al. , 2011 ) . An attractive solution to this problem is to also include protein information in a so called proteochemometric model ( Figure 1 ) . The additional protein information enables a PCM model to directly utilize similarities between proteins for bioactivity modeling ( Cortés-Ciriano et al. , 2015 ; van Westen et al. , 2011 ) . This leads to many potential benefits over classical QSAR single-task and multi-task modeling . First , as proteins are explicitly represented by a defined featurization , PCM models can be used to make activity predictions on proteins without pre-existing bioactivity data , which is impossible with single-task or multi-task modeling . They can also be used to model binding on proteins for which experimental data may be too limited to train an effective single-task model ( van Westen et al. , 2011 ) . Second , with an expressive protein descriptor , a model could leverage similarities and differences between proteins directly to model their binding behaviors , rather than merely using correlations found among the compounds they bind to , as in multi-task modeling , or ignoring protein relationships altogether as in single-task modeling ( Cortés-Ciriano et al. , 2015 ; van Westen et al. , 2011 ) . 
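To make the difference in problem formulation concrete, the toy NumPy sketch below contrasts the multi-task QSAR layout (one output column per protein) with the PCM layout (protein descriptors concatenated to compound descriptors, one row per compound-protein pair). All array sizes and values here are placeholders, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
compounds = rng.standard_normal((100, 512))   # hypothetical compound descriptors
proteins = rng.standard_normal((20, 256))     # hypothetical protein descriptors

# Multi-task QSAR: one row per compound, one output per protein
# (missing measurements would be masked in the loss).
Y_multitask = rng.standard_normal((100, 20))

# PCM: one row per (compound, protein) pair; protein identity enters as features,
# so new proteins only require a new descriptor, not a new output head.
pairs = [(i, j) for i in range(100) for j in range(20)]
X_pcm = np.array([np.concatenate([compounds[i], proteins[j]]) for i, j in pairs])
y_pcm = np.array([Y_multitask[i, j] for i, j in pairs])
print(X_pcm.shape)   # (2000, 768): compound descriptor + protein descriptor
```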
Consequently , PCM models have found success on a variety of protein targets , and using many different machine learning methods , including random forests and SVMs ( Ballester & Mitchell , 2010 ; Weill et al. , 2011 ; van Westen et al. , 2013 ; Shiraishi et al. , 2013 ; Cheng et al. , 2012 ) . Recent advances in the field of Deep Learning have also led to its use in QSAR single and multitask modeling as well as PCM modeling ( Lenselink et al. , 2017 ; Menden et al. , 2013 ; LeCun et al. , 2015 ; Schmidhuber , 2015 ) . Lenselink et al . ( 2017 ) compared deep learning methods against other machine learning methods for PCM , single-task QSAR , and multi-task QSAR models on a benchmark dataset , and the authors found that PCM models using deep neural networks outperform other machine learning methods as well as single-task and multi-task QSAR models . All of the aforementioned works make features for both the small molecules and the proteins based on hand-crafted feature extraction protocols . Small molecules are often represented by counting and aggregating smaller substructures , while proteins are often represented by aggregation of computed physio-chemical features of their amino-acids or by encoding the differences in aligned sequences . In many application domains of Deep Learning , recent research has shown that these methods generally work better when the input data representation is lower-level and unabstracted , allowing the model to learn hierarchical features directly rather than relying on features which are hand-crafted by humans ( LeCun et al. , 2015 ; Schmidhuber , 2015 ) . For example , this is the case in computer vision , where deep learning on pixel value features has been the state of the art for several years ( Schmidhuber , 2015 ) . In cases where the input space is too high-dimensional , and especially if there is not enough labeled data to train a model end-to-end , unsupervised representation learning is used to generate lower-dimensional embeddings , again relying on machine learning , rather than human engineering , for feature extraction . This is the case in natural language processing , as well as in video analysis ( Srivastava et al. , 2015 ; Erhan et al. , 2010 ; Mikolov et al. , 2013 ) . In this work , we follow this reasoning and utilize such unsupervised-learned embeddings to represent both the ligand and protein spaces for proteochemometric modelling . To the best of our knowl- edge , we combine , for the first time , compound and protein representations both generated by deep learning models that were pretrained on an unsupervised learning task , as inputs to a PCM model . Moreover , we believe that this is the first work which combines unsupervised-learned embeddings of biological and chemical entities simultaneously in order to solve a downstream task . 1.1 COMPOUND DESCRIPTORS . The following section describes the currently used handcrafted ligand descriptors , evaluates their properties , and introduces the ligand descriptors that we use for our model — known as CDDD descriptors — which are generated via unsupervised machine learning . Handcrafted Compound Descriptors The state-of-the-art compound descriptors that are used for a vast majority of PCM and other chemoinformatics tasks are different varieties of circular fingerprints . 
In binary keyed circular fingerprints , each bit refers to a specific substructure 's presence or absence in the molecule , while in counts format the bits refer to the number of occurrences of the substructure ( van Westen et al. , 2011 ; Glen et al. , 2006 ; Rogers & Hahn , 2010 ) . As the number of potential substructures is vast ( ∼232 ) , the resulting sparse set of bits is usually hashed and folded to a much smaller size ( ∼103 ) at the expense of hash and bit collisions ( Rogers & Hahn , 2010 ) . These structure-based descriptors can be augmented with physicochemical descriptors , such as DRAGON or PaDEL , along with other chemical descriptors like atom identity/type , MACCs keys , and topological indices ( Mauri et al. , 2006 ; Yap , 2011 ) . Issues Circular fingerprints contain information about only the presence or absence of certain substructures in the compound – they thus fail to capture the shape or arrangement of those substructures within the compound . Furthermore , fingerprints rely on a hashing protocol to compress the millions of different substructures that are recorded into a smaller vector – as a result , hash collisions can mean that completely different substructures can correspond to the same fingerprint bit . This also means that one can not determine which exact substructures are responsible for the model prediction . CDDD – Ligand Descriptor To avoid these issues , we use the CDDD ( Continuous and Data Driven Molecular Descriptors ) , a model for the generation of lower-dimensional representation vectors of molecules developed by Winter et al . ( 2019 ) . This model uses a recurrent autoencoder trained on the task of translating non-canonical SMILES string representations of compounds into their canonical form . After unsupervised learning on approximately 72 million compound SMILES , a given compound is represented by the 512-length bottleneck of the translation model , and thus is encoded into a 512-length vector . These embeddings have been shown to be effective on QSAR prediction and virtual screening tasks ( Winter et al. , 2019 ) . CDDD descriptors offer a unique , compact , and continuous vector representation for each compound , as opposed to fingerprints , which are non-unique , discrete , and must be hashed to be made compact . Molecules with the same substructures but differently arranged will correspond to different vectors . Additionally , these unsupervised-learned descriptors have demonstrated competitive or superior results compared to molecular fingerprints on a variety of other tasks , indicating their ability to effectively represent compound properties and behaviors ( Winter et al. , 2019 ) . A diagram can be found in Appendix section D . 1.2 PROTEIN DESCRIPTORS . Next , we describe the various methodologies used for handcrafted protein descriptors . There are many different examples of highly specialized protein feature-extraction protocols used for PCM models that operate over a narrow range of targets . We describe the broad categories , examine their drawbacks and introduce the unsupervised protein descriptor which we use , known as UniRep . Handcrafted Protein Descriptors Most commonly used protein descriptors can be described as either amino-acid-based , or sequence-based . Amino-acid-based descriptors are computed using some combination of physicochemical and structural properties of individual amino acids . 
For example , the commonly used Z-scale descriptors and their counterpart ProtFP descriptors are constructed by computing various amino acid physicochemical properties ( such as hydrophobicity or polarity ) , taking a PCA over these properties , and then representing each amino acid by its first few principal components ( Sandberg et al. , 1998 ; van Westen et al. , 2013 ) . These descriptors can be combined with structural information descriptors , such as normalized van der Waals volume , charge , and the secondary structure of the protein at that amino acid residue ( Lapins et al. , 2013 ) . The whole protein is then represented by aggregating the amino-acid descriptors over the whole sequence . Sequence-based descriptors are constructed by taking aligned sequences or regions and using encodings to represent the amino acid identity at a given location in the aligned region . For example , in Nabu et al . ( 2015 ) , the authors build a model that predicts the bioactivity on mutated vs wild type penicillin binding proteins based on the positions of the mutations in the sequence . This was done by aligning the protein sequences and uniquely representing the positions a mutation may have occurred at with a one-hot-encoding . Thus , from 75 potential mutation sites , a 75-length vector is returned for each protein . Other variations of this technique use motifs or encodings of multiple varying segments ( Lapinsh et al. , 2005 ) . Additionally , on proteins for which 3D-information is available , this information can be incorporated into the protein descriptor . Methods vary – in general , 3D-models , for example crystal structures or water-field maps , are first aligned . Subsequently , descriptors are built based on amino acid identity at specific positions at the binding pocket or taking a PCA over the aligned field maps ( van Westen et al. , 2013 ; Lapinsh et al. , 2005 ; Subramanian et al. , 2016 ; Kruger & Overington , 2012 ) . Issues Amino acid-based descriptors require crude aggregation or binning methods , as they must convert variable length sequences of amino acids into a constant-length vector . For example , a common strategy is to divide the sequence into equal-length segments , then average over the descriptors for each segment ( Lenselink et al. , 2017 ; van Westen et al. , 2013 ) . This leads to the undesirable property where a single amino acid insertion early in the sequence would shift all subsequent segments , thus changing every descriptor bit despite a small change to the protein itself . Additionally , motifs and functional domains have variable numbers of amino acids which are not necessarily contiguous , so averaging over fixed-length contiguous segments can fail to capture relevant features consistently . Sequence and 3D-structure based descriptors must be aligned , which restricts the usable bioactivity data to only a very small fraction of closely related proteins for which an alignment is meaningful . These descriptors also can not be applied to new regions of the protein space , where bioactivity measurements of similar proteins are unavailable . Essentially , these methods build descriptors that are explicitly based on the differences between a few proteins , which greatly limits the scope of the problems that can be approached with such descriptors , as well as the amount of data that can be leveraged to train models . These data availability issues are exacerbated when using 3D-descriptors , since 3D structures are only available for a subset of proteins . 
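The segment-averaging scheme criticized above can be sketched in a few lines. In the sketch below the per-residue descriptor table is a placeholder (not the published Z-scales), unknown residues fall back to zeros, and the sequence is assumed to be at least as long as the number of segments.

```python
import numpy as np

# Hypothetical 3-dimensional per-residue descriptors (placeholder values only).
AA_DESC = {aa: np.array(vals) for aa, vals in {
    "A": [0.1, -0.2, 0.3],
    "C": [0.5, 0.0, -0.1],
    "D": [-0.8, 0.4, 0.2],
    # ... one entry per amino acid ...
}.items()}

def segment_average_descriptor(sequence, n_segments=10):
    """Fixed-length protein descriptor: split the sequence into n_segments equal
    contiguous chunks and average the per-residue descriptors within each chunk."""
    default = np.zeros(3)
    per_residue = np.stack([AA_DESC.get(aa, default) for aa in sequence])
    chunks = np.array_split(per_residue, n_segments)          # along the sequence axis
    return np.concatenate([c.mean(axis=0) for c in chunks])   # length n_segments * 3

vec = segment_average_descriptor("ACDACDACDACDACDACDAC", n_segments=5)  # 15-dim vector
```

The brittleness discussed in the text is visible here: a single insertion near the start of the sequence shifts every downstream chunk boundary and therefore perturbs every segment average.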
UniRep – Protein Descriptor We use UniRep developed by Alley et al . ( 2019 ) , which uses a multiplicative LSTM architecture on approximately 24 million protein amino acid sequences taken from UniRef50 ( Krause et al. , 2016 ) . The UniRep model is trained on the next-character-prediction task , where the LSTM predicts the next amino acid in the protein sequence given the previous amino acids . To generate fixed-length embeddings , the hidden states of the model that are generated during the forward pass are averaged over the sequence dimension . For implementation details , refer to Alley et al . ( 2019 ) . The pre-trained model can be used out-of-the-box , taking as input amino acid sequences and outputting embeddings of length 64 , 256 , or 1900 depending on the architecture used . For the PCM model which we propose here , we found that the 256-length embeddings performed best , and so we will use UniRep to refer to the 256-length UniRep descriptor in this paper . A diagram can be found in Appendix section E UniRep descriptors offer many advantages over their handcrafted counterparts . The descriptors are alignment-free sequence-based representations , which therefore allow the utilization of protein bioactivity assay data across families and species instead of just highly similar proteins , greatly increasing the available training data . Moreover , these descriptors do not require 3D-structure information , which is available for only a small subset of proteins . Additionally , there is no need for binning or averaging across arbitrary length contiguous chunks of sequence .
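Putting the two descriptors together, a minimal sketch of a PCM model operating on a concatenated 512-length CDDD compound embedding and a 256-length UniRep protein embedding is shown below. The PyTorch framing and hidden-layer sizes are assumptions for illustration, not the authors' exact architecture.

```python
# Sketch of a proteochemometric model over concatenated unsupervised embeddings:
# input = [CDDD compound embedding (512) ; UniRep protein embedding (256)].
# Hidden sizes are hypothetical; the paper's exact architecture may differ.
import torch
import torch.nn as nn

class PCMNet(nn.Module):
    def __init__(self, cddd_dim=512, unirep_dim=256, hidden=512):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(cddd_dim + unirep_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),   # binding affinity (or a logit for binding vs. not binding)
        )

    def forward(self, compound_emb, protein_emb):
        return self.mlp(torch.cat([compound_emb, protein_emb], dim=-1)).squeeze(-1)

model = PCMNet()
compound = torch.randn(8, 512)   # batch of precomputed CDDD embeddings
protein = torch.randn(8, 256)    # batch of precomputed UniRep embeddings
print(model(compound, protein).shape)   # torch.Size([8])
```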
The authors present a model with state-of-the-art performance for predicting protein-ligand affinity and provide a thorough set of benchmarks to illustrate the superiority of combining learned low-dimensional embedding representations of both ligands and proteins. The authors then show that these learned representations are more powerful than handcrafted features such as circular fingerprints, etc. when combined into a model that jointly takes as input both the ligand and protein.
SP:62e72e469e4e6d1f0c6eac1074fd35439086e08a
DeepPCM: Predicting Protein-Ligand Binding using Unsupervised Learned Representations
1 INTRODUCTION . A main goal of cheminformatics in the area of drug discovery is to model the interaction of small molecules with proteins in-silico . The ability to accurately predict the binding affinity of a ligand towards a biological target without the need to conduct expensive in-vitro experiments has the potential to accelerate the drug development process by enabling early prioritization of promising drug candidates ( Cortés-Ciriano et al. , 2015 ) . A common approach is to train a machine learning algorithm to predict the binding affinity of ligands towards a certain biological target using a training set of compounds that have been experimentally measured on this target . This modality is commonly referred to as a quantitative structure-activity-relationship ( QSAR ) model ( van Westen et al. , 2011 ) . QSAR models can be broadly classified into two types : single-task QSAR models and multi-task QSAR models ( Figure 1 ) . In single-task QSAR modeling , a model is trained separately for each protein to predict a binary or continuous outcome ( binding vs not-binding or the binding affinity ) given a compound input . The machine learning model used could be anything from logistic regression to deep neural networks ( Lenselink et al. , 2017 ; Cherkasov et al. , 2014 ) .In multi-task modeling , a single model is trained to predict binding across multiple proteins simultaneously , allowing the model to take advantage of the correlations in binding activity between compounds on different targets ( Caruana , 1997 ; Yuan et al. , 2016 ) . This is done , for example , by using a neural network with multiple output nodes where each output node corresponds to a different protein . Thus , multiple outputs are predicted given a compound input ( Simões et al. , 2018 ; Dahl et al. , 2014 ) . While these methods have been employed on various protein targets , there are methodological concerns to their use ( Lima et al. , 2016 ; Mitchell , 2014 ) . Both single-task and multi-task models must be retrained from scratch if one wishes to incorporate binding data for a new protein , and both can not be used at all to make predictions on new protein targets for which experimental data is absent ( van Westen et al. , 2011 ) . An attractive solution to this problem is to also include protein information in a so called proteochemometric model ( Figure 1 ) . The additional protein information enables a PCM model to directly utilize similarities between proteins for bioactivity modeling ( Cortés-Ciriano et al. , 2015 ; van Westen et al. , 2011 ) . This leads to many potential benefits over classical QSAR single-task and multi-task modeling . First , as proteins are explicitly represented by a defined featurization , PCM models can be used to make activity predictions on proteins without pre-existing bioactivity data , which is impossible with single-task or multi-task modeling . They can also be used to model binding on proteins for which experimental data may be too limited to train an effective single-task model ( van Westen et al. , 2011 ) . Second , with an expressive protein descriptor , a model could leverage similarities and differences between proteins directly to model their binding behaviors , rather than merely using correlations found among the compounds they bind to , as in multi-task modeling , or ignoring protein relationships altogether as in single-task modeling ( Cortés-Ciriano et al. , 2015 ; van Westen et al. , 2011 ) . 
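To make the distinction above concrete, here is a small sketch of single-task and multi-task QSAR heads over a compound descriptor; a PCM model would instead take the protein descriptor as an additional input. All dimensions here are hypothetical.

```python
# Sketch contrasting single-task and multi-task QSAR models.
# Single-task: activity for one fixed protein. Multi-task: one output node per protein.
# A PCM model (not shown here) would instead consume the protein descriptor as input.
import torch
import torch.nn as nn

compound_dim, n_proteins = 512, 20   # hypothetical sizes

single_task = nn.Sequential(nn.Linear(compound_dim, 256), nn.ReLU(), nn.Linear(256, 1))
multi_task  = nn.Sequential(nn.Linear(compound_dim, 256), nn.ReLU(), nn.Linear(256, n_proteins))

x = torch.randn(4, compound_dim)   # batch of compound descriptors
print(single_task(x).shape)        # torch.Size([4, 1])   one target protein
print(multi_task(x).shape)         # torch.Size([4, 20])  one output per protein
```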
Consequently , PCM models have found success on a variety of protein targets , and using many different machine learning methods , including random forests and SVMs ( Ballester & Mitchell , 2010 ; Weill et al. , 2011 ; van Westen et al. , 2013 ; Shiraishi et al. , 2013 ; Cheng et al. , 2012 ) . Recent advances in the field of Deep Learning have also led to its use in QSAR single and multitask modeling as well as PCM modeling ( Lenselink et al. , 2017 ; Menden et al. , 2013 ; LeCun et al. , 2015 ; Schmidhuber , 2015 ) . Lenselink et al . ( 2017 ) compared deep learning methods against other machine learning methods for PCM , single-task QSAR , and multi-task QSAR models on a benchmark dataset , and the authors found that PCM models using deep neural networks outperform other machine learning methods as well as single-task and multi-task QSAR models . All of the aforementioned works make features for both the small molecules and the proteins based on hand-crafted feature extraction protocols . Small molecules are often represented by counting and aggregating smaller substructures , while proteins are often represented by aggregation of computed physio-chemical features of their amino-acids or by encoding the differences in aligned sequences . In many application domains of Deep Learning , recent research has shown that these methods generally work better when the input data representation is lower-level and unabstracted , allowing the model to learn hierarchical features directly rather than relying on features which are hand-crafted by humans ( LeCun et al. , 2015 ; Schmidhuber , 2015 ) . For example , this is the case in computer vision , where deep learning on pixel value features has been the state of the art for several years ( Schmidhuber , 2015 ) . In cases where the input space is too high-dimensional , and especially if there is not enough labeled data to train a model end-to-end , unsupervised representation learning is used to generate lower-dimensional embeddings , again relying on machine learning , rather than human engineering , for feature extraction . This is the case in natural language processing , as well as in video analysis ( Srivastava et al. , 2015 ; Erhan et al. , 2010 ; Mikolov et al. , 2013 ) . In this work , we follow this reasoning and utilize such unsupervised-learned embeddings to represent both the ligand and protein spaces for proteochemometric modelling . To the best of our knowl- edge , we combine , for the first time , compound and protein representations both generated by deep learning models that were pretrained on an unsupervised learning task , as inputs to a PCM model . Moreover , we believe that this is the first work which combines unsupervised-learned embeddings of biological and chemical entities simultaneously in order to solve a downstream task . 1.1 COMPOUND DESCRIPTORS . The following section describes the currently used handcrafted ligand descriptors , evaluates their properties , and introduces the ligand descriptors that we use for our model — known as CDDD descriptors — which are generated via unsupervised machine learning . Handcrafted Compound Descriptors The state-of-the-art compound descriptors that are used for a vast majority of PCM and other chemoinformatics tasks are different varieties of circular fingerprints . 
This paper addresses the protein-ligand binding prediction problem in computational biology. It uses learned embeddings for the protein and the ligand, taken separately from two published papers, and feeds the two embeddings into another deep learning model that performs the final prediction. Tested on one dataset, the proposed method is shown to outperform the baseline methods.
SP:62e72e469e4e6d1f0c6eac1074fd35439086e08a
Removing input features via a generative model to explain their attributions to classifier's decisions
1 INTRODUCTION . Explaining a classifier ’ s outputs given a certain input is increasingly important , especially for lifecritical applications ( Doshi-Velez & Kim , 2017 ) . A popular means for visually explaining an image classifier ’ s decisions is an attribution map i.e . a heatmap that highlights the input pixels that are the evidence for and against the classification outputs ( Montavon et al. , 2018 ) . To construct an attribution map , many methods approximate the attribution value of an input region by the classification probability change when that region is absent i.e . removed from the image . That is , most perturbation-based attribution methods implement the absence of an input feature by replacing it with ( a ) mean pixels ; ( b ) random noise ; or ( c ) blurred versions of the original content . While removing an input feature to measure its attribution is a principle method in causal reasoning , the existing removal ( i.e . perturbation ) techniques often produce out-of-distribution images ( Fig . 1b , d ) i.e . a type of adversarial example , which ( 1 ) we found to produce heatmaps that are sensitive to hyperparameter settings ; and ( 2 ) questions the correctness of the heatmaps ( Adebayo et al. , 2018 ) . To combat these two issues , we propose to harness a state-of-the-art generative inpainting model ( hereafter , an inpainter ) to remove features from an input image and fill in with content that is plausible under the true data distribution . We test our approach on three representative attribution methods of Sliding-Patch ( SP ) ( Zeiler & Fergus , 2014 ) , LIME ( Ribeiro et al. , 2016 ) , and Meaningful-Perturbation ( MP ) ( Fong & Vedaldi , 2017 ) across two large-scale datasets of ImageNet ( Russakovsky et al. , 2015 ) and Places365 ( Zhou et al. , 2017 ) . For each dataset , we use a separate pair of pre-trained image classifiers and inpainters . Our main findings are : 1 1 . Blurring or graying out the object in a photo yields images that are up to 3 times more recognizable to classifiers and remain more similar to the original photo ( via MS-SSIM and LPIPS ) than the images whose objects were removed via inpainting ( Sec . 4.2 ) . 2 . Attribution methods with an inpainter produces ( 1 ) more plausible perturbation samples ; ( 2 ) attribution maps that perform on par or better2 than their original counterparts on three existing benchmarks : object localization , Insertion , and Deletion ( see Sec . 4.4 ) ; ( 3 ) explanations that are more robust to hyperparameter changes ( Sec . 4.3 ) . 1All our code will be available on github . 2Our positive results here are new and in contrast with a previous result by Chang et al . ( 2019 ) . 2 RELATED WORK . Attribution methods can be categorized into two main classes : ( 1 ) white-box and ( 2 ) black-box . White-box Given the access to the network architecture and parameters , attribution maps can be constructed analytically from ( a ) the gradients of the output w.r.t . the input image ( Simonyan et al. , 2013 ) , ( b ) the class activation map in fully-convolutional neural networks ( Zhou et al. , 2016 ) , ( c ) both the gradients and activations ( Selvaraju et al. , 2017 ) , or ( d ) the gradient times the input image ( Shrikumar et al. , 2017 ) . However , these heatmaps can be too noisy to be human-interpretable because the gradients in the pixel space are often local . Importantly , some gradient-based attribution maps can be unfaithful explanations e.g . acting like edge detectors ( Adebayo et al. , 2018 ) . 
To make a gradient-based heatmap more robust and smooth , a number of methods essentially average out the resultant heatmaps across a large set of perturbed inputs that are created via ( a ) adding random noise to the input image ( Smilkov et al. , 2017 ; Fong & Vedaldi , 2017 ) , ( b ) blurring or jittering the image ( Fong & Vedaldi , 2017 ) , or ( c ) linearly interpolating between the input and a reference “ baseline ” image ( Sundararajan et al. , 2017 ) . Black-box In addition to the white-box setting , perturbation techniques are even more important in approximating attributions under the black-box setting i.e . when we do not have access to the network parameters . Black-box methods often iteratively remove ( i.e . occlude or perturb ) an input region and take the average resultant classification probability change to be the attribution value for that region . While the idea is principle in causal reasoning , the physical interventions—taking an object out of a scene ( revealing the content behind it ) while keeping the rest of the scene intact—are impractical for most real-world applications . Feature removal Therefore , the absence of an input region is often implemented by replacing it with ( a ) mean pixels ( Zeiler & Fergus , 2014 ; Ribeiro et al. , 2016 ) ; ( b ) random noise ( Dabkowski & Gal , 2017 ; Lundberg & Lee , 2017 ) ; or ( c ) blurred versions of the original content ( Fong & Vedaldi , 2017 ) . However , these removal techniques often produce unrealistic , out-of-samples ( Fig . 1 ) , which raise huge concerns on the sensitivity and faithfulness of the explanations . Do explanations become more robust and faithful if input features are removed via a learned , natural image prior ? Here , we systematically study that question across three representative attribution methods : two black-box ( i.e . SP and LIME ) and one white-box ( i.e . MP ) . We chose these three methods because they represent diverse approaches and perturb different types of input features : pixels ( i.e . MP ) , superpixels ( i.e . LIME ) ; and square patches ( i.e . SP ) . Similar to us , Chang et al . ( 2019 ) and Uzunova et al . ( 2019 ) also harnessed image generative models to remove input features . However , their findings were ( 1 ) both only within the MP framework ; ( 2 ) either for grayscale , medical-image datasets ( Uzunova et al. , 2019 ) or based on unrealistic samples ( see a comparison in Sec . 4.1 ) . Furthermore , their results were not relevant to our question of whether integrating an inpainter helps attribution maps become more robust to hyperparameters . Note that Chang et al . ( 2019 ) found negative results for integrating the inpainter whereas our methods improved both the robustness of attribution maps ( Sec . 4.3 ) and their ability to localize objects ( Sec . 4.4 ) . Counterfactual explanations A task that is related but not the same as ours is to generate a textual explanation for why an image is predicted as class c instead of a some other class c′ ( Anne Hendricks et al. , 2018 ) . For visual explanations , Goyal et al . ( 2019 ) proposed to find a minimal input region such that when exchanged with another region in a reference image would change the classification for the original image into some target class . However , their counterfactual sample was generated by swapping patches between two images rather than by a generative model . 3 METHODS . 3.1 DATASETS AND NETWORKS . 
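For reference, a small sketch of the three common removal baselines listed above (mean pixels, random noise, and a blurred copy) applied to a square region is given below. The region size and blur strength are arbitrary illustrative choices.

```python
# Sketch of the three common "feature removal" baselines for a square region:
# (a) mean pixels, (b) random noise, (c) a blurred copy of the original content.
# Region location/size and blur sigma are arbitrary illustrative choices.
import numpy as np
from scipy.ndimage import gaussian_filter

def remove_region(img, y, x, size, mode="mean"):
    out = img.copy()
    patch = out[y:y + size, x:x + size]
    if mode == "mean":
        patch[...] = img.mean(axis=(0, 1))                   # image mean color
    elif mode == "noise":
        patch[...] = np.random.uniform(0, 1, patch.shape)    # random noise
    elif mode == "blur":
        patch[...] = gaussian_filter(img, sigma=(10, 10, 0))[y:y + size, x:x + size]
    return out

img = np.random.rand(224, 224, 3).astype(np.float32)         # stand-in for a real photo
perturbed = {m: remove_region(img, 80, 80, 64, m) for m in ("mean", "noise", "blur")}
print({m: float(np.abs(v - img).mean()) for m, v in perturbed.items()})
```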
Classifiers Our experiments were conducted separately with each of the two ResNet-50 image classifiers ( He et al. , 2016 ) that were pre-trained on the 1000-class ImageNet 2012 and Places365 , respectively . The two models were officially released by the PyTorch ( 2019 ) model zoo and by the authors ( CSAILVision , 2019 ) , respectively . Datasets We chose these two datasets because they are large-scale , natural-image sets and cover a wide range of images from object-centric ( i.e . ImageNet ) to scenery ( i.e . Places365 ) . While state-of-the-art image synthesis has advanced rapidly , unconditionally inpainting a large free-form mask in an arbitrary photo remains challenging ( Yu et al. , 2018a ) . Therefore , we ran our study on two subsets called ImageNet-S and Places365-S of the original validation sets of ImageNet and Places365 , respectively , after filtering out semantically complex images . That is , we filtered out images that a YOLO-v3 object detector ( Redmon & Farhadi , 2018 ) found more than one object . For ImageNet-S , we also filtered out images with more than one ImageNet bounding boxes . In total , ImageNet-S and Places365-S contains 15,082 and 13,864 images respectively ( see Figs . S5 & S6 for example images ) . Inpainter We used two TensorFlow DeepFill-v1 models pre-trained by Yu et al . ( 2018b ) for ImageNet and Places365 , respectively . DeepFill-v1 takes as input a color image and a binary mask , both at resolution 256× 256 , and outputs an inpainted image of the same size . 3.2 PROBLEM FORMULATION . Let s : RD×D×3 → R be an image classifier that maps a square , color image x of spatial dimension D ×D onto a softmax probability of a target class . An attribution map A ∈ [ −1 , 1 ] D×D associates each input pixel xi to a scalar Ai ∈ [ −1 , 1 ] which indicates how much xi contributes for or against the prediction score s ( x ) . We will describe below three different methods for generating attribution maps together with our three respective proposed variants which harness an inpainter . 3.3 SLIDING-PATCH . SP Zeiler & Fergus ( 2014 ) proposed to slide a gray , occlusion patch across the image and record the probability changes as attribution values in corresponding locations in the heatmap . That is , given a binary occlusion mask m ∈ { 0 , 1 } D×D ( here , 1 ’ s inside the patch region and 0 ’ s otherwise ) and a filler image f ∈ RD×D×3 , a perturbed image x̄ ∈ RD×D×3 ( see Fig . 1b ) is given by : x̄ = x ( 1−m ) + f m ( 1 ) where denotes the Hadamard product and f is a zero image i.e . a gray image3 before input pre-processing . For every pixel xi , one can generate a perturbation sample x̄i ( i.e . by setting the patch center at xi ) and compute the attribution value Ai = s ( x ) −s ( x̄i ) . However , sliding the patch densely across the 224×224 input image of ResNet-50 is computationally expensive . Therefore , we chose a 29×29 occlusion patch size with stride 3 , which yields a smaller heatmap A′ of size 66×66 . We bi-linearly upsampled A′ to the image size to create the full-resolution A . We implemented SP from scratch in PyTorch following a MATLAB implementation ( MathWorks , 2019 ) . SP-G Note that the stride , size , and color of the SP sliding patch are three hyperparameters that are often chosen heuristically , and varying them can change the final heatmaps radically . 
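A minimal sketch of the Sliding-Patch procedure of Eq. (1) with a gray filler is given below; `score` stands for any callable returning the target-class probability, and the coarse window indexing (rather than per-pixel patch centering) is a simplification for illustration.

```python
# Sketch of Sliding-Patch (SP) attribution, Eq. (1): x_bar = x*(1-m) + f*m, A_i = s(x) - s(x_bar).
# `score` is assumed to be any callable mapping an HxWx3 image to the target-class probability.
# The grid is indexed by window position; the paper centers the patch at each pixel and upsamples.
import numpy as np

def sliding_patch_attribution(x, score, patch=29, stride=3, filler=None):
    H, W, _ = x.shape
    f = np.zeros_like(x) if filler is None else filler        # gray image before preprocessing
    base = score(x)
    hm = np.zeros(((H - patch) // stride + 1, (W - patch) // stride + 1), dtype=np.float32)
    for i, y in enumerate(range(0, H - patch + 1, stride)):
        for j, xx in enumerate(range(0, W - patch + 1, stride)):
            x_bar = x.copy()
            x_bar[y:y + patch, xx:xx + patch] = f[y:y + patch, xx:xx + patch]
            hm[i, j] = base - score(x_bar)                     # probability drop = attribution
    return hm                                                  # upsample to HxW for the final map

# Toy usage with a stand-in scorer (a real setup would wrap a pretrained classifier).
img = np.random.rand(224, 224, 3).astype(np.float32)
heatmap = sliding_patch_attribution(img, score=lambda im: float(im.mean()))
print(heatmap.shape)   # (66, 66), matching the low-resolution map described in the text
```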
To ameliorate the sensitivity to hyperparameter choices , we propose a variant called SP-G by only replacing the gray filler image of SP with the output image of an inpainter ( described in Sec . 3.1 ) i.e . f = G ( m , x ) while keeping the rest of SP the same ( Fig . 1b vs. c ; top row ) . That is , at every location of the sliding window , SP-G queries the inpainter for content to fill in the window .
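The change from SP to SP-G is limited to the filler: at every window position the content comes from a generative inpainter instead of a gray image. Below is a sketch with a hypothetical `inpaint(image, mask)` callable standing in for DeepFill-v1.

```python
# SP-G sketch: identical to SP except the filler is produced by an inpainter, f = G(m, x).
# `inpaint(image, mask)` is a hypothetical wrapper around a model such as DeepFill-v1.
import numpy as np

def sp_g_perturbation(x, y, xx, patch, inpaint):
    m = np.zeros(x.shape[:2], dtype=np.float32)
    m[y:y + patch, xx:xx + patch] = 1.0                        # occlusion mask (1 inside the patch)
    f = inpaint(x, m)                                          # plausible content for the window
    return x * (1.0 - m[..., None]) + f * m[..., None]         # Eq. (1) with the inpainted filler

img = np.random.rand(224, 224, 3).astype(np.float32)
dummy_inpaint = lambda image, mask: np.full_like(image, image.mean())   # placeholder inpainter
x_bar = sp_g_perturbation(img, 80, 80, 29, dummy_inpaint)
print(x_bar.shape)
```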
The paper proposes a visualization technique for black-box image classifiers that feeds modified versions of the original input through the classifier, using an off-the-shelf (also black-box) image inpainting model (DeepFill-v1), in order to capture changes in classification performance. In particular, the substitution of input regions follows three published paradigms: Sliding Patch (SP), Local Interpretable Model-Agnostic Explanations (LIME), and Meaningful Perturbation (MP). Whereas the existing methods use gray images (SP, LIME) or blurred versions (MP) as the substitute content on different spatial supports (regular patches for SP, random-shaped superpixel regions for LIME, a learned continuous region for MP), the proposed approach instead inserts the output of the inpainter.
SP:6ef6e5580db4cfa041bd6a0063953dc52c29a2a5
The paper focuses on perturbation-based local explanation methods: methods that only need (roughly) black-box access to the model and generally estimate the importance score of a region, pixel, etc. by removing it. The intuition is that removing an important region will result in a large drop in the predicted class confidence. One main issue with such methods is that the removal itself can throw the image out of the data distribution and therefore cause a drop in confidence, not because of the region's importance, but because of the network's unpredictable behavior in such unseen parts of the input space. The work proposes a solution to this problem: instead of removal through blurring, graying out, etc., use inpainting, i.e. replace the removed region with content generated from the rest of the image. The idea has already been discussed in the literature, and the novelty of the work seems to be twofold: the authors introduce the method in a way that is not tailored to a specific perturbation-based method and could be combined with any given (or future) perturbation-based local explanation method (which the authors denote by calling it ${existing_method}-G), and they study robustness to hyper-parameter choice.
SP:6ef6e5580db4cfa041bd6a0063953dc52c29a2a5
Domain-Independent Dominance of Adaptive Methods
1 INTRODUCTION . Deep network architectures are becoming increasingly complex , often containing parameters that can be grouped according to multiple functionalities , such as gating , attention , convolution , and generation . Such parameter groups should arguably be treated differently during training , as their gradient statistics might be highly distinct . Adaptive gradient methods designate parameter-wise learning rates based on gradient histories , treating such parameters groups differently and , in principle , promise to be better suited for training complex neural network architectures . Nonetheless , advances in neural architectures have not been matched by progress in adaptive gradient descent algorithms . SGD is still prevalent , in spite of the development of seemingly more sophisticated adaptive alternatives , such as RMSProp ( Dauphin et al. , 2015 ) and Adam ( Kingma & Ba , 2015 ) . Such adaptive methods have been observed to yield poor generalization compared to SGD in classification tasks ( Wilson et al. , 2017 ) , and hence have been mostly adopted for training complex models ( Vaswani et al. , 2017 ; Arjovsky et al. , 2017 ) . For relatively simple architectures , such as ResNets ( He et al. , 2016a ) and DenseNets ( Huang et al. , 2017 ) , SGD is still the dominant choice . At a theoretical level , concerns have also emerged about the current crop of adaptive methods . Recently , Reddi et al . ( 2018 ) has identified cases , even in the stochastic convex setting , where Adam ( Kingma & Ba , 2015 ) fails to converge . Modifications to Adam that provide convergence guarantees have been formulated , but have shortcomings . AMSGrad ( Reddi et al. , 2018 ) requires non-increasing learning rates , while AdamNC ( Reddi et al. , 2018 ) and AdaBound ( Luo et al. , 2019 ) require that adaptivity be gradually eliminated during training . Moreover , while most of the recently proposed variants do not provide formal guarantees for non-convex problems , the few current convergence rate analyses in the literature ( Zaheer et al. , 2018 ; Chen et al. , 2019 ) do not match SGD ’ s . Section 3 fully details the convergence rates of the most popular Adam variants , along with their shortcomings . Our contribution is marked improvements to adaptive optimizers , from both theoretical and practical perspectives . At the theoretical level , we focus on convergence guarantees , deriving new algorithms : • Delayed Adam . Inspired by Zaheer et al . ( 2018 ) ’ s analysis of Adam , Section 4 proposes a simple modification for adaptive gradient methods which yields a provable convergence rate ofO ( 1/ √ T ) in the stochastic non-convex setting – the same as SGD . Our modification can be implemented by swapping two lines of code and preserves adaptivity without incurring extra memory costs . To illustrate these results , we present a non-convex problem where Adam fails to converge to a stationary point , while Delayed Adam – Adam with our proposed modification – provably converges with a rate of O ( 1/ √ T ) . • AvaGrad . Inspecting the convergence rate of Delayed Adam , we show that it would improve with an adaptive global learning rate , which self-regulates based on global statistics of the gradient second moments . Following this insight , Section 5 proposes a new adaptive method , AvaGrad , whose hyperparameters decouple learning rate and adaptability . Through extensive experiments , Section 6 demonstrates that AvaGrad is not merely a theoretical exercise . 
AvaGrad performs as well as both SGD and Adam in their respectively favored usage scenarios . Along this experimental journey , we happen to disprove some conventional wisdom , finding adaptive optimizers , including Adam , to be superior to SGD for training CNNs . The caveat is that , excepting AvaGrad , these methods are sensitive to hyperparameter values . AvaGrad is a uniquely attractive adaptive optimizer , yielding near best results over a wide range of hyperparameters . 2 PRELIMINARIES . 2.1 NOTATION . For vectors a = [ a1 , a2 , . . . ] , b = [ b1 , b2 , . . . ] ∈ Rd , we use the following notation : 1a for elementwise division ( 1a = [ 1 a1 , 1a2 , . . . ] ) , √ a for element-wise square root ( √ a = [ √ a1 , √ a2 , . . . ] ) , a+ b for element-wise addition ( a+ b = [ a1 + b1 , a2 + b2 , . . . ] ) , a b for element-wise multiplication ( a b = [ a1b1 , a2b2 , . . . ] ) . Moreover , ‖a‖ is used to denote the ` 2-norm : other norms will be specified whenever used ( e.g. , ‖a‖∞ ) . For subscripts and vector indexing , we adopt the following convention : the subscript t is used to denote an object related to the t-th iteration of an algorithm ( e.g. , wt ∈ Rd denotes the iterate at time step t ) ; the subscript i is used for indexing : wi ∈ R denotes the i-th coordinate of w ∈ Rd . When used together , t precedes i : wt , i ∈ R denotes the i-th coordinate of wt ∈ Rd . 2.2 STOCHASTIC NON-CONVEX OPTIMIZATION . In the stochastic non-convex setting , we are concerned with the optimization problem : min w∈Rd f ( w ) = Es∼D [ fs ( w ) ] ( 1 ) whereD is a probability distribution over a set S of “ data points ” . We also assume that f isM -smooth in w , as is typically done in non-convex optimization : ∀ w , w′ f ( w′ ) ≤ f ( w ) + 〈∇f ( w ) , w′ − w〉+ M 2 ‖w − w′‖2 ( 2 ) Methods for stochastic non-convex optimization are evaluated in terms of number of iterations or gradient evaluations required to achieve small loss gradients . This differs from the stochastic convex setting where convergence is measured w.r.t . suboptimality f ( w ) − minw∈Rd f ( w ) . We assume that the algorithm takes a sequence of data points S = ( s1 , . . . , sT ) from which it deterministically computes a sequence of parameter settingsw1 , . . . , wT together with a distributionP over { 1 , . . . , T } . We say an algorithm has a convergence rate of O ( g ( T ) ) if E S∼DT t∼P ( t|S ) [ ‖∇f ( wt ) ‖2 ] ≤ O ( g ( T ) ) where , as defined above , f ( w ) = Es∼D [ fs ( w ) ] . We also assume that the functions fs have bounded gradients : there exists some G∞ such that ‖∇fs ( w ) ‖∞ ≤ G∞ for all s ∈ S and w ∈ Rd . Throughout the paper , we also let G2 denote an upper bound on ‖∇fs ( w ) ‖ . 3 RELATED WORK . Here we present a brief overview of optimization methods commonly used for training neural networks , along with their convergence rate guarantees for stochastic smooth non-convex problems . We consider methods which , at each iteration t , receive or compute a gradient estimate : gt : = ∇fst ( wt ) , st ∼ D ( 3 ) and perform an update of the form : wt+1 = wt − αt · ηt mt ( 4 ) where αt ∈ R is the global learning rate , ηt ∈ Rd are the parameter-wise learning rates , and mt ∈ Rd is the update direction , typically defined as : mt = β1 , tmt−1 + ( 1− β1 , t ) gt and m0 = 0 . ( 5 ) Non-momentum methods such as SGD , AdaGrad , and RMSProp ( Dauphin et al. , 2015 ; Duchi et al. , 2011 ) have mt = gt ( i.e. , β1 , t = 0 ) , while momentum SGD and Adam ( Kingma & Ba , 2015 ) have β1 , t ∈ ( 0 , 1 ) . 
Note that while αt can always be absorbed into ηt , representing the update in this form will be convenient throughout the paper . SGD uses the same learning rate for all parameters , i.e. , ηt = ~1 . Although SGD is simple and offers no adaptation , it has a convergence rate of O ( 1/ √ T ) with either constant , increasing , or decreasing learning rates ( Ghadimi & Lan , 2013 ) , and is widely used when training deep networks , especially CNNs ( He et al. , 2016a ; Huang et al. , 2017 ) . At the heart of its convergence proof is the fact that Est [ αt · ηt gt ] = αt · ∇f ( wt ) . Popular adaptive methods such as RMSProp ( Dauphin et al. , 2015 ) , AdaGrad ( Duchi et al. , 2011 ) , and Adam ( Kingma & Ba , 2015 ) have ηt = 1√vt+ , where vt ∈ R d is given by : vt = β2 , tvt−1 + ( 1− β2 , t ) g2t and v0 = 0 . ( 6 ) As vt is an estimate of the second moments of the gradients , the optimizer designates smaller learning rates for parameters with larger uncertainty in their stochastic gradients . However , in this setting ηt and st are no longer independent , hence Est [ αt · ηt gt ] 6= αt · Est [ ηt ] ∇f ( wt ) . This “ bias ” can cause RMSProp and Adam to present convergence issues , even in the stochastic convex setting ( Reddi et al. , 2018 ) . Recently , Zaheer et al . ( 2018 ) showed that , with a constant learning rate , RMSProp and Adam have a convergence rate of O ( σ2 + 1/T ) , where σ2 = supw∈Rd Es∼D [ ‖∇fs ( w ) −∇f ( w ) ‖2 ] , hence their result does not generally guarantee convergence . Chen et al . ( 2019 ) showed that AdaGrad and AMSGrad enjoy a convergence rate of O ( log T/ √ T ) when a decaying learning rate is used . Note that both methods constrain ηt in some form , the former with β2 , t = 1− 1/t ( adaptability diminishes with t ) , and the latter explicitly enforces vt ≥ vj for all j < t ( ηt is point-wise non-increasing ) . In both cases , the method is less adaptive than Adam , and yet analyses so far have not delivered a convergence rate that matches SGD ’ s . 4 SGD-LIKE CONVERGENCE WITHOUT CONSTRAINED RATES . Algorithm 1 DELAYED ADAM Input : w1 ∈ Rd , αt , > 0 , β1 , t , β2 , t ∈ [ 0 , 1 ) 1 : Set m0 = 0 , v0 = 0 2 : for t = 1 to T do 3 : Draw st ∼ D 4 : Compute gt = ∇fst ( wt ) 5 : mt = β1 , tmt−1 + ( 1− β1 , t ) gt 6 : ηt = 1√ vt−1+ 7 : wt+1 = wt − αt · ηt mt 8 : vt = β2 , tvt−1 + ( 1− β2 , t ) g2t 9 : end for We first take a step back to note the following : to show that Adam might not converge in the stochastic convex setting , Reddi et al . ( 2018 ) provide a stochastic linear problem where Adam fails to converge w.r.t . suboptimality . Since non-convex optimization is evaluated w.r.t . norm of the gradients , a different instance is required to characterize Adam ’ s behavior in this setting . The following result shows that even for a quadratic problem , Adam indeed does not converge to a stationary point : Theorem 1 . For any ≥ 0 and constant β2 , t = β2 ∈ [ 0 , 1 ) , there is a stochastic convex optimization problem for which Adam does not converge to a stationary point . Proof . The full proof is given in Appendix A . The argument follows closely from Reddi et al . ( 2018 ) , where we explicitly present a stochastic optimization problem : min w∈ [ 0,1 ] f ( w ) : = Es∼D [ fs ( w ) ] fs ( w ) = { C w 2 2 , with probability p : = 1+δ C+1 −w , otherwise ( 7 ) We show that , for large enough C ( as a function of δ , , β2 ) , Adam will move towards w = 1 where ∇f ( 1 ) = δ , and that the constraint w ∈ [ 0 , 1 ] does not make w = 1 a stationary point . 
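Following Algorithm 1, a self-contained sketch of the Delayed Adam update is given below: the only difference from Adam is that the parameter-wise learning rate ηt is computed from vt−1, i.e. the second-moment estimate is updated after the parameter step (lines 6-8 of Algorithm 1). The toy quadratic and the relatively large ε are illustrative choices.

```python
# Sketch of Delayed Adam (Algorithm 1): eta_t is computed from v_{t-1}, so the
# parameter-wise learning rate is independent of the current sample's gradient.
import numpy as np

def delayed_adam(grad_fn, w0, steps=2000, alpha=1e-2, beta1=0.9, beta2=0.999, eps=1e-1):
    # eps is deliberately not tiny here: since v_0 = 0, the very first eta would be 1/eps.
    w = np.asarray(w0, dtype=np.float64).copy()
    m = np.zeros_like(w)
    v = np.zeros_like(w)
    for _ in range(steps):
        g = grad_fn(w)
        m = beta1 * m + (1 - beta1) * g
        eta = 1.0 / (np.sqrt(v) + eps)          # line 6 of Algorithm 1: uses v_{t-1}
        w = w - alpha * eta * m                 # line 7: parameter update first ...
        v = beta2 * v + (1 - beta2) * g ** 2    # line 8: ... second-moment update afterwards
    return w

# Toy usage on the deterministic quadratic f(w) = ||w||^2 / 2, so grad_fn(w) = w.
print(delayed_adam(lambda w: w, w0=[5.0, -3.0]))   # should move toward the minimizer at 0
```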
This result , like the one in Reddi et al . ( 2018 ) , relies on the fact that ηt and st are correlated : upon a draw of the rare sample C w 2 2 , the learning rate ηt decreases significantly and Adam takes a small step in the correct direction . On the other hand , a sequence of common samples increases ηt and Adam moves faster towards w = 1 . Instead of enforcing ηt to be point-wise non-increasing in t ( Reddi et al. , 2018 ) , which forces the optimizer to take small steps even for a long sequence of common samples , we propose to simply have ηt be independent of st. As an extra motivation for this approach , note that successful proof strategies ( Zaheer et al. , 2018 ) to analyzing adaptive methods include the following step : Est [ ηt gt ] = Est [ ( ηt−1 + ηt − ηt−1 ) gt ] = ηt−1 ∇f ( wt ) + Est [ ( ηt − ηt−1 ) gt ] ( 8 ) where bounding Est [ ( ηt − ηt−1 ) gt ] , seen as a form of bias , is a key part of recent convergence analyses . Replacing ηt by ηt−1 in the update equation of Adam removes this bias and can be implemented by simply swapping lines of code ( updating η after w ) , yielding a simple convergence analysis without hindering the adaptability of the method in any way . Algorithm 1 provides pseudocode when applying this modification , highlighted in red , to Adam , yielding Delayed Adam . The following Theorem shows that this modification is enough to guarantee a SGD-like convergence rate of O ( 1/ √ T ) in the stochastic non-convex setting for general adaptive gradient methods . Theorem 2 . Consider any optimization method which updates parameters as follows : wt+1 = wt − αt · ηt gt ( 9 ) where gt : = ∇fst ( wt ) , st ∼ D , and αt , ηt are independent of st . Assume that f ( w1 ) − f ( w ? ) ≤ D , f ( w ) = Es∼D [ fs ( w ) ] is M -smooth , and ‖∇fs ( w ) ‖∞ ≤ G∞ for all s ∈ S , w ∈ Rd . Moreover , let Z = ∑T t=1 αt mini ηt , i . For αt = γt √ 2D TMG2∞ , if p ( Z|st ) = p ( Z ) for all st ∈ S , then : E S∼DT t∼P ( t|S ) [ ‖∇f ( wt ) ‖2 ] ≤ √ MDG2∞ 2T · ES∼DT [ ∑T t=1 1 + γ 2 t ‖ηt‖ 2∑T t=1 γt mini ηt , i ] ( 10 ) where P assigns probabilities p ( t ) ∝ αt ·mini ηt , i . Proof . The full proof is given in Appendix B , along with analysis for the case with momentum β1 , t ∈ ( 0 , 1 ) in Appendix B.1 , and in particular β1 , t = β1/ √ t , which yields a similar rate . The convergence rate depends on ‖ηt‖ and mini ηt , i , which are random variables for Adam-like algorithms . However , if there are constants H and L such that 0 < L ≤ ηt , i ≤ H < ∞ for all i and t , then a rate of O ( 1/ √ T ) is guaranteed . This is the case for Delayed Adam , where 1/ ( G2 + ) ≤ ηt , i ≤ 1/ for all t and i. Theorem 2 also requires that αt and ηt are independent of st , which can be assured to hold by applying a “ delay ” to their respective computations , if necessary ( i.e. , replacing ηt by ηt−1 , as in Delayed Adam ) . Additionally , the assumption that p ( Z|st ) = p ( Z ) , meaning that a single sample should not affect the distribution of Z = ∑T t=1 αt mini ηt , i , is required since P is conditioned on the samples S ( unlike in standard analysis , where Z = ∑T t=1 αt and αt is deterministic ) , and is expected to hold as T →∞ . Practitioners typically use the last iterate wT or perform early-stopping : in this case , whether the assumption holds or not does not affect the behavior of the algorithm . 
Nonetheless , we also show in Appendix B.2 a similar result that does not require this assumption to hold , which also yields an O ( 1/ √ T ) convergence rate provided that the parameter-wise learning rates are bounded from above and below .
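To connect Theorem 1 with the delayed update, the sketch below instantiates the stochastic problem of Eq. (7) and runs the plain and delayed updates of Eqs. (4)-(6) side by side, without bias correction and with projection onto [0, 1] implemented by clipping. The constants C, δ and all hyperparameters are arbitrary; per Theorem 1, Adam's drift toward w = 1 is only guaranteed once C is large enough for the chosen β2 and ε.

```python
# Sketch of the counterexample of Eq. (7): f_s(w) = C*w^2/2 with probability p = (1+delta)/(C+1),
# and f_s(w) = -w otherwise, with w constrained to [0, 1]. Note grad f(1) = delta > 0, so w = 1
# is not a stationary point. Constants below are illustrative choices, not values from the paper.
import numpy as np

C, delta = 100.0, 0.1
p = (1.0 + delta) / (C + 1.0)

def stochastic_grad(w, rng):
    return C * w if rng.random() < p else -1.0

def run(delayed, steps=200000, alpha=1e-3, beta1=0.9, beta2=0.99, eps=1e-8, seed=0):
    rng = np.random.default_rng(seed)
    w, m, v = 0.5, 0.0, 0.0
    for _ in range(steps):
        g = stochastic_grad(w, rng)
        m = beta1 * m + (1 - beta1) * g
        if not delayed:
            v = beta2 * v + (1 - beta2) * g ** 2   # Adam: v updated before the step
        eta = 1.0 / (np.sqrt(v) + eps)
        w = float(np.clip(w - alpha * eta * m, 0.0, 1.0))   # projection onto [0, 1]
        if delayed:
            v = beta2 * v + (1 - beta2) * g ** 2   # Delayed Adam: v updated after the step
    return w

print("Adam:         w_T =", run(delayed=False))
print("Delayed Adam: w_T =", run(delayed=True))
```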
In this paper the authors develop variants of Adam, named Delayed Adam and AvaGrad, which correct for the correlation between the gradient and the adaptive terms that causes convergence issues. They provide proofs that these variants resolve the convergence issues of Adam with an O(1/√T) convergence rate. They also present a convex problem on which Adam fails to converge to a stationary point.
SP:99b801c2541124ead1b8ff8904e7aa82c422c37c
Domain-Independent Dominance of Adaptive Methods
1 INTRODUCTION . Deep network architectures are becoming increasingly complex , often containing parameters that can be grouped according to multiple functionalities , such as gating , attention , convolution , and generation . Such parameter groups should arguably be treated differently during training , as their gradient statistics might be highly distinct . Adaptive gradient methods designate parameter-wise learning rates based on gradient histories , treating such parameters groups differently and , in principle , promise to be better suited for training complex neural network architectures . Nonetheless , advances in neural architectures have not been matched by progress in adaptive gradient descent algorithms . SGD is still prevalent , in spite of the development of seemingly more sophisticated adaptive alternatives , such as RMSProp ( Dauphin et al. , 2015 ) and Adam ( Kingma & Ba , 2015 ) . Such adaptive methods have been observed to yield poor generalization compared to SGD in classification tasks ( Wilson et al. , 2017 ) , and hence have been mostly adopted for training complex models ( Vaswani et al. , 2017 ; Arjovsky et al. , 2017 ) . For relatively simple architectures , such as ResNets ( He et al. , 2016a ) and DenseNets ( Huang et al. , 2017 ) , SGD is still the dominant choice . At a theoretical level , concerns have also emerged about the current crop of adaptive methods . Recently , Reddi et al . ( 2018 ) has identified cases , even in the stochastic convex setting , where Adam ( Kingma & Ba , 2015 ) fails to converge . Modifications to Adam that provide convergence guarantees have been formulated , but have shortcomings . AMSGrad ( Reddi et al. , 2018 ) requires non-increasing learning rates , while AdamNC ( Reddi et al. , 2018 ) and AdaBound ( Luo et al. , 2019 ) require that adaptivity be gradually eliminated during training . Moreover , while most of the recently proposed variants do not provide formal guarantees for non-convex problems , the few current convergence rate analyses in the literature ( Zaheer et al. , 2018 ; Chen et al. , 2019 ) do not match SGD ’ s . Section 3 fully details the convergence rates of the most popular Adam variants , along with their shortcomings . Our contribution is marked improvements to adaptive optimizers , from both theoretical and practical perspectives . At the theoretical level , we focus on convergence guarantees , deriving new algorithms : • Delayed Adam . Inspired by Zaheer et al . ( 2018 ) ’ s analysis of Adam , Section 4 proposes a simple modification for adaptive gradient methods which yields a provable convergence rate ofO ( 1/ √ T ) in the stochastic non-convex setting – the same as SGD . Our modification can be implemented by swapping two lines of code and preserves adaptivity without incurring extra memory costs . To illustrate these results , we present a non-convex problem where Adam fails to converge to a stationary point , while Delayed Adam – Adam with our proposed modification – provably converges with a rate of O ( 1/ √ T ) . • AvaGrad . Inspecting the convergence rate of Delayed Adam , we show that it would improve with an adaptive global learning rate , which self-regulates based on global statistics of the gradient second moments . Following this insight , Section 5 proposes a new adaptive method , AvaGrad , whose hyperparameters decouple learning rate and adaptability . Through extensive experiments , Section 6 demonstrates that AvaGrad is not merely a theoretical exercise . 
AvaGrad performs as well as both SGD and Adam in their respectively favored usage scenarios . Along this experimental journey , we happen to disprove some conventional wisdom , finding adaptive optimizers , including Adam , to be superior to SGD for training CNNs . The caveat is that , excepting AvaGrad , these methods are sensitive to hyperparameter values . AvaGrad is a uniquely attractive adaptive optimizer , yielding near best results over a wide range of hyperparameters . 2 PRELIMINARIES . 2.1 NOTATION . For vectors a = [ a1 , a2 , . . . ] , b = [ b1 , b2 , . . . ] ∈ Rd , we use the following notation : 1a for elementwise division ( 1a = [ 1 a1 , 1a2 , . . . ] ) , √ a for element-wise square root ( √ a = [ √ a1 , √ a2 , . . . ] ) , a+ b for element-wise addition ( a+ b = [ a1 + b1 , a2 + b2 , . . . ] ) , a b for element-wise multiplication ( a b = [ a1b1 , a2b2 , . . . ] ) . Moreover , ‖a‖ is used to denote the ` 2-norm : other norms will be specified whenever used ( e.g. , ‖a‖∞ ) . For subscripts and vector indexing , we adopt the following convention : the subscript t is used to denote an object related to the t-th iteration of an algorithm ( e.g. , wt ∈ Rd denotes the iterate at time step t ) ; the subscript i is used for indexing : wi ∈ R denotes the i-th coordinate of w ∈ Rd . When used together , t precedes i : wt , i ∈ R denotes the i-th coordinate of wt ∈ Rd . 2.2 STOCHASTIC NON-CONVEX OPTIMIZATION . In the stochastic non-convex setting , we are concerned with the optimization problem : min w∈Rd f ( w ) = Es∼D [ fs ( w ) ] ( 1 ) whereD is a probability distribution over a set S of “ data points ” . We also assume that f isM -smooth in w , as is typically done in non-convex optimization : ∀ w , w′ f ( w′ ) ≤ f ( w ) + 〈∇f ( w ) , w′ − w〉+ M 2 ‖w − w′‖2 ( 2 ) Methods for stochastic non-convex optimization are evaluated in terms of number of iterations or gradient evaluations required to achieve small loss gradients . This differs from the stochastic convex setting where convergence is measured w.r.t . suboptimality f ( w ) − minw∈Rd f ( w ) . We assume that the algorithm takes a sequence of data points S = ( s1 , . . . , sT ) from which it deterministically computes a sequence of parameter settingsw1 , . . . , wT together with a distributionP over { 1 , . . . , T } . We say an algorithm has a convergence rate of O ( g ( T ) ) if E S∼DT t∼P ( t|S ) [ ‖∇f ( wt ) ‖2 ] ≤ O ( g ( T ) ) where , as defined above , f ( w ) = Es∼D [ fs ( w ) ] . We also assume that the functions fs have bounded gradients : there exists some G∞ such that ‖∇fs ( w ) ‖∞ ≤ G∞ for all s ∈ S and w ∈ Rd . Throughout the paper , we also let G2 denote an upper bound on ‖∇fs ( w ) ‖ . 3 RELATED WORK . Here we present a brief overview of optimization methods commonly used for training neural networks , along with their convergence rate guarantees for stochastic smooth non-convex problems . We consider methods which , at each iteration t , receive or compute a gradient estimate : gt : = ∇fst ( wt ) , st ∼ D ( 3 ) and perform an update of the form : wt+1 = wt − αt · ηt mt ( 4 ) where αt ∈ R is the global learning rate , ηt ∈ Rd are the parameter-wise learning rates , and mt ∈ Rd is the update direction , typically defined as : mt = β1 , tmt−1 + ( 1− β1 , t ) gt and m0 = 0 . ( 5 ) Non-momentum methods such as SGD , AdaGrad , and RMSProp ( Dauphin et al. , 2015 ; Duchi et al. , 2011 ) have mt = gt ( i.e. , β1 , t = 0 ) , while momentum SGD and Adam ( Kingma & Ba , 2015 ) have β1 , t ∈ ( 0 , 1 ) . 
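As a plain-NumPy sketch of the shared update skeleton in Eqs. (4)-(5) (illustrative only, not the authors' code): every method discussed here first forms a direction m_t and then applies parameter-wise learning rates eta_t; plugging in eta_t = 1 recovers SGD, while adaptive methods derive eta_t from second-moment statistics, as described next.

import numpy as np

def generic_update(w, g, m, alpha, beta1, eta):
    # Update direction m_t (Eq. 5); beta1 = 0 gives m_t = g_t (no momentum).
    m = beta1 * m + (1.0 - beta1) * g
    # Parameter update w_{t+1} = w_t - alpha_t * (eta_t o m_t)  (Eq. 4).
    w = w - alpha * eta * m
    return w, m

# SGD as a special case: eta_t is a vector of ones (same rate for every parameter).
w, m = np.zeros(3), np.zeros(3)
g = np.array([0.1, -0.2, 0.3])
w, m = generic_update(w, g, m, alpha=0.1, beta1=0.9, eta=np.ones(3))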
Note that while αt can always be absorbed into ηt , representing the update in this form will be convenient throughout the paper . SGD uses the same learning rate for all parameters , i.e. , ηt = ~1 . Although SGD is simple and offers no adaptation , it has a convergence rate of O ( 1/ √ T ) with either constant , increasing , or decreasing learning rates ( Ghadimi & Lan , 2013 ) , and is widely used when training deep networks , especially CNNs ( He et al. , 2016a ; Huang et al. , 2017 ) . At the heart of its convergence proof is the fact that Est [ αt · ηt gt ] = αt · ∇f ( wt ) . Popular adaptive methods such as RMSProp ( Dauphin et al. , 2015 ) , AdaGrad ( Duchi et al. , 2011 ) , and Adam ( Kingma & Ba , 2015 ) have ηt = 1√vt+ , where vt ∈ R d is given by : vt = β2 , tvt−1 + ( 1− β2 , t ) g2t and v0 = 0 . ( 6 ) As vt is an estimate of the second moments of the gradients , the optimizer designates smaller learning rates for parameters with larger uncertainty in their stochastic gradients . However , in this setting ηt and st are no longer independent , hence Est [ αt · ηt gt ] 6= αt · Est [ ηt ] ∇f ( wt ) . This “ bias ” can cause RMSProp and Adam to present convergence issues , even in the stochastic convex setting ( Reddi et al. , 2018 ) . Recently , Zaheer et al . ( 2018 ) showed that , with a constant learning rate , RMSProp and Adam have a convergence rate of O ( σ2 + 1/T ) , where σ2 = supw∈Rd Es∼D [ ‖∇fs ( w ) −∇f ( w ) ‖2 ] , hence their result does not generally guarantee convergence . Chen et al . ( 2019 ) showed that AdaGrad and AMSGrad enjoy a convergence rate of O ( log T/ √ T ) when a decaying learning rate is used . Note that both methods constrain ηt in some form , the former with β2 , t = 1− 1/t ( adaptability diminishes with t ) , and the latter explicitly enforces vt ≥ vj for all j < t ( ηt is point-wise non-increasing ) . In both cases , the method is less adaptive than Adam , and yet analyses so far have not delivered a convergence rate that matches SGD ’ s . 4 SGD-LIKE CONVERGENCE WITHOUT CONSTRAINED RATES . Algorithm 1 DELAYED ADAM Input : w1 ∈ Rd , αt , > 0 , β1 , t , β2 , t ∈ [ 0 , 1 ) 1 : Set m0 = 0 , v0 = 0 2 : for t = 1 to T do 3 : Draw st ∼ D 4 : Compute gt = ∇fst ( wt ) 5 : mt = β1 , tmt−1 + ( 1− β1 , t ) gt 6 : ηt = 1√ vt−1+ 7 : wt+1 = wt − αt · ηt mt 8 : vt = β2 , tvt−1 + ( 1− β2 , t ) g2t 9 : end for We first take a step back to note the following : to show that Adam might not converge in the stochastic convex setting , Reddi et al . ( 2018 ) provide a stochastic linear problem where Adam fails to converge w.r.t . suboptimality . Since non-convex optimization is evaluated w.r.t . norm of the gradients , a different instance is required to characterize Adam ’ s behavior in this setting . The following result shows that even for a quadratic problem , Adam indeed does not converge to a stationary point : Theorem 1 . For any ≥ 0 and constant β2 , t = β2 ∈ [ 0 , 1 ) , there is a stochastic convex optimization problem for which Adam does not converge to a stationary point . Proof . The full proof is given in Appendix A . The argument follows closely from Reddi et al . ( 2018 ) , where we explicitly present a stochastic optimization problem : min w∈ [ 0,1 ] f ( w ) : = Es∼D [ fs ( w ) ] fs ( w ) = { C w 2 2 , with probability p : = 1+δ C+1 −w , otherwise ( 7 ) We show that , for large enough C ( as a function of δ , , β2 ) , Adam will move towards w = 1 where ∇f ( 1 ) = δ , and that the constraint w ∈ [ 0 , 1 ] does not make w = 1 a stationary point . 
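To make the construction in Eq. (7) concrete, the sketch below simulates projected Adam on that problem. The constants C, delta, and the step size are illustrative choices, and bias correction is omitted to match the update in Eqs. (4)-(6); the final comment restates Theorem 1 rather than asserting the outcome for these particular constants.

import numpy as np

def adam_on_eq7(C=100.0, delta=0.01, T=200000, alpha=1e-3,
                beta1=0.9, beta2=0.99, eps=1e-8, seed=0):
    rng = np.random.default_rng(seed)
    p = (1.0 + delta) / (C + 1.0)          # probability of the rare quadratic sample
    w, m, v = 0.5, 0.0, 0.0
    for _ in range(T):
        # Stochastic gradient of f_s at w: C*w for the quadratic sample, -1 otherwise.
        g = C * w if rng.random() < p else -1.0
        m = beta1 * m + (1.0 - beta1) * g
        v = beta2 * v + (1.0 - beta2) * g * g
        w = w - alpha * m / (np.sqrt(v) + eps)
        w = min(max(w, 0.0), 1.0)          # project back onto the feasible set [0, 1]
    # Theorem 1: for large enough C, Adam drifts towards w = 1, where grad f(1) = delta > 0.
    return w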
This result , like the one in Reddi et al . ( 2018 ) , relies on the fact that ηt and st are correlated : upon a draw of the rare sample C w 2 2 , the learning rate ηt decreases significantly and Adam takes a small step in the correct direction . On the other hand , a sequence of common samples increases ηt and Adam moves faster towards w = 1 . Instead of enforcing ηt to be point-wise non-increasing in t ( Reddi et al. , 2018 ) , which forces the optimizer to take small steps even for a long sequence of common samples , we propose to simply have ηt be independent of st. As an extra motivation for this approach , note that successful proof strategies ( Zaheer et al. , 2018 ) to analyzing adaptive methods include the following step : Est [ ηt gt ] = Est [ ( ηt−1 + ηt − ηt−1 ) gt ] = ηt−1 ∇f ( wt ) + Est [ ( ηt − ηt−1 ) gt ] ( 8 ) where bounding Est [ ( ηt − ηt−1 ) gt ] , seen as a form of bias , is a key part of recent convergence analyses . Replacing ηt by ηt−1 in the update equation of Adam removes this bias and can be implemented by simply swapping lines of code ( updating η after w ) , yielding a simple convergence analysis without hindering the adaptability of the method in any way . Algorithm 1 provides pseudocode when applying this modification , highlighted in red , to Adam , yielding Delayed Adam . The following Theorem shows that this modification is enough to guarantee a SGD-like convergence rate of O ( 1/ √ T ) in the stochastic non-convex setting for general adaptive gradient methods . Theorem 2 . Consider any optimization method which updates parameters as follows : wt+1 = wt − αt · ηt gt ( 9 ) where gt : = ∇fst ( wt ) , st ∼ D , and αt , ηt are independent of st . Assume that f ( w1 ) − f ( w ? ) ≤ D , f ( w ) = Es∼D [ fs ( w ) ] is M -smooth , and ‖∇fs ( w ) ‖∞ ≤ G∞ for all s ∈ S , w ∈ Rd . Moreover , let Z = ∑T t=1 αt mini ηt , i . For αt = γt √ 2D TMG2∞ , if p ( Z|st ) = p ( Z ) for all st ∈ S , then : E S∼DT t∼P ( t|S ) [ ‖∇f ( wt ) ‖2 ] ≤ √ MDG2∞ 2T · ES∼DT [ ∑T t=1 1 + γ 2 t ‖ηt‖ 2∑T t=1 γt mini ηt , i ] ( 10 ) where P assigns probabilities p ( t ) ∝ αt ·mini ηt , i . Proof . The full proof is given in Appendix B , along with analysis for the case with momentum β1 , t ∈ ( 0 , 1 ) in Appendix B.1 , and in particular β1 , t = β1/ √ t , which yields a similar rate . The convergence rate depends on ‖ηt‖ and mini ηt , i , which are random variables for Adam-like algorithms . However , if there are constants H and L such that 0 < L ≤ ηt , i ≤ H < ∞ for all i and t , then a rate of O ( 1/ √ T ) is guaranteed . This is the case for Delayed Adam , where 1/ ( G2 + ) ≤ ηt , i ≤ 1/ for all t and i. Theorem 2 also requires that αt and ηt are independent of st , which can be assured to hold by applying a “ delay ” to their respective computations , if necessary ( i.e. , replacing ηt by ηt−1 , as in Delayed Adam ) . Additionally , the assumption that p ( Z|st ) = p ( Z ) , meaning that a single sample should not affect the distribution of Z = ∑T t=1 αt mini ηt , i , is required since P is conditioned on the samples S ( unlike in standard analysis , where Z = ∑T t=1 αt and αt is deterministic ) , and is expected to hold as T →∞ . Practitioners typically use the last iterate wT or perform early-stopping : in this case , whether the assumption holds or not does not affect the behavior of the algorithm . 
Nonetheless , we also show in Appendix B.2 a similar result that does not require this assumption to hold , which also yields an O ( 1/ √ T ) convergence rate provided that the parameter-wise learning rates are bounded from above and below .
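A minimal runnable sketch of Algorithm 1 (Delayed Adam), assuming a user-supplied stochastic gradient oracle grad_f; the only change relative to Adam as written in Eqs. (4)-(6) is that eta_t is computed from v_{t-1}, i.e., v is updated only after the parameter step, so eta_t does not depend on the current sample. Bias correction is omitted, matching the algorithm as stated.

import numpy as np

def delayed_adam(w, grad_f, T, alpha=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    m, v = np.zeros_like(w), np.zeros_like(w)
    for t in range(T):
        g = grad_f(w)                          # g_t = grad f_{s_t}(w_t) for a fresh sample s_t
        m = beta1 * m + (1.0 - beta1) * g      # line 5 of Algorithm 1
        eta = 1.0 / (np.sqrt(v) + eps)         # line 6: uses v_{t-1}, so eta_t is independent of s_t
        w = w - alpha * eta * m                # line 7
        v = beta2 * v + (1.0 - beta2) * g * g  # line 8: v is only updated after the step
    return w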
This paper proposes a new adaptive method called AvaGrad. The authors first show in Theorem 1 that Adam may not converge to a stationary point for a stochastic convex optimization problem, a result closely related to [1]. They then show in Theorem 2 that by simply making $\eta_t$ independent of the sample $s_t$, Adam converges just like SGD; the proof of Theorem 2 follows standard SGD techniques. Next, they propose AvaGrad, which is based on the idea of removing the effect of $\epsilon$.
SP:99b801c2541124ead1b8ff8904e7aa82c422c37c
Interpreting video features: a comparison of 3D convolutional networks and convolutional LSTM networks
1 INTRODUCTION . Two standard approaches to deep learning for sequential image data are 3D Convolutional Neural Networks ( 3D CNNs ) , e.g . the I3D model ( Carreira & Zisserman ( 2017 ) ) , and recurrent neural networks ( RNNs ) . Among the RNNs , the convolutional long short-term memory network ( C-LSTM ) ( Shi et al . ( 2015 ) ) is especially suited for sequences of images , since it learns both spatial and temporal dependencies simultaneously . Although both methods can capture aspects of the semantics pertaining to the temporal dependencies in a video clip , there is a fundamental difference in how 3D CNNs treat time compared to C-LSTMs . In 3D CNNs the time axis is treated just like a third spatial axis , whereas C-LSTMs only allow for information flow in the direction of increasing time , complying with the second law of thermodynamics . More concretely , C-LSTMs maintain a hidden state representing the current video frame when forward-traversing the input video sequence , and are able to model non-linear transitions in time . 3D CNNs instead convolve ( i.e . take a weighted average ) over both the temporal and spatial dimensions of the sequence . The question investigated in this paper is whether this difference has consequences for how the two models compute spatiotemporal features . We present a qualitative study of how 3D CNNs and CLSTMs respectively compute video features : what do they learn , and how do they differ from one another ? As outlined in Section 2 , there is a large body of work on evaluating video architectures on spatial and temporal correlations , but significantly fewer investigations of what parts of the data the networks have used and what semantics relating to the temporal dependencies they have extracted from it . Deep neural networks are known to be large computational models , whose inner workings are difficult to overview for a human . For video models , the number of parameters is typically significantly higher which makes their interpretability all the more pressing . We will evaluate these two types of models ( 3D CNN and C-LSTM ) on tasks where temporal order is crucial . The 20BN-Something-something-V2 dataset ( Mahdisoltani et al . ( 2018 ) , hereon Something-something ) will be central to our investigations ; it contains time-critical classes , agnostic to object appearance , such as move something from left to right or move something from right to left . We additionally evaluate the models on the smaller KTH actions dataset ( Schuldt et al . ( 2004 ) ) . Our contributions are listed as follows . • We present the first comparison of 3D CNNs and C-LSTMs in terms of temporal modeling abilities and highlight the essential difference between their assumptions concerning temporal dependencies in the data . • We extend the concept of meaningful perturbation introduced by Fong & Vedaldi ( 2017 ) to the temporal dimension to search for the most critical part of a sequence for classification . 2 RELATED WORK . The field of interpretability in the context of deep neural networks is still young but has made considerable progress for single image networks , owing to works such as Zeiler & Fergus ( 2013 ) , Simonyan et al . ( 2014 ) and Montavon et al . ( 2018 ) . One can distinguish between data centric and network centric methods for interpretability . Activity maximization , first coined by Erhan et al . ( 2009 ) , is network centric in the sense that specific units of the network are being studied . 
By casting the maximization of the activation of a certain unit as an optimization problem in terms of the input , one can compute the optimal input for that particular unit by gradient ascent . In data centric interpretability methods , the focus is instead on the input to the network , to reveal which patterns of the data that the network has discerned . Grad-CAM ( Selvaraju et al . ( 2017 ) ) and the meaningful perturbations explored in Fong & Vedaldi ( 2017 ) , which form the basis for our experiments , belong to the data centric category . These two methods are further explained in Section 3 . Layer-wise relevance propagation ( LRP ) ( Montavon et al . ( 2018 ) ) as well as Excitation backprop ( Zhang et al . ( 2016 ) ) are two other examples of data centric backpropagation techniques designed for interpretability , where the excitation backprop method follows from a simpler parameter setting of LRP . Building on excitation backprop by Zhang et al . ( 2016 ) , Adel Bargal et al . ( 2018 ) produce saliency maps for video RNNs without the use of gradients . Instead , products of forward weights and activations are normalized in order to be used as conditional probabilities , which are back-propagated . We have chosen to use Grad-CAM for our experiments , since it is one of the saliency methods in Adebayo et al . ( 2018 ) that passes the article ’ s sanity checks , as well as for its simplicity of implementation and wide usage . Limited works have been published with their focus on interpretability for video models ( Feichtenhofer et al . ( 2018 ) , Sigurdsson et al . ( 2017 ) , Huang et al . ( 2018 ) , Ghodrati et al . ( 2018 ) ) . Other works have treated it , but with less extensive experimentation ( Chattopadhyay et al . ( 2017 ) ) , while for example mainly presenting a new spatiotemporal architecture ( Dwibedi et al . ( 2018 ) , Zhou et al . ( 2018 ) ) . We build on the work by Ghodrati et al . ( 2018 ) , where the aim is to measure a network ’ s ability to model video time directly , instead of via the proxy task of action classification , which is most commonly seen . Three defining properties of video time are defined in the paper : temporal symmetry , temporal continuity and temporal causality , and are each presented accompanied by a measurable task . The third property is measured using the classification accuracy on the Somethingsomething dataset . An important contribution of ours with respect to this work is that we compare 3D CNNs and C-LSTMs , whereas Ghodrati et al . ( 2018 ) compare 3D CNNs to standard LSTMs . Their comparison can be argued as slightly unfair , as standard LSTM layers only take 1D input , and thus needs to collapse each image frame in the video to a vector , which removes some spatial dependencies in the pixel grid . Similar to our work , Dwibedi et al . ( 2018 ) investigate the temporal modeling capabilities of convolutional RNNs ( Convolutional Gated Recurrent Units ) trained on Something-something . The authors find that recurrent models perform well for the task , and a qualitative analysis of the learned hidden states of their trained model is presented . For each class of the dataset , they obtain the hidden states of the network corresponding to the frames of one clip and display its nearest neighbors from other clips ’ per-frame hidden state representations . These hidden states had encoded information about the relevant frame ordering for the classes . Sigurdsson et al . 
( 2017 ) examined video architectures and datasets on a number of qualitative attributes . Huang et al . ( 2018 ) investigate how much the actual motion in a clip contributes the classification performance of a video architecture . To measure this , they perform classification experiments varying the number of sub-sampled frames used for a clip to examine how much the accuracy changes as a result . In a search-based precursor to our temporal mask experiments , Satkin & Hebert ( 2010 ) crop sequences temporally to obtain the most discriminative sub-sequence for a certain class . The cropping corresponding to the highest classification confidence is selected as being the most discriminative sub-sequence . Feichtenhofer et al . ( 2018 ) present the first network centric interpretability work for video models . The authors investigate spatiotemporal features using activity maximization . Zhou et al . ( 2018 ) introduce the Temporal Relational Network ( TRN ) which learns temporal dependencies between frames through sampling the semantically relevant frames for a particular action class . The TRN module is put on top of a convolutional layer and consists of a fully connected network between the sampled frame features and the output . Similar to Dwibedi et al . ( 2018 ) , they perform temporal alignment of clips from the same class , using the frames considered most representative for the clip by the network . They verify the conclusion previously made by Xie et al . ( 2017 ) , that temporal order is crucial on Something-something and show that their architecture is sensitive to that . They also investigate which classes of Something-something show the strongest sensitivity to temporal order . 3 APPROACH . 3.1 TEMPORAL MASKS . The proposed temporal mask method aims to extend the interpretability of deep networks into the temporal dimension , utilizing meaningful perturbation of the input , as shown effective in the spatial dimension by Fong & Vedaldi ( 2017 ) . When adopting this approach , it is necessary to define what constitutes a meaningful perturbation . In the mentioned paper , a mask that blurs the input as little as possible is learned for a single image , while still maximizing the decrease in class score . Our proposed method applies this concept of a learned mask to the temporal dimension . The perturbation , in this setting , is a noise mask approximating either a ’ freeze ’ operation , which removes motion data through time , or a ’ reverse ’ operation that inverses the sequential order of the frames . This way , we aim to identify which frames are potentially most critical for the network ’ s classification decision . The perturbing temporal mask is defined as a vector of values on the interval [ 0,1 ] with the same length as the input sequence . For the ’ freeze ’ type mask , a value of 1 for a frame at index t duplicates the value from the previous frame at t − 1 onto the input sequence at t. The pseudocode for this procedure is given below . for i in maskIndices do perturbedInputi ← ( 1−maski ) ∗ originalInputi +maski ∗ perturbedInputi−1 end for For the ’ reverse ’ mask type , all indices of the mask m that are activated are first identified ( threshold 0.1 ) . These indices are then looped through to find all contiguous sections , which are treated as sub-masks , mi . For each sub-mask , the frames at the active indices in the sub-mask are reversed . 
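A plain-Python sketch of the two perturbation operators just described, assuming clips are arrays of shape (T, H, W, C) and masks are vectors in [0, 1]^T; the function and variable names are illustrative, not the authors' code.

import numpy as np

def freeze_perturb(clip, mask):
    # 'Freeze': where mask_t is close to 1, copy the previous (already perturbed) frame.
    out = clip.copy()
    for t in range(1, len(clip)):
        out[t] = (1.0 - mask[t]) * clip[t] + mask[t] * out[t - 1]
    return out

def reverse_perturb(clip, mask, threshold=0.1):
    # 'Reverse': flip the frame order inside each contiguous active sub-mask.
    out = clip.copy()
    active = mask > threshold
    t = 0
    while t < len(clip):
        if active[t]:
            start = t
            while t < len(clip) and active[t]:
                t += 1
            out[start:t] = clip[start:t][::-1]   # reverse this contiguous block
        else:
            t += 1
    return out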
For example ( binary for clarity ) , an input indexed as t1:16 perturbed with a mask with the value [ 0 , 0 , 0 , 1 , 1 , 1 , 1 , 1 , 0 , 0 , 0 , 0 , 0 , 1 , 1 , 0 ] would result in the sequence with frame indices [ 1 , 2 , 3 , 8 , 7 , 6 , 5 , 4 , 9 , 10 , 11 , 12 , 13 , 15 , 14 , 16 ] . In order to learn the mask , we define a loss function ( Eq . 1 ) to be minimized using gradient descent , similar to the approach in Fong & Vedaldi ( 2017 ) : L = λ1 ‖m‖_1^1 + λ2 ‖m‖_β^β + Fc , ( 1 ) where m is the mask expressed as a vector m ∈ [ 0 , 1 ]^t , ‖·‖_1^1 is the L1 norm , ‖·‖_β^β is the Total Variation ( TV ) norm , λ1 , λ2 are weighting factors , and Fc is the class score given by the model for the perturbed input . The L1 norm punishes long masks , in order to identify only the most important frames in the sequence . The TV norm penalizes masks that are not contiguous . This approach allows our method to automatically learn masks that identify one or several contiguous sequences in the input . The mask is initialized centered at the middle of the sequence . To keep the perturbed input class score differentiable w.r.t . the mask , the optimizer operates on a mask vector that has values in R. A sigmoid function is applied to the mask before using it for the perturbing operation in order to keep its values in the [ 0,1 ] range . The ADAM optimizer is then used to learn the mask through 300 iterations of gradient descent . After the mask has converged , it is thresholded for visualisation purposes .
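A PyTorch sketch of the mask optimization in Eq. (1), assuming a differentiable model that maps a batched clip to class scores and using a differentiable freeze-style perturbation; lam1, lam2, beta, and the learning rate are illustrative placeholders, and the 300-step Adam loop mirrors the description above. This is an illustration, not the authors' implementation.

import torch

def freeze_perturb_torch(clip, mask):
    # Differentiable 'freeze': where mask_t is close to 1, repeat the previous perturbed frame.
    frames = [clip[0]]
    for t in range(1, clip.shape[0]):
        frames.append((1.0 - mask[t]) * clip[t] + mask[t] * frames[-1])
    return torch.stack(frames)

def learn_temporal_mask(clip, model, target_class, lam1=0.01, lam2=0.02,
                        beta=3.0, steps=300, lr=0.05):
    # Optimize an unconstrained vector; a sigmoid keeps the effective mask in [0, 1].
    m_param = torch.zeros(clip.shape[0], requires_grad=True)
    opt = torch.optim.Adam([m_param], lr=lr)
    for _ in range(steps):
        m = torch.sigmoid(m_param)
        perturbed = freeze_perturb_torch(clip, m)
        class_score = model(perturbed.unsqueeze(0))[0, target_class]  # F_c in Eq. (1)
        l1 = m.abs().sum()                                 # ||m||_1: prefer short masks
        tv = (m[1:] - m[:-1]).abs().pow(beta).sum()        # TV term: prefer contiguous masks
        loss = lam1 * l1 + lam2 * tv + class_score         # Eq. (1)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.sigmoid(m_param).detach()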
The paper shows a way to compare what is learned by two very different networks trained for a video classification task. The two architectures are state-of-the-art methods, one relying on 3D CNNs (time treated as one more dimension), the other on conv-LSTMs (time treated sequentially, with hidden states passing information forward). The idea of the authors is (i) to provide saliency maps for each of them, and (ii) to create informative perturbations in order to measure their influence on the networks. The results indicate that these complex networks usually focus on meaningful features and, as one would expect, that the C-LSTM relies more on temporal coherence than the 3D CNN does.
SP:f99c39367808a1148a8b9559eef7d88cbccc8e6b
Interpreting video features: a comparison of 3D convolutional networks and convolutional LSTM networks
This paper presents a paradigm for generating saliency maps for video models, specifically, I3D (3D CNN) and C-LSTM. It extends Fong & Vedaldi (2017) to generate a temporal mask and introduces two types of "meaningful perturbations" for videos: freezing and reversing frames; they use Grad-CAM (with no modifications) for generating spatial masks. The problem is well-motivated, as saliency maps have been extensively studied for image classification models, but rarely for video classification. Quantitatively, they demonstrate that frame-reversal is meaningful for the Something-something dataset but less so for KTH, because those actions rely more on spatial information than on temporal information (e.g., running, clapping). Qualitatively, they show their spatial and temporal masks on both datasets and suggest a few insights:
SP:f99c39367808a1148a8b9559eef7d88cbccc8e6b
You CAN Teach an Old Dog New Tricks! On Training Knowledge Graph Embeddings
1 INTRODUCTION . Knowledge graph embedding ( KGE ) models learn algebraic representations , termed embeddings , of the entities and relations in a knowledge graph . They have been successfully applied to knowledge graph completion ( Nickel et al. , 2015 ) as well as in downstream tasks and applications such as recommender systems ( Wang et al. , 2017 ) or visual relationship detection ( Baier et al. , 2017 ) . A vast number of different KGE models for multi-relational link prediction have been proposed in the recent literature ; e.g. , RESCAL ( Nickel et al. , 2011 ) , TransE ( Bordes et al. , 2013 ) , DistMult , ComplEx ( Trouillon et al. , 2016 ) , ConvE ( Dettmers et al. , 2018 ) , TuckER ( Balazevic et al. , 2019 ) , RotatE ( Sun et al. , 2019a ) , SACN ( Shang et al. , 2019 ) , and many more . Model architectures generally differ in the way the entity and relation embeddings are combined to model the presence or absence of an edge ( more precisely , a subject-predicate-object triple ) in the knowledge graph ; they include factorization models ( e.g. , RESCAL , DistMult , ComplEx , TuckER ) , translational models ( TransE , RotatE ) , and more advanced models such as convolutional models ( ConvE ) . In many cases , the introduction of new models went along with new approaches for training these models— e.g. , new training types ( such as negative sampling or 1vsAll scoring ) , new loss functions ( such as ∗Contributed equally . { daniel , broscheit } @ informatik.uni-mannheim.de †rgemulla @ uni-mannheim.de pairwise margin ranking or binary cross entropy ) , new forms of regularization ( such as unweighted and weighted L2 ) , or the use of reciprocal relations ( Kazemi & Poole , 2018 ; Lacroix et al. , 2018 ) — and ablation studies were not always performed . Table 1 shows an overview of selected models and training techniques along with the publications that introduced them . The diversity in model training makes it difficult to compare performance results for various model architectures , especially when results are reproduced from prior studies that used a different experimental setup . Model hyperparameters are commonly tuned using grid search on a small grid involving hand-crafted parameter ranges or settings known to “ work well ” from prior studies . A grid suitable for one model may be suboptimal for another , however . Indeed , it has been observed that newer training strategies can considerably improve model performance ( Kadlec et al. , 2017 ; Lacroix et al. , 2018 ; Salehi et al. , 2018 ) . In this paper , we take a step back and aim to summarize and quantify empirically the impact of different model architectures and different training strategies on model performance . We performed an extensive set of experiments using popular model architectures and training strategies in a common experimental setup . In contrast to most prior work , we considered many training strategies as well as a large hyperparameter space , and we performed model selection using quasi-random search ( instead of grid search ) followed by Bayesian optimization . We found that this approach was able to find good ( and often superior to prior studies ) model configurations with relatively low effort . Our study complements and expands on the results of Kotnis & Nastase ( 2018 ) ( focus on negative sampling ) and Mohamed et al . ( 2019 ) ( focus on loss functions ) as well as similar studies in other areas , including language modeling ( Melis et al. 
, 2017 ) , generative adversarial networks ( Lucic et al. , 2018 ) , or sequence tagging ( Reimers & Gurevych , 2017 ) . We found that when trained appropriately , the performance of a particular model architecture can by far exceed the performance observed in older studies . For example , RESCAL ( Nickel et al. , 2011 ) , which constitutes one of the first KGE models but is rarely considered in newer work , showed very strong performance in our study : it was competitive to or outperformed more recent architectures such as ConvE ( Dettmers et al. , 2018 ) and TuckER ( Balazevic et al. , 2019 ) . More generally , we found that the relative performance differences between various model architectures often shrunk and sometimes even reversed when compared to prior results . This suggests that ( at least currently ) training strategies have a significant impact on model performance and may account for a substantial fraction of the progress made in recent years . We also found that suitable training strategies and hyperparameter settings vary significantly across models and datasets , indicating that a small search grid may bias results on model performance . Fortunately , as indicated above , large hyperparameter spaces can be ( and should be ) used with little additional training effort . To facilitate such efforts , we provide implementations of relevant training strategies , models , and evaluation methods as part of the open source LibKGE framework,1 which emphasizes reproducibility and extensibility . Our study focuses solely on pure KGE models , which do not exploit auxiliary information such as textual data or logical rules ( Wang et al. , 2017 ) . Since many of the studies on these non-pure models did not ( and , to be fair , could not ) use current training strategies and consequently underestimated the performance of pure KGE models , their results and conclusions need to be revisited . 1https : //github.com/uma-pi1/kge 2 KNOWLEDGE GRAPH EMBEDDINGS : MODELS , TRAINING , EVALUATION . The literature on KGE models is expanding rapidly . We review selected architectures , training methods , and evaluation protocols ; see Table 1 . The table examplarily indicates that new model architectures are sometimes introduced along with new training strategies ( marked bold ) . Reasonably recent survey articles about KGE models include Nickel et al . ( 2015 ) and Wang et al . ( 2017 ) . Multi-relational link prediction . KGE models are typically trained and evaluated in the context of multi-relational link prediction for knowledge graphs ( KG ) . Generally , given a set E of entities and a set R of relations , a knowledge graph K ⊆ E × R × E is a set of subject-predicate-object ( spo ) triples . The goal of multi-relational link prediction is to “ complete the KG ” , i.e. , to predict true but unobserved triples based on the information in K. Common approaches include rule-based methods ( Galarraga et al. , 2013 ; Meilicke et al. , 2019 ) , KGE methods ( Nickel et al. , 2011 ; Bordes et al. , 2013 ; Trouillon et al. , 2016 ; Dettmers et al. , 2018 ) , and hybrid methods ( Guo et al. , 2018 ) . Knowledge graph embeddings ( KGE ) . A KGE model associates with each entity i ∈ E and relation k ∈ R an embedding ei ∈ Rde and rk ∈ Rdr in a low-dimensional vector space , respectively ; here de , dr ∈ N+ are hyperparameters for the embedding size . Each particular model uses a scoring function s : E × R × E → R to associate a score s ( i , k , j ) with each potential triple ( i , k , j ) ∈ E × R × E . 
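As a concrete sketch of this setup (illustrative, not LibKGE code): entities and relations index rows of two embedding matrices, and a scoring function maps an (i, k, j) triple to a real number. A DistMult-style element-wise scorer, one of the models reviewed below, is used here as the example; the sizes are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
num_entities, num_relations, d_e = 1000, 50, 64

E = rng.normal(scale=0.1, size=(num_entities, d_e))    # entity embeddings e_i
R = rng.normal(scale=0.1, size=(num_relations, d_e))   # relation embeddings r_k (here d_r = d_e)

def score(i, k, j):
    # DistMult-style scorer: s(i, k, j) = sum_z e_i[z] * r_k[z] * e_j[z].
    return float(np.sum(E[i] * R[k] * E[j]))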
The higher the score of a triple , the more likely it is considered to be true by the model . The scoring function takes form s ( i , k , j ) = f ( ei , rk , ej ) , i.e. , depends on i , k , and j only through their respective embeddings . Here f , which represents the model architecture , may be either a fixed function or learned ( e.g. , f may be a parameterized function ) . Evaluation . The most common evaluation task for KGE methods is entity ranking , which is a form of question answering . The available data is partitioned into a set of training , validation , and test triples . Given a test triple ( i , k , j ) ( unseen during training ) , the entity ranking task is to determine the correct answer—i.e. , the missing entity j and i , resp.—to questions ( i , k , ? ) and ( ? , k , j ) . To do so , potential answer triples are first ranked by their score in descending order . All triples but ( i , k , j ) that occur in the training , validation , or test data are subsequently filtered out so that other triples known to be true do not affect the ranking . Finally , metrics such as the mean reciprocal rank ( MRR ) of the true answer or the average HITS @ k are computed ; see the appendix . KGE models . KGE model architectures differ in their scoring function . We can roughly classify models as decomposable or monolithic : the former only allow element-wise interactions between ( relation-specific ) subject and object embeddings , whereas the latter allow arbitrary interactions . More specifically , decomposable models use scoring functions of form s ( i , k , j ) = f ( ∑ z g ( [ h1 ( ei , er ) ◦ h2 ( er , ej ) ] z ) ) , where ◦ is any element-wise function ( e.g. , multiplication ) , h1 and h2 are functions that obtain relation-specific subject and object embeddings , resp. , and g and f are scalar functions ( e.g. , identity or sigmoid ) . The most popular models in this category are perhaps RESCAL ( Nickel et al. , 2011 ) , TransE ( Bordes et al. , 2013 ) , DistMult ( Yang et al. , 2015 ) , ComplEx ( Trouillon et al. , 2016 ) , and ConvE ( Dettmers et al. , 2018 ) . RESCAL ’ s scoring function is bilinear in the entity embeddings : it uses s ( i , k , j ) = eTi Rkej , where Rk ∈ Rde×de is a matrix formed from the entries of rk ∈ Rdr ( where dr = d2e ) . DistMult and ComplEx can be seen as constrained variants of RESCAL with smaller relation embeddings ( dr = de ) . TransE is a translation-based model and uses negative distance −‖ei + rk − ej‖p between ei + rk and ej as score , commonly using the L1 norm ( p = 1 ) or the L2 norm ( p = 2 ) . Finally , ConvE uses a 2D convolutional layer and a large fully connected layer to obtain relation-specific entity embeddings ( i.e. , in h1 above ) . Other recent examples for decomposable models include TuckER ( Balazevic et al. , 2019 ) , RotatE ( Sun et al. , 2019a ) , and SACN ( Shang et al. , 2019 ) . Decomposable models are generally fast to use : once the relation-specific embeddings are ( pre- ) computed , score computations are cheap . Monolithic models—such as ConvKB or KBGAT—do not decompose into relation-specific embeddings : they take form s ( i , k , j ) = f ( ei , rk , ej ) . Such models are more flexible , but they are also considerably more costly to train and use . It is currently unclear whether monolithic models can achieve comparable or better performance than decomposable models Sun et al . ( 2019b ) . Training type . 
There are three commonly used approaches to train KGE models , which differ mainly in the way negative examples are generated . First , training with negative sampling ( NegSamp ) ( Bordes et al. , 2013 ) obtains for each positive triple t = ( i , k , j ) from the training data a set of ( pseudo- ) negative triples obtained by randomly perturbing the subject , relation , or object position in t ( and optionally verifying that the so-obtained triples do not exist in the KG ) . An alternative approach ( Lacroix et al. , 2018 ) , which we term 1vsAll , is to omit sampling and take all triples that can be obtained by perturbing the subject and object positions as negative examples for t ( even if these tuples exist in the KG ) . 1vsAll is generally more expensive than NegSamp , but it is feasible ( and even surprisingly fast in practice ) if the number of entities is not excessively large . Finally , Dettmers et al . ( 2018 ) proposed a training type that we term KvsAll2 : this approach ( i ) constructs batches from non-empty rows ( i , k , ∗ ) or ( ∗ , k , j ) instead of from individual triples , and ( ii ) labels all such triples as either positive ( occurs in training data ) or negative ( otherwise ) . Loss functions . Several loss functions for training KGEs have been introduced so far . RESCAL originally used squared error between the score of each triple and its label ( positive or negative ) . TransE used pairwise margin ranking with hinge loss ( MR ) , where each pair consists of a positive triple and one of its negative triples ( only applicable to NegSamp and 1vsAll ) and the margin η is a hyperparameter . Trouillon et al . ( 2016 ) proposed to use binary cross entropy ( BCE ) loss : it applies a sigmoid to the score of each ( positive or negative ) triple and uses the cross entropy between the resulting probability and that triple ’ s label as loss . BCE is suitable for multi-class and multi-label classification . Finally , Kadlec et al . ( 2017 ) used cross entropy ( CE ) between the model distribution ( softmax distribution over scores ) and the data distribution ( labels of corresponding triples , normalized to sum to 1 ) . CE is more suitable for multi-class classification ( as in NegSamp and 1vsAll ) , but it has also been used in the multi-label setting ( KvsAll ) . Mohamed et al . ( 2019 ) found that the choice of loss function can have a significant impact on model performance , and that the best choice is data and model dependent . Our experimental study provides additional evidence for this finding . Reciprocal relations . Kazemi & Poole ( 2018 ) and Lacroix et al . ( 2018 ) introduced the technique of reciprocal relations into KGE training . Observe that during evaluation and also most training methods discussed above , the model is solely asked to score subjects ( for questions of form ( ? , k , j ) ) or objects ( for questions of form ( i , k , ? ) ) . The idea of reciprocal relations is to use separate scoring functions ssub and sobj for each of these tasks , resp . Both scoring functions share entity embeddings , but they do not share relation embeddings : each relation thus has two embeddings.3 The use of reciprocal relations may decrease the computational cost ( as in the case of ConvE ) , and it may also lead to better model performance Lacroix et al . ( 2018 ) ( e.g. , for relations in which one direction is easier to predict ) . 
On the downside , the use of reciprocal relations means that a model does not provide a single triple score s ( i , k , j ) anymore ; generally , ssub ( i , k , j ) 6= sobj ( i , k , j ) . Kazemi & Poole ( 2018 ) proposed to take the average of the two triple scores and explored the resulting models . Regularization . The most popular form of regularization in the literature is L2 regularization on the embedding vectors , either unweighted or weighted by the frequency of the corresponding entity or relation ( Yang et al. , 2015 ) . Lacroix et al . ( 2018 ) proposed to use L3 regularization . TransE normalized the embeddings to unit norm after each update . ConvE used dropout ( Srivastava et al. , 2014 ) in its hidden layers ( and only in those ) . In our study , we additionally consider L1 regularization and the use of dropout on the entity and/or relation embeddings . Hyperparameters . Many more hyperparameters have been used in prior work . This includes , for example , different methods to initialize model parameters , different optimizers , different optimizer parameters such as the learning rate or batch size , the number of negative examples for NegSamp , the regularization weights for entities and relations , and so on . To deal with such a large search space , most prior studies favor grid search over a small grid where most of these hyperparameters remain fixed . As discussed before , this approach may lead to bias in the results , however .
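Putting the training ingredients above together in a sketch (illustrative choices throughout, not the paper's implementation): NegSamp draws corrupted triples by perturbing the subject or object slot, BCE treats positives and negatives as binary labels on the raw scores, and reciprocal relations simply maintain a second relation embedding table used when the subject is queried. The score argument is assumed to be a triple scorer like the one sketched earlier in this section.

import numpy as np

def bce(score_value, label):
    # Binary cross entropy on a raw score: -log sigmoid(s) for a positive,
    # -log(1 - sigmoid(s)) for a negative, written stably via logaddexp.
    return np.logaddexp(0.0, -score_value) if label == 1 else np.logaddexp(0.0, score_value)

def negsamp_bce_loss(triple, score, num_entities, num_neg=10, rng=None):
    rng = rng or np.random.default_rng(0)
    i, k, j = triple
    loss = bce(score(i, k, j), 1)
    for _ in range(num_neg):
        if rng.random() < 0.5:
            neg = (int(rng.integers(num_entities)), k, j)   # corrupt the subject slot
        else:
            neg = (i, k, int(rng.integers(num_entities)))   # corrupt the object slot
        loss += bce(score(*neg), 0)
    return loss

# Reciprocal relations: keep a second relation table R_rec and answer (?, k, j)
# by scoring (j, k, ?) with R_rec in place of R, sharing the entity embeddings.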
The paper presents an experimental study of several KGE methods. It argues that papers often propose changes along several different dimensions, such as model, loss, training, and regularizer, at once, without sufficiently investigating the individual components' contributions. The experimental study considers two datasets (FB15k-237 and WNRR) and five different models (RESCAL, TransE, DistMult, ComplEx, ConvE). The model configurations were selected using a quasi-random hyperparameter search, followed by a short Bayesian optimization phase to fine-tune the parameters. The performance of the best models found during this hyperparameter search is compared to the first published results for the same models, as well as to a small selection of recent papers. To analyse the influence of a single hyperparameter, the best configuration found is compared to the best configuration that does not use that hyperparameter's chosen value.
SP:9459318b83cfeeaf7ba7efa3b8a188977d9e572a
You CAN Teach an Old Dog New Tricks! On Training Knowledge Graph Embeddings
1 INTRODUCTION . Knowledge graph embedding ( KGE ) models learn algebraic representations , termed embeddings , of the entities and relations in a knowledge graph . They have been successfully applied to knowledge graph completion ( Nickel et al. , 2015 ) as well as in downstream tasks and applications such as recommender systems ( Wang et al. , 2017 ) or visual relationship detection ( Baier et al. , 2017 ) . A vast number of different KGE models for multi-relational link prediction have been proposed in the recent literature ; e.g. , RESCAL ( Nickel et al. , 2011 ) , TransE ( Bordes et al. , 2013 ) , DistMult , ComplEx ( Trouillon et al. , 2016 ) , ConvE ( Dettmers et al. , 2018 ) , TuckER ( Balazevic et al. , 2019 ) , RotatE ( Sun et al. , 2019a ) , SACN ( Shang et al. , 2019 ) , and many more . Model architectures generally differ in the way the entity and relation embeddings are combined to model the presence or absence of an edge ( more precisely , a subject-predicate-object triple ) in the knowledge graph ; they include factorization models ( e.g. , RESCAL , DistMult , ComplEx , TuckER ) , translational models ( TransE , RotatE ) , and more advanced models such as convolutional models ( ConvE ) . In many cases , the introduction of new models went along with new approaches for training these models— e.g. , new training types ( such as negative sampling or 1vsAll scoring ) , new loss functions ( such as ∗Contributed equally . { daniel , broscheit } @ informatik.uni-mannheim.de †rgemulla @ uni-mannheim.de pairwise margin ranking or binary cross entropy ) , new forms of regularization ( such as unweighted and weighted L2 ) , or the use of reciprocal relations ( Kazemi & Poole , 2018 ; Lacroix et al. , 2018 ) — and ablation studies were not always performed . Table 1 shows an overview of selected models and training techniques along with the publications that introduced them . The diversity in model training makes it difficult to compare performance results for various model architectures , especially when results are reproduced from prior studies that used a different experimental setup . Model hyperparameters are commonly tuned using grid search on a small grid involving hand-crafted parameter ranges or settings known to “ work well ” from prior studies . A grid suitable for one model may be suboptimal for another , however . Indeed , it has been observed that newer training strategies can considerably improve model performance ( Kadlec et al. , 2017 ; Lacroix et al. , 2018 ; Salehi et al. , 2018 ) . In this paper , we take a step back and aim to summarize and quantify empirically the impact of different model architectures and different training strategies on model performance . We performed an extensive set of experiments using popular model architectures and training strategies in a common experimental setup . In contrast to most prior work , we considered many training strategies as well as a large hyperparameter space , and we performed model selection using quasi-random search ( instead of grid search ) followed by Bayesian optimization . We found that this approach was able to find good ( and often superior to prior studies ) model configurations with relatively low effort . Our study complements and expands on the results of Kotnis & Nastase ( 2018 ) ( focus on negative sampling ) and Mohamed et al . ( 2019 ) ( focus on loss functions ) as well as similar studies in other areas , including language modeling ( Melis et al. 
, 2017 ) , generative adversarial networks ( Lucic et al. , 2018 ) , or sequence tagging ( Reimers & Gurevych , 2017 ) . We found that when trained appropriately , the performance of a particular model architecture can by far exceed the performance observed in older studies . For example , RESCAL ( Nickel et al. , 2011 ) , which constitutes one of the first KGE models but is rarely considered in newer work , showed very strong performance in our study : it was competitive to or outperformed more recent architectures such as ConvE ( Dettmers et al. , 2018 ) and TuckER ( Balazevic et al. , 2019 ) . More generally , we found that the relative performance differences between various model architectures often shrunk and sometimes even reversed when compared to prior results . This suggests that ( at least currently ) training strategies have a significant impact on model performance and may account for a substantial fraction of the progress made in recent years . We also found that suitable training strategies and hyperparameter settings vary significantly across models and datasets , indicating that a small search grid may bias results on model performance . Fortunately , as indicated above , large hyperparameter spaces can be ( and should be ) used with little additional training effort . To facilitate such efforts , we provide implementations of relevant training strategies , models , and evaluation methods as part of the open source LibKGE framework,1 which emphasizes reproducibility and extensibility . Our study focuses solely on pure KGE models , which do not exploit auxiliary information such as textual data or logical rules ( Wang et al. , 2017 ) . Since many of the studies on these non-pure models did not ( and , to be fair , could not ) use current training strategies and consequently underestimated the performance of pure KGE models , their results and conclusions need to be revisited . 1https : //github.com/uma-pi1/kge 2 KNOWLEDGE GRAPH EMBEDDINGS : MODELS , TRAINING , EVALUATION . The literature on KGE models is expanding rapidly . We review selected architectures , training methods , and evaluation protocols ; see Table 1 . The table examplarily indicates that new model architectures are sometimes introduced along with new training strategies ( marked bold ) . Reasonably recent survey articles about KGE models include Nickel et al . ( 2015 ) and Wang et al . ( 2017 ) . Multi-relational link prediction . KGE models are typically trained and evaluated in the context of multi-relational link prediction for knowledge graphs ( KG ) . Generally , given a set E of entities and a set R of relations , a knowledge graph K ⊆ E × R × E is a set of subject-predicate-object ( spo ) triples . The goal of multi-relational link prediction is to “ complete the KG ” , i.e. , to predict true but unobserved triples based on the information in K. Common approaches include rule-based methods ( Galarraga et al. , 2013 ; Meilicke et al. , 2019 ) , KGE methods ( Nickel et al. , 2011 ; Bordes et al. , 2013 ; Trouillon et al. , 2016 ; Dettmers et al. , 2018 ) , and hybrid methods ( Guo et al. , 2018 ) . Knowledge graph embeddings ( KGE ) . A KGE model associates with each entity i ∈ E and relation k ∈ R an embedding ei ∈ Rde and rk ∈ Rdr in a low-dimensional vector space , respectively ; here de , dr ∈ N+ are hyperparameters for the embedding size . Each particular model uses a scoring function s : E × R × E → R to associate a score s ( i , k , j ) with each potential triple ( i , k , j ) ∈ E × R × E . 
The higher the score of a triple , the more likely it is considered to be true by the model . The scoring function takes the form s ( i , k , j ) = f ( e_i , r_k , e_j ) , i.e. , it depends on i , k , and j only through their respective embeddings . Here f , which represents the model architecture , may be either a fixed function or learned ( e.g. , f may be a parameterized function ) . Evaluation . The most common evaluation task for KGE methods is entity ranking , which is a form of question answering . The available data is partitioned into a set of training , validation , and test triples . Given a test triple ( i , k , j ) ( unseen during training ) , the entity ranking task is to determine the correct answer—i.e. , the missing entity j and i , resp.—to questions ( i , k , ? ) and ( ? , k , j ) . To do so , potential answer triples are first ranked by their score in descending order . All triples but ( i , k , j ) that occur in the training , validation , or test data are subsequently filtered out so that other triples known to be true do not affect the ranking . Finally , metrics such as the mean reciprocal rank ( MRR ) of the true answer or the average HITS @ k are computed ; see the appendix . KGE models . KGE model architectures differ in their scoring function . We can roughly classify models as decomposable or monolithic : the former only allow element-wise interactions between ( relation-specific ) subject and object embeddings , whereas the latter allow arbitrary interactions . More specifically , decomposable models use scoring functions of the form s ( i , k , j ) = f ( ∑_z g ( [ h_1 ( e_i , r_k ) ◦ h_2 ( r_k , e_j ) ]_z ) ) , where ◦ is any element-wise function ( e.g. , multiplication ) , h_1 and h_2 are functions that obtain relation-specific subject and object embeddings , resp. , and g and f are scalar functions ( e.g. , identity or sigmoid ) . The most popular models in this category are perhaps RESCAL ( Nickel et al. , 2011 ) , TransE ( Bordes et al. , 2013 ) , DistMult ( Yang et al. , 2015 ) , ComplEx ( Trouillon et al. , 2016 ) , and ConvE ( Dettmers et al. , 2018 ) . RESCAL 's scoring function is bilinear in the entity embeddings : it uses s ( i , k , j ) = e_i^T R_k e_j , where R_k ∈ R^{d_e × d_e} is a matrix formed from the entries of r_k ∈ R^{d_r} ( where d_r = d_e^2 ) . DistMult and ComplEx can be seen as constrained variants of RESCAL with smaller relation embeddings ( d_r = d_e ) . TransE is a translation-based model and uses the negative distance −‖e_i + r_k − e_j‖_p between e_i + r_k and e_j as score , commonly using the L1 norm ( p = 1 ) or the L2 norm ( p = 2 ) . Finally , ConvE uses a 2D convolutional layer and a large fully connected layer to obtain relation-specific entity embeddings ( i.e. , in h_1 above ) . Other recent examples for decomposable models include TuckER ( Balazevic et al. , 2019 ) , RotatE ( Sun et al. , 2019a ) , and SACN ( Shang et al. , 2019 ) . Decomposable models are generally fast to use : once the relation-specific embeddings are ( pre- ) computed , score computations are cheap . Monolithic models—such as ConvKB or KBGAT—do not decompose into relation-specific embeddings : they take the form s ( i , k , j ) = f ( e_i , r_k , e_j ) . Such models are more flexible , but they are also considerably more costly to train and use . It is currently unclear whether monolithic models can achieve comparable or better performance than decomposable models ( Sun et al. , 2019b ) . Training type .
There are three commonly used approaches to train KGE models , which differ mainly in the way negative examples are generated . First , training with negative sampling ( NegSamp ) ( Bordes et al. , 2013 ) obtains for each positive triple t = ( i , k , j ) from the training data a set of ( pseudo- ) negative triples obtained by randomly perturbing the subject , relation , or object position in t ( and optionally verifying that the so-obtained triples do not exist in the KG ) . An alternative approach ( Lacroix et al. , 2018 ) , which we term 1vsAll , is to omit sampling and take all triples that can be obtained by perturbing the subject and object positions as negative examples for t ( even if these tuples exist in the KG ) . 1vsAll is generally more expensive than NegSamp , but it is feasible ( and even surprisingly fast in practice ) if the number of entities is not excessively large . Finally , Dettmers et al . ( 2018 ) proposed a training type that we term KvsAll2 : this approach ( i ) constructs batches from non-empty rows ( i , k , ∗ ) or ( ∗ , k , j ) instead of from individual triples , and ( ii ) labels all such triples as either positive ( occurs in training data ) or negative ( otherwise ) . Loss functions . Several loss functions for training KGEs have been introduced so far . RESCAL originally used squared error between the score of each triple and its label ( positive or negative ) . TransE used pairwise margin ranking with hinge loss ( MR ) , where each pair consists of a positive triple and one of its negative triples ( only applicable to NegSamp and 1vsAll ) and the margin η is a hyperparameter . Trouillon et al . ( 2016 ) proposed to use binary cross entropy ( BCE ) loss : it applies a sigmoid to the score of each ( positive or negative ) triple and uses the cross entropy between the resulting probability and that triple ’ s label as loss . BCE is suitable for multi-class and multi-label classification . Finally , Kadlec et al . ( 2017 ) used cross entropy ( CE ) between the model distribution ( softmax distribution over scores ) and the data distribution ( labels of corresponding triples , normalized to sum to 1 ) . CE is more suitable for multi-class classification ( as in NegSamp and 1vsAll ) , but it has also been used in the multi-label setting ( KvsAll ) . Mohamed et al . ( 2019 ) found that the choice of loss function can have a significant impact on model performance , and that the best choice is data and model dependent . Our experimental study provides additional evidence for this finding . Reciprocal relations . Kazemi & Poole ( 2018 ) and Lacroix et al . ( 2018 ) introduced the technique of reciprocal relations into KGE training . Observe that during evaluation and also most training methods discussed above , the model is solely asked to score subjects ( for questions of form ( ? , k , j ) ) or objects ( for questions of form ( i , k , ? ) ) . The idea of reciprocal relations is to use separate scoring functions ssub and sobj for each of these tasks , resp . Both scoring functions share entity embeddings , but they do not share relation embeddings : each relation thus has two embeddings.3 The use of reciprocal relations may decrease the computational cost ( as in the case of ConvE ) , and it may also lead to better model performance Lacroix et al . ( 2018 ) ( e.g. , for relations in which one direction is easier to predict ) . 
On the downside , the use of reciprocal relations means that a model does not provide a single triple score s ( i , k , j ) anymore ; generally , ssub ( i , k , j ) 6= sobj ( i , k , j ) . Kazemi & Poole ( 2018 ) proposed to take the average of the two triple scores and explored the resulting models . Regularization . The most popular form of regularization in the literature is L2 regularization on the embedding vectors , either unweighted or weighted by the frequency of the corresponding entity or relation ( Yang et al. , 2015 ) . Lacroix et al . ( 2018 ) proposed to use L3 regularization . TransE normalized the embeddings to unit norm after each update . ConvE used dropout ( Srivastava et al. , 2014 ) in its hidden layers ( and only in those ) . In our study , we additionally consider L1 regularization and the use of dropout on the entity and/or relation embeddings . Hyperparameters . Many more hyperparameters have been used in prior work . This includes , for example , different methods to initialize model parameters , different optimizers , different optimizer parameters such as the learning rate or batch size , the number of negative examples for NegSamp , the regularization weights for entities and relations , and so on . To deal with such a large search space , most prior studies favor grid search over a small grid where most of these hyperparameters remain fixed . As discussed before , this approach may lead to bias in the results , however .
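To make the decomposable scoring functions discussed above concrete, the following minimal NumPy sketch computes RESCAL, DistMult, and TransE scores for a toy knowledge graph; the embedding sizes, random initialization, and entity indices are illustrative assumptions, not the configurations used in this study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_entities, n_relations, d_e = 5, 3, 4

E = rng.normal(size=(n_entities, d_e))             # entity embeddings e_i
R_full = rng.normal(size=(n_relations, d_e, d_e))  # RESCAL: full relation matrices R_k (d_r = d_e^2)
R_vec = rng.normal(size=(n_relations, d_e))        # DistMult / TransE: relation vectors r_k (d_r = d_e)

def score_rescal(i, k, j):
    # bilinear form e_i^T R_k e_j
    return float(E[i] @ R_full[k] @ E[j])

def score_distmult(i, k, j):
    # RESCAL restricted to a diagonal relation matrix (element-wise product)
    return float(np.sum(E[i] * R_vec[k] * E[j]))

def score_transe(i, k, j, p=1):
    # negative distance between e_i + r_k and e_j, with the L1 (p=1) or L2 (p=2) norm
    return -float(np.linalg.norm(E[i] + R_vec[k] - E[j], ord=p))

print(score_rescal(0, 1, 2), score_distmult(0, 1, 2), score_transe(0, 1, 2))
```

DistMult appears here simply as RESCAL with the relation matrix constrained to be diagonal, which is why the two functions differ only in how the relation parameters enter the product.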
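Similarly, the sketch below illustrates NegSamp-style training signals together with the loss functions described above (BCE, CE, and pairwise margin ranking). It is a simplified sketch: the sampling scheme omits details such as optionally filtering sampled triples against the KG, and the score values are assumed to come from one of the scoring functions above.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_negatives(triple, n_entities, num_neg):
    # NegSamp: perturb the subject or object position of a positive triple with random entities
    i, k, j = triple
    negatives = []
    for _ in range(num_neg):
        if rng.random() < 0.5:
            negatives.append((int(rng.integers(n_entities)), k, j))
        else:
            negatives.append((i, k, int(rng.integers(n_entities))))
    return negatives

def bce_loss(pos_score, neg_scores):
    # binary cross entropy: the positive triple is labelled 1, sampled negatives 0
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    neg_scores = np.asarray(neg_scores, dtype=float)
    return float(-np.log(sigmoid(pos_score)) - np.sum(np.log(1.0 - sigmoid(neg_scores))))

def ce_loss(pos_score, neg_scores):
    # cross entropy between the softmax over {positive, negatives} and the one-hot label distribution
    scores = np.concatenate(([pos_score], neg_scores)).astype(float)
    m = scores.max()
    log_norm = m + np.log(np.sum(np.exp(scores - m)))
    return float(-(scores[0] - log_norm))

def margin_ranking_loss(pos_score, neg_scores, eta=1.0):
    # pairwise hinge loss: the positive should outscore each negative by at least margin eta
    return float(np.sum(np.maximum(0.0, eta - pos_score + np.asarray(neg_scores, dtype=float))))

print(bce_loss(2.0, [-1.0, 0.3]), ce_loss(2.0, [-1.0, 0.3]), margin_ranking_loss(2.0, [-1.0, 0.3]))
```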
The paper conducts a thorough analysis of existing models for constructing knowledge graph embeddings. It focuses on attempting to remove confounding aspects of model features and training regime, in order to better assess the merits of KGE models. The paper describes the reimplementation of five different KGE models, re-trained with a common training framework which conducts hyperparameter exploration. The results show surprising insights, e.g., demonstrating that a system from 2011, despite being the earliest of the KGE models analyzed, demonstrates competitive results over a more recent (2017) published model.
SP:9459318b83cfeeaf7ba7efa3b8a188977d9e572a
Unsupervised Intuitive Physics from Past Experiences
We consider the problem of learning models of intuitive physics from raw , unlabelled visual input . Differently from prior work , in addition to learning general physical principles , we are also interested in learning “ on the fly ” physical properties specific to new environments , based on a small number of environment-specific experiences . We do all this in an unsupervised manner , using a meta-learning formulation where the goal is to predict videos containing demonstrations of physical phenomena , such as objects moving and colliding with a complex background . We introduce the idea of summarizing past experiences in a very compact manner , in our case using dynamic images , and show that this can be used to solve the problem well and efficiently . Empirically , we show , via extensive experiments and ablation studies , that our model learns to perform physical predictions that generalize well in time and space , as well as to a variable number of interacting physical objects . 1 INTRODUCTION . Many animals possess an intuitive understanding of the physical world . They use this understanding to accurately and rapidly predict events from sparse sensory inputs . In addition to general physical principles , many animals also learn specific models of new environments as they experience them over time . For example , they can explore an environment to determine which parts of it can be navigated safely and remember this knowledge for later reuse . Authors have looked at equipping artificial intelligences ( AIs ) with analogous capabilities , but focusing mostly on performing predictions from instantaneous observations of an environment , such as a few frames in a video . However , such predictions can be successful only if observations are combined with sufficient prior knowledge about the environment . For example , consider predicting the motion of a bouncing ball . Unless key parameters such as the ball ’ s elasticity are known a priori , it is impossible to predict the ball ’ s trajectory accurately . However , after observing at least one bounce , it is possible to infer some of the parameters and eventually perform much better predictions . In this paper , we are interested in learning intuitive physics in an entirely unsupervised manner , by passively watching videos . We consider situations in which objects interact with scenarios that can only be partially inferred from their appearance , but that also contain objects whose parameters can not be confidently predicted from appearance alone ( fig . 1 ) . Then , we consider learning a system that can observe a few physical experiments to infer such parameters , and use this knowledge to perform better predictions in the future . Our model has three goals . First , it must learn without the use of any external or ad-hoc supervision . We achieve this by training our model from raw videos , using a video prediction error as a loss . Second , our model must be able to extract information about a new scenario by observing a few experiments , which we formulate as meta-learning . We also propose a simple representation of the experiments based on the concept of “ dynamic image ” that allows to process long experiments more efficiently than using a conventional recurrent network . Third , our model must learn a good representation of physics without access to any explicit or external supervision . Instead , we propose three tests to support this hypothesis . 
( i ) We show that the model can predict far in the future , which is a proxy for temporal invariance . ( ii ) We further show that the model can extend to scenarios that are geometrically much larger than the ones used for training , which is a proxy for spatial invariance . ( iii ) Finally , we show that the model can generalize to several moving objects , which is a proxy for locality . Locality , spatial invariance , and temporal invariance are of course three key properties of physical laws , and thus we should expect any good intuitive model of physics to possess them . In order to support these claims , we conduct extensive experiments in simulated scenarios , including testing the ability of the model to cope with non-trivial visual variations of the inputs . While the data is simpler than a real-world application , we nevertheless make substantial progress compared to previous work , as discussed in section 2 . We do so by learning from passive , raw video data a good model of dynamics and collisions that generalizes well spatially , temporally , and to a variable number of objects . The scalability of our approach , via the use of the dynamic image , is also unique . Finally , we investigate the problem of learning the parameters of new scenarios on the fly via experiences and we propose an effective solution to do so . 2 RELATED WORK . A natural way to represent physics is to manually encode every object parameter and physical property ( e.g. , mass , velocity , positions ) and use supervision to make predictions . This has been widely used to represent and propagate physics ( Wu et al. , 2015 ; 2016 ; Battaglia et al. , 2016 ; Chang et al. , 2017 ; Mrowca et al. , 2018 ; Sanchez-Gonzalez et al. , 2018 ) . While models like Wu et al . ( 2015 ; 2016 ) also estimate environment parameters , these works rely on a physics engine that assumes strong priors about the scenario , whereas our approach does not require such a constraint . Inspired by the recent successes of Convolutional Neural Networks ( CNNs ( Krizhevsky et al. , 2012 ) ) and their application to implicit representation of dynamics ( Ondruska & Posner , 2016 ; Oh et al. , 2015 ; Chiappa et al. , 2017 ; Bhattacharyya et al. , 2018 ) , researchers ( Watters et al. , 2017 ; Ehrhardt et al. , 2019 ; Kansky et al. , 2017 ) have tried to base their approaches on visual inputs . They learn from several frames of a scene to regress the next physical state of a system . In general these approaches learn an implicit representation of physics ( Ehrhardt et al. , 2019 ; Watters et al. , 2017 ) as a tensor state from recurrent deep networks . Such models are mostly supervised using ground-truth information about key physical parameters ( e.g. , positions , velocities , density ) during training . While these approaches require an expensive annotation of data , other works have tried to learn from unsupervised data as well . Researchers have successfully learned unsupervised models either through active manipulation ( Agrawal et al. , 2016 ; Denil et al. , 2016 ; Finn et al. , 2016a ) , using the laws of physics ( Stewart & Ermon , 2017 ) , using dynamic clues and invariances ( Greff et al. , 2017 ; van Steenkiste et al. , 2018 ) or features extracted from unsupervised methods ( Finn et al. , 2016b ; Ehrhardt et al. , 2018 ) . Fragkiadaki et al . ( 2016 ) also used an unsupervised system like ours ; however , they assumed the rendering system and the number of objects to be known , which we do not , and these assumptions prevented their internal representation from failing over time .
Perhaps most related to our approach is the work of Wang et al . ( 2018 ) , where the model is learnt using future image prediction on a simple synthetic task and then transferred to real world scenarios . They also demonstrate long-term dynamic predictions ; however , they did not generalize to different backgrounds or numbers of balls . In other works , models are taught to answer simple qualitative questions about a physical setup , such as : the stability of stacks of objects ( Battaglia et al. , 2013 ; Lerer et al. , 2016 ; Li et al. , 2017 ; Groth et al. , 2018 ) , the likelihood of a scenario ( Riochet et al. , 2018 ) , the forces acting behind a scene ( Wu et al. , 2017 ; Mottaghi et al. , 2016 ) or properties of objects through manipulation ( Agrawal et al. , 2016 ; Denil et al. , 2016 ) . Other papers compromise between qualitative and quantitative predictions and focus on plausibility ( Tompson et al. , 2016 ; Ladický et al. , 2015 ; Monszpart et al. , 2016 ) . Finally , similar problems can be found in online learning settings ( Nagabandi et al. , 2019a ; b ) . These works also use meta-learning but , differently from ours , allow their models to learn and adapt at test time from feedback . 3 METHOD . We describe our model , illustrated in fig . 2 , starting from the input and output data . A scenario S is a physical environment that supports moving objects interacting with it . In this paper , we take as scenarios 2.1D environments containing obstacles and we consider rolling balls as moving objects . Hence , interactions are in the form of bounces . Formally , a scenario is defined over a lattice Ω = { 0 , . . . , H − 1 } × { 0 , . . . , W − 1 } and is specified by a list of obstacles S = { ( Oj , bj ) , j = 1 , . . . , K } . Here Oj ⊂ Ω is the shape of an obstacle and bj ∈ { B , A , U } is a flag that tells whether the ball bounces against it ( B ) , passes above it ( A ) , or passes under it ( U ) . Obstacles do not overlap . A run is a tuple R = ( S , y ) associating a scenario S with a trajectory y = ( yt ∈ Ω , t = 0 , . . . , T − 1 ) for the ball ( this is trivially extended to multiple balls ) . Scenarios and runs are sensed visually . The symbol I ( S ) : Ω → R^{3×H×W} denotes the image generated by observing a scenario with no ball and It ( R ) = I ( S , yt ) is the image generated by observing the ball at time t in a run . The symbol I ( R ) = ( It ( R ) , t = 0 , . . . , T − 1 ) denotes the collection of all frames in a run , which can be thought of as a video . We are interested in learning intuitive physics with no explicit supervision on the object trajectories , nor an explicit encoding of the laws of mechanics that govern the motion of the objects and their interactions and collisions with the environment . Hence , we cast this problem as predicting the video I ( R ) of the trajectory of the objects given only a few initial frames I0 : T0 ( R ) = ( I0 ( R ) , . . . , IT0−1 ( R ) ) , where T0 ≪ T . We are not the first to consider a similar learning problem , although most prior works do require some form of external supervision , which we do not use . Here , however , we consider an additional key challenge : the images I0 : T0 ( R ) do not contain sufficient information to successfully predict the long-term motion of the objects . This is because these few frames tell us nothing about the nature of the obstacles in the scenario .
In fact , under the assumption that obstacles of type B , A and U have similar or even identical appearance , it is not possible to predict whether the ball will bounce , move above , or move under any such obstacle . This situation is representative of an agent that needs to operate in a new complex environment and must learn more about it , from past experiences , before it can do so reliably . Thus , we consider a modification of the setup described above in which the model can experience each new scenario for a while , by observing the motion of the ball , before making its own predictions . Formally , an experience is a collection of N runs E = ( R1 , . . . , RN ) all relative to the same scenario S , each with a different trajectory y1 , . . . , yN . By observing such examples , the model must determine the nature of the obstacles and then use this information to correctly predict the motion of the ball in the future . We cast this as the problem of learning a mapping Ω : ( I0 : T0 ( R ) , I ( E ) ) ↦ I ( R ) , ( 1 ) where I ( E ) = ( I ( R1 ) , . . . , I ( RN ) ) are the videos corresponding to the runs in the experience . We will call R the prediction run to distinguish it from the experience runs E .
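As a rough illustration of how the experience runs E might be compacted before being passed to the predictor in Eq. (1), the sketch below collapses each run's video into a single time-weighted image and averages over runs. The simple linear weighting only stands in for the dynamic-image construction referenced in the abstract, and all shapes, names, and the choice to average over runs are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

def dynamic_image(frames):
    # frames: array of shape (T, H, W, C); later frames receive larger weights so that
    # a single image retains motion information (a stand-in for approximate rank pooling)
    T = frames.shape[0]
    weights = 2.0 * np.arange(1, T + 1) - T - 1   # monotonically increasing, zero-mean
    return np.tensordot(weights, frames, axes=(0, 0)) / T

def summarize_experience(experience_videos):
    # experience E = (R_1, ..., R_N): one compact image per run, averaged over the N runs
    return np.mean([dynamic_image(v) for v in experience_videos], axis=0)

# toy usage: N = 3 experience runs of T = 20 frames at 32x32 resolution, 3 channels
rng = np.random.default_rng(0)
experience = [rng.random((20, 32, 32, 3)) for _ in range(3)]
summary = summarize_experience(experience)
print(summary.shape)   # (32, 32, 3): a fixed-size conditioning input for the predictor
```

The appeal of such a summary is that its size does not grow with the length or number of experience runs, which is what makes processing long experiences cheaper than feeding every frame through a recurrent network.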
The paper proposes an architecture for few-shot video prediction in which a number of videos are summarized through global pooling operations and passed into a video predictor that learns to leverage them for adaptation, similar in spirit to the RNN-based meta-learning approaches such as Santoro’16, Duan’16. Due to global image and feature pooling operations, the proposed approach is computationally efficient. Proof-of-concept experiments are presented in a simple simulated physical prediction setting. It is claimed that the proposed model achieves generalization to longer sequences and larger board sizes, as well as a larger number of objects.
SP:e99ba3bc2e7d00a6d419b179cb76b5290b878f24
Unsupervised Intuitive Physics from Past Experiences
The paper proposes a method to predict the future trajectory of a ball (or a set of balls) given the first few frames of the trajectory and a set of experience runs in the same environment. The model first learns to convert the set of images to a set of corresponding heatmaps that encode the location of the ball. Then, a recurrent network generates the future states given the first few locations and the output of a network that learns abstract information from the experience runs. The network is trained using reconstruction and perceptual similarity loss.
SP:e99ba3bc2e7d00a6d419b179cb76b5290b878f24
Pixel Co-Occurence Based Loss Metrics for Super Resolution Texture Recovery
Single Image Super Resolution ( SISR ) has significantly improved with Convolutional Neural Networks ( CNNs ) and Generative Adversarial Networks ( GANs ) , often achieving order of magnitude better pixelwise accuracies ( distortions ) and state-of-the-art perceptual accuracy . Due to the stochastic nature of GAN reconstruction and the ill-posed nature of the problem , perceptual accuracy tends to correlate inversely with pixelwise accuracy which is especially detrimental to SISR , where preservation of original content is an objective . GAN stochastics can be guided by intermediate loss functions such as the VGG featurewise loss , but these features are typically derived from biased pre-trained networks . Similarly , measurements of perceptual quality such as the human Mean Opinion Score ( MOS ) and no-reference measures have issues with pre-trained bias . The spatial relationships between pixel values can be measured without bias using the Grey Level Co-occurence Matrix ( GLCM ) , which was found to match the cardinality and comparative value of the MOS while reducing subjectivity and automating the analytical process . In this work , the GLCM is also directly used as a loss function to guide the generation of perceptually accurate images based on spatial collocation of pixel values . We compare GLCM based loss against scenarios where ( 1 ) no intermediate guiding loss function , and ( 2 ) the VGG feature function are used . Experimental validation is carried on X-ray images of rock samples , characterised by significant number of high frequency texture features . We find GLCM-based loss to result in images with higher pixelwise accuracy and better perceptual scores . 1 INTRODUCTION . A super resolution ( SR ) image is generated from a single low resolution image ( LR ) ( with or without variable blur and noise ) such that the result closely matches the true high resolution counterpart ( whether it exists or not ) ( Park et al. , 2003 ) . There thus exists a vast number of possible solutions ( Dong et al. , 2014 ) for any given LR image , and by an extension , there are many techniques to recover SR details with varying degrees of accuracy . These methods range from the simple and blurry interpolation methods ( bicubic , linear , etc . ) that can not recover contextual features to more complex model-based methods that utilise prior knowledge of certain domain characteristics to generate sharper details ( Dai et al. , 2009 ; Jian Sun et al. , 2008 ; Yan et al. , 2015 ) . The contextual dependence of SR images is addressed with Example-based approaches including Markov Random Field ( MRF ) ( Freeman et al. , 2002 ) , Neighbour Embedding ( Hong Chang et al. , 2004 ) , Sparse Coding ( Yang et al. , 2010 ) and Random Forest methods ( Schulter et al. , 2015 ) . These can be generalised ( Dong et al. , 2014 ) into Super Resolution Convolutional Neural Networks ( SRCNN ) for photographic images ( Dong et al. , 2014 ; Wang et al. , 2016 ; Lim et al. , 2017 ; Yu et al. , 2018 ; Kim et al. , 2015 ; Ledig et al. , 2017 ) , medical images ( Umehara et al. , 2018 ; You et al. , 2018 ) , and digital rock images ( Wang et al. , 2019b ; 2018 ; 2019a ) . The original SRCNN network utilised as a backbone 3-5 convolutional layers ( Dong et al. , 2014 ) that sharpened a bicubically upsampled image . More recent models contain dedicated upsampling layers ( Dong et al. , 2016 ; Shi et al. , 2016 ; Odena et al. , 2016 ) , skip connections to improve gradient scaling ( Kim et al. , 2015 ; Ledig et al. 
, 2017 ) , removal of batch normalisation ( Lim et al. , 2017 ) , and use of the sharper L1 loss function ( Yu et al. , 2018 ; Zhu et al. , 2017 ) . These changes have all contributed gradually to improving the pixelwise accuracy of the super resolution methods . Despite the high pixelwise accuracy achieved by SRCNN networks1 , the resulting super resolved images are often perceptually unsatisfying , and easily identifiable by a human observer as `` blurry '' . This is because , while the SRCNN can accurately recover features spanning several pixels such as larger scale edges , texture and high frequency features are lost as the network attempts to maximise the pixelwise accuracy over a wide range of possible HR counterparts . The local minima problem thus manifests itself clearly in the ill-posed problem of Super Resolution . Perceptual results can be improved by training the network with a hybrid loss function that combines a pixelwise loss ( L1 or L2 ) with a feature-wise loss that is the L2 loss of features extracted from some intermediate convolutional layer of a pre-trained model ( Johnson et al. , 2016 ) . The most effective method of perceptual texture generation thus far has been the use of Generative Adversarial Networks ( GANs ) ( Goodfellow et al. , 2014 ) , with SRGAN generated images from low resolution photographs scoring highly on human surveys of image quality ( Ledig et al. , 2017 ; Dosovitskiy & Brox , 2016 ) as they can recover high frequency textures that are perceived as realistic , both for photographic features ( Ledig et al. , 2017 ) and textural quality in X-ray images ( You et al. , 2018 ) . However , SRGAN generated images are stochastic in the way that high frequency features are regenerated , with a tendency to cause pixel mismatch , which is especially exacerbated by the use of the VGG feature loss ( You et al. , 2018 ; Ledig et al. , 2017 ; Sajjadi et al. , 2017 ) , which tends to result in further distortion , leading to higher pixelwise loss . For natural images this loss in pixelwise accuracy may be secondary , but it plays an important role in applications where texture carries actual information , such as in X-ray images . This trade-off between the pixelwise accuracy ( distortion ) and the perceptual accuracy ( Sajjadi et al. , 2017 ; Mechrez et al. , 2018 ; Vasu et al. , 2018 ) is a consistently emergent limitation in SRGAN performance , whereby a high pixelwise accuracy causes over-smoothing , while a high perceptual accuracy causes pixel mismatch and distortion in some features . While both pixelwise accuracy and perceptual accuracy are important , SISR aims to preserve as much content/characteristics from the original image as possible , while a GAN essentially `` makes up '' content that is perceptually satisfying at the expense of pixelwise measures . While pixel-wise distortions can be quantitatively measured , evaluating perceptual performance typically requires human subjective evaluations such as Mean Opinion Scores ( MOS ) , or relies on the proxy score produced by `` pre-trained '' models ( which could introduce their own biases ) ( Ma et al. , 2016 ; Salimans et al. , 2016 ) . SRGANs typically obtain superior scores in terms of subjective metrics ( Ledig et al. , 2017 ; Zhu et al. , 2017 ; Vasu et al. , 2018 ) ; however , an objective study of the differences in the high frequency SR textures compared to the original HR images has not yet been carried out .
This is especially of interest in imaging areas requiring expert judgements , such as in radiographic images . These are most often benchmarked with a combination of pixelwise metrics and opinion score surveys despite the smaller sample sizes and logistical challenge . The overall aim of this study is the introduction of the Grey Level Co-occurrence Matrix ( GLCM ) method ( Haralick et al. , 1973 ) as both an auxiliary loss function , and as an addition to the PSNR metric for evaluating perceptual and textural accuracy of super resolution techniques ( the GLCM better correlates with the subjective scores of human evaluations ) . The GLCM has been successfully used in the characterisation of texture in medical images ( Makaju et al. , 2018 ; Madero Orozco et al. , 2015 ; Sivaramakrishna et al. , 2002 ; Pratiwi et al. , 2015 ; Liao et al. , 2011 ; Yang et al. , 2012 ) and CT rock images ( Singh et al. , 2019 ; Becker et al. , 2016 ; Jardine et al. , 2018 ) . In essence , the GLCM transforms an image into a representation of the spatial relationship between the grey colours present in the image . The GLCM is not a pixelwise measurement , and so can be used to evaluate texture recovery that may not be a pixelwise match . The GLCM is particularly well suited for automatic perceptual/textural in-domain comparisons , as it does not require time-consuming expert MOS evaluations , and does not pose inherent data-bias when scoring with auxiliary models . We use the DeepRock-SR ( Wang et al. , 2019a ) dataset to train and validate the results of an SRGAN model , which in this study is a modified Enhanced Deep Super Resolution ( EDSR ) network ( Lim et al. , 2017 ) coupled to a GAN discriminator ( EDSRGAN ) . The resulting performance in generating SR images is quantitatively analysed , both based on the traditional pixelwise approach , as well as using the GLCM spatial texture accuracy method introduced in this work . GLCM texture analyses of the SRGAN and SRCNN results indicate quantitatively that SRGAN produces images with a GLCM error that is an order of magnitude lower ( more texturally similar ) than that of SRCNN images , which tend to have a higher PSNR , but also a higher GLCM error and lower MOS scores . ( 1Henceforth , unless stated otherwise , all generative convolutional networks and the corresponding generated images will be referred to as `` SRCNN '' . ) Overall , the use of the GLCM offers a fast and more agnostic evaluation metric when compared to carrying out MOS evaluations , and is easier to reproduce and analyse due to its algorithmic data-driven nature . The GLCM can also be used as an auxiliary loss function to guide the generation of spatially accurate texture , resulting in further reductions in the texture error while also improving the pixelwise accuracy . 2 METHODS . 2.1 GREY LEVEL CO-OCCURRENCE MATRIX FOR TEXTURE ANALYSIS . The Grey Level Co-occurrence Matrix characterises the texture of an image by calculating the number of pairs of pixels that could be spatially related within some pre-defined regions of interest . For a given image with N grey levels , an N by N co-occurrence matrix P is constructed , in which the ( i , j ) position denotes the number of times a grey level i is spatially related to a corresponding grey level j . An example of a spatial relationship setting in an image of size ( Nx , Ny ) could be all locations of ( x , y ) and ( x + 1 , y ) ( i.e . their horizontal adjacency ) .
In this case , the value at location ( i , j ) in P is the sum of all occurrences where grey level i and grey level j occur horizontally adjacent to each other within the image . Since the GLCM does not compare spatially matching pixel values but instead compares the spatial distribution of pixel values , it is a good measure of texture similarity to complement the pixel-by-pixel similarity PSNR metric . In general , multiple GLCMs with an encompassing set of spatial relationships are constructed to fully characterise the texture of an image . Aside from the default ( x , y ) to ( x + 1 , y ) relationship , the offset value can be generalised to an omnidirectional pixel distance d ∈ Z such that the spatial relationships are ( x , y ) to ( x + dx , y + dy ) . Some of the GLCMs used in this study are calculated in 8 directions with a 45 degree offset , to a distance of up to 10 pixels with a 4-bit precision ( 16 grey levels after quantisation ) . This results in an 8x10x16x16 GLCM tensor P , or 80 16x16 GLCMs , one for each spatial relationship setting . This raw transformation can then be analysed for a variety of statistical measures , or a pixelwise comparison can be carried out on GLCMs for different images to comparatively quantify texture similarity . If a generated SR image is texturally similar to the original HR image , then it can be expected that the corresponding GLCMs are closer to each other w.r.t . L1 or L2 distances . The L1-GLCM error is computed as : GLCM_Loss = ( ∑_ij | P^SR_ij − P^HR_ij | ) / ( ∑_ij P^HR_ij ) ( 1 )
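A minimal NumPy sketch of a single-offset GLCM and the resulting L1 texture error is given below. The quantisation to 16 grey levels follows the description above, while the assumed 8-bit input range and the normalisation used for Eq. (1) are assumptions; the study itself accumulates GLCMs over many offsets and directions rather than the single offset shown here.

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=16):
    # quantise an 8-bit greyscale image (values 0..255 assumed) to `levels` grey values,
    # then count co-occurrences of level i at (x, y) with level j at (x + dx, y + dy)
    q = np.clip((image.astype(np.float64) / 256.0 * levels).astype(int), 0, levels - 1)
    H, W = q.shape
    P = np.zeros((levels, levels), dtype=np.int64)
    for y in range(max(0, -dy), min(H, H - dy)):
        for x in range(max(0, -dx), min(W, W - dx)):
            P[q[y, x], q[y + dy, x + dx]] += 1
    return P

def glcm_l1_error(img_sr, img_hr, dx=1, dy=0, levels=16):
    # L1 difference between the SR and HR co-occurrence matrices, normalised by the
    # total HR co-occurrence mass (the exact normalisation of Eq. (1) is assumed here)
    p_sr = glcm(img_sr, dx, dy, levels).astype(np.float64)
    p_hr = glcm(img_hr, dx, dy, levels).astype(np.float64)
    return float(np.abs(p_sr - p_hr).sum() / p_hr.sum())

# toy usage on random 8-bit images; a real evaluation would average over all offsets
rng = np.random.default_rng(0)
hr = rng.integers(0, 256, size=(64, 64))
sr = np.clip(hr + rng.integers(-8, 9, size=hr.shape), 0, 255)
print(glcm_l1_error(sr, hr))
```

Because the error is differentiable in the GLCM entries but not in the hard quantisation, using it as an auxiliary training loss (as proposed here) would in practice require a soft or histogram-style relaxation of the counting step; the sketch above is only the evaluation-time computation.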
The paper considers the problem of generating a high-resolution image from a low-resolution one. The paper introduces the Grey Level Co-occurrence Matrix Method (GLCM) for evaluating the performance of super-resolution techniques and as an auxiliary loss function for training neural networks to perform well for super-resolution. The GLCM was originally introduced in a 1973 paper and has been used in a few papers in the computer vision community. The paper trains and validates a super-resolution GAN (SRGAN) and a super-resolution CNN (SRCNN) on the DeepRock-SR dataset. Specifically, for the SRGAN, the paper uses the EDSRGAN network trained on loss function particular to the paper: The loss function consists of the addition of the L1 pixel-wise loss plus the VGG19 perceptual objective plus the proposed GLCM loss.
SP:6c766bf18a0b552410d411248af30915e331c5f7
Pixel Co-Occurence Based Loss Metrics for Super Resolution Texture Recovery
Single Image Super Resolution ( SISR ) has significantly improved with Convolutional Neural Networks ( CNNs ) and Generative Adversarial Networks ( GANs ) , often achieving order of magnitude better pixelwise accuracies ( distortions ) and state-of-the-art perceptual accuracy . Due to the stochastic nature of GAN reconstruction and the ill-posed nature of the problem , perceptual accuracy tends to correlate inversely with pixelwise accuracy which is especially detrimental to SISR , where preservation of original content is an objective . GAN stochastics can be guided by intermediate loss functions such as the VGG featurewise loss , but these features are typically derived from biased pre-trained networks . Similarly , measurements of perceptual quality such as the human Mean Opinion Score ( MOS ) and no-reference measures have issues with pre-trained bias . The spatial relationships between pixel values can be measured without bias using the Grey Level Co-occurence Matrix ( GLCM ) , which was found to match the cardinality and comparative value of the MOS while reducing subjectivity and automating the analytical process . In this work , the GLCM is also directly used as a loss function to guide the generation of perceptually accurate images based on spatial collocation of pixel values . We compare GLCM based loss against scenarios where ( 1 ) no intermediate guiding loss function , and ( 2 ) the VGG feature function are used . Experimental validation is carried on X-ray images of rock samples , characterised by significant number of high frequency texture features . We find GLCM-based loss to result in images with higher pixelwise accuracy and better perceptual scores . 1 INTRODUCTION . A super resolution ( SR ) image is generated from a single low resolution image ( LR ) ( with or without variable blur and noise ) such that the result closely matches the true high resolution counterpart ( whether it exists or not ) ( Park et al. , 2003 ) . There thus exists a vast number of possible solutions ( Dong et al. , 2014 ) for any given LR image , and by an extension , there are many techniques to recover SR details with varying degrees of accuracy . These methods range from the simple and blurry interpolation methods ( bicubic , linear , etc . ) that can not recover contextual features to more complex model-based methods that utilise prior knowledge of certain domain characteristics to generate sharper details ( Dai et al. , 2009 ; Jian Sun et al. , 2008 ; Yan et al. , 2015 ) . The contextual dependence of SR images is addressed with Example-based approaches including Markov Random Field ( MRF ) ( Freeman et al. , 2002 ) , Neighbour Embedding ( Hong Chang et al. , 2004 ) , Sparse Coding ( Yang et al. , 2010 ) and Random Forest methods ( Schulter et al. , 2015 ) . These can be generalised ( Dong et al. , 2014 ) into Super Resolution Convolutional Neural Networks ( SRCNN ) for photographic images ( Dong et al. , 2014 ; Wang et al. , 2016 ; Lim et al. , 2017 ; Yu et al. , 2018 ; Kim et al. , 2015 ; Ledig et al. , 2017 ) , medical images ( Umehara et al. , 2018 ; You et al. , 2018 ) , and digital rock images ( Wang et al. , 2019b ; 2018 ; 2019a ) . The original SRCNN network utilised as a backbone 3-5 convolutional layers ( Dong et al. , 2014 ) that sharpened a bicubically upsampled image . More recent models contain dedicated upsampling layers ( Dong et al. , 2016 ; Shi et al. , 2016 ; Odena et al. , 2016 ) , skip connections to improve gradient scaling ( Kim et al. , 2015 ; Ledig et al. 
, 2017 ) , removal of batch normalisation ( Lim et al. , 2017 ) , and use of the sharper L1 loss function ( Yu et al. , 2018 ; Zhu et al. , 2017 ) . These changes have all contributed gradually to improving the pixelwise accuracy of the super resolution methods . Despite the high pixelwise accuracy achieved by SRCNN networks 1 , the resulting super resolved images are often perceptually unsatisfying , and easily identifiable by a human observer as `` blurry '' . It is due to the SRCNN can recover accurately features spanning several pixels such as larger scale edges , texture and high frequency features are lost as the network attempts to maximise the pixelwise accuracy over a wide range of possible HR counterparts . The local minima problem thus manifests itself clearly in the ill-posed problem of Super Resolution . Perceptual results can be improved by training the network with a hybrid loss function that combines a pixelwise loss ( L1 or L2 ) with a feature-wise loss that is the L2 loss of features extracted from some intermediate convolutional layer of a pre-trained model ( Johnson et al. , 2016 ) . The most effective method of perceptual texture generation thus far has been the use of Generative Adversarial Networks ( GANs ) ( Goodfellow et al. , 2014 ) , with SRGAN generated images from low resolution photographs score highly on human surveys of image quality ( Ledig et al. , 2017 ; Dosovitskiy & Brox , 2016 ) as they can recover high frequency textures that are perceived as realistic , both for photographic features ( Ledig et al. , 2017 ) and textural quality in X-ray images ( You et al. , 2018 ) . However , SRGAN generated images are stochastic in the way that high frequency features are regenerated , with a tendency to cause pixel mismatch , which is especially exacerbated by the use of the VGG feature loss ( You et al. , 2018 ; Ledig et al. , 2017 ; Sajjadi et al. , 2017 ) that further tends to result in further distortion , leading to higher pixelwise loss . For natural images this loss in pixelwise accuracy may be secondary , but plays an important role in applications where texture carries actual information , such as in X-ray images . This trade-off between the pixelwise accuracy ( distortion ) and the perceptual accuracy ( Sajjadi et al. , 2017 ; Mechrez et al. , 2018 ; Vasu et al. , 2018 ) , is a consistently emergent limitation in SRGAN performance , whereby a high pixelwise accuracy causes over-smoothing , while a high perceptual accuracy causes pixel mismatch and distortion in some features . While both pixelwise accuracy and perceptual accuracy are important , SISR aims to preserve as much content/characteristics from the original image , while GAN essentially `` makes up '' content that is perceptually satisfying at the expense of pixelwise measures . While pixel-wise distortions can be quantitatively measured , in order to evaluate perceptual performance one typically requires human subjective evaluations of Mean Opinion Scores ( MOS ) , or by leveraging the proxy score produced by `` pre-trained '' models ( which could introduce their own biases ) ( Ma et al. , 2016 ; Salimans et al. , 2016 ) . SRGANs typically obtain superior scores in terms of subjective metrics ( Ledig et al. , 2017 ; Zhu et al. , 2017 ; Vasu et al. , 2018 ) , however , an objective study of the differences in the high frequency SR textures compared to the original HR images has not yet been carried . 
This is especially of interest in imaging areas requiring expert judgements , such as in radiographic images . These are most often benchmarked with a combination of pixelwise metrics and opinion score surveys despite the smaller sample sizes and logistical challenge . The overall aim of this study is the introduction of Grey Level Co-occurence Matrix ( GLCM ) method ( Haralick et al. , 1973 ) as both an auxiliary loss function , and as an addition to the PSNR metric for evaluating perceptual and textural accuracy of super resolution techniques ( GLCM better correlates with the subjective scores of human evaluations ) . GLCM has been sucessfully used in the characterisation of texture in medical images ( Makaju et al. , 2018 ; Madero Orozco et al. , 2015 ; Sivaramakrishna et al. , 2002 ; Pratiwi et al. , 2015 ; Liao et al. , 2011 ; Yang et al. , 2012 ) and CT rock images ( Singh et al. , 2019 ; Becker et al. , 2016 ; Jardine et al. , 2018 ) . In essence , the GLCM transforms an image into a representation of the spatial relationship between the grey colours present in the image . The GLCM is not a pixelwise measurement , and so can be used to evaluate texture recovery that may not be a pixelwise match . The GLCM is particularly well suited for automatic perceptual/textural in-domain comparisons , as it does not require time-consuming expert MOS evaluations , and does not pose inherent data-bias when scoring with auxiliary models . We use the DeepRock-SR ( Wang et al. , 2019a ) dataset to train and validate the results of an SRGAN model , which in this study is a modified Enhanced Deep Super Resolution ( EDSR ) network ( Lim et al. , 2017 ) coupled to a GAN discriminator ( EDSRGAN ) . The resulting performance in generating SR images is quantitatively analysed , both based on the traditional pixelwise approach , as well as using the GLCM spatial texture accuracy method introduced in this work . GLCM texture analyses of the SRGAN and SRCNN results indicate quantitatively that SRGAN produces images with a GLCM error that are an order of magnitude lower ( more texturally similar ) than SRCNN images , that tend to 1Henceforth , unless stated otherwise , all generative convolutional networks ( and the corresponding generated images ) will be referred to as `` SRCNN '' have a higher PSNR , but also a higher GLCM error and lower MOS scores . Overall , the use of the GLCM offers fast and more agnostic evaluation metric when compared to carrying MOS evaluations , and is easier to reproduce and analyse due to its algorithmic data-driven nature . The GLCM can be also used as an auxiliary loss function to guide the generation of spatially accurate texture , resulting in further reductions in the texture error while also improving the pixelwise accuracy . 2 METHODS . 2.1 GREY LEVEL CO-OCCURRENCE MATRIX FOR TEXTURE ANALYSIS . The Grey Level Co-occurrence Matrix characterises the texture of an image by calculating the number of pairs of pixels that could be spatially related within some pre-defined regions of interest . For a given image with N grey levels , an N by N co-occurrence matrix P is constructed , in which the ( i , j ) position denotes the number of times a grey level i is spatially related to a corresponding grey level j . An example of spatial relationship setting in an image of size ( Nx , Ny ) could be all locations of ( x , y ) and ( x+ 1 , y ) ( i.e . their horizontal adjacency ) . 
In this case , the value at location ( i , j ) in P is the sum of all occurrences where grey level i and grey level j occur horizontally adjacent to each other within the image . Since the GLCM does not compare spatially matching pixel values but instead compares the spatial distribution of pixel values , it is a good measure of texture similarity to complement the pixel-by-pixel similarity PSNR metric . In general , multiple GLCMs with an encompassing set of spatial relationships are constructed to fully characterise the texture of an image . Aside from the default ( x , y ) to ( x+ 1 , y ) relationship , the offset value can be generalised to an omnidirectional pixel distance d ∈ Z such that the spatial relationships are ( x , y ) to ( x+ dx , y+ dy ) . Some of the GLCMs used in this study are calculated in 8 directions with a 45 degree offset , to a distance of up to 10 pixels with a 4-bit precision ( 16 grey levels after quantisation ) . This results in an 8x10x16x16 GLCM tensor P , or 80 16x16 GLCMs , one for each spatial relationship setting . This raw transformation can then be analysed for a variety of statistical measures , or a pixelwise comparison can be carried out on the GLCMs of different images to comparatively quantify texture similarity . If a generated SR image is texturally similar to the original HR image , then it can be expected that the corresponding GLCMs are closer to each other w.r.t . L1 or L2 distances . The L1-GLCM error is computed as :

GLCM_{Loss} = \frac{ \sum_{ij} \left| P^{SR}_{ij} - P^{HR}_{ij} \right| }{ \sum_{ij} P^{HR}_{ij} } \qquad ( 1 )
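To make the construction above concrete, the following is a minimal NumPy sketch of a single-offset GLCM and of the L1-GLCM error in Equation 1. The function names (`glcm`, `l1_glcm_error`), the default 16-level quantisation, and the normalisation by the total HR counts are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=16):
    """Single-offset grey level co-occurrence matrix.

    `img` is a 2-D array with intensities in [0, 1]; it is quantised to
    `levels` grey levels and pairs (x, y) -> (x + dx, y + dy) are counted.
    """
    q = np.clip((img * levels).astype(int), 0, levels - 1)
    h, w = q.shape
    # Crop so that both the reference pixel and its offset neighbour stay in bounds.
    ref = q[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    nbr = q[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
    p = np.zeros((levels, levels), dtype=np.int64)
    np.add.at(p, (ref.ravel(), nbr.ravel()), 1)
    return p

def l1_glcm_error(sr, hr, offsets=((1, 0), (0, 1), (1, 1), (-1, 1)), levels=16):
    """L1 difference between SR and HR co-occurrence matrices, averaged over offsets.

    The normalisation by the total HR counts is an assumption about Eq. 1.
    """
    errors = []
    for dx, dy in offsets:
        p_sr = glcm(sr, dx, dy, levels)
        p_hr = glcm(hr, dx, dy, levels)
        errors.append(np.abs(p_sr - p_hr).sum() / p_hr.sum())
    return float(np.mean(errors))
```

In a full reproduction of the paper's setting, one GLCM would be built for each of the 80 direction/distance combinations; the four offsets above are only a placeholder for that set.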
This paper adopts a loss metric called the Grey Level Co-occurrence Matrix (GLCM) as a new measurement of perceptual quality for single image super-resolution. The GLCM is particularly well suited for automatic perceptual/textural in-domain comparisons, as it does not require time-consuming expert MOS evaluations. Experimental validation is carried out on X-ray images of rock samples and promising results are achieved.
SP:6c766bf18a0b552410d411248af30915e331c5f7
The Early Phase of Neural Network Training
1 INTRODUCTION Over the past decade , methods for successfully training big , deep neural networks have revolutionized machine learning . Yet surprisingly , the underlying reasons for the success of these approaches remain poorly understood , despite remarkable empirical performance ( Santurkar et al. , 2018 ; Zhang et al. , 2017 ) . A large body of work has focused on understanding what happens during the later stages of training ( Neyshabur et al. , 2019 ; Yaida , 2019 ; Chaudhuri & Soatto , 2017 ; Wei & Schwab , 2019 ) , while the initial phase has been less explored . However , a number of distinct observations indicate that significant and consequential changes are occurring during the earliest stage of training . These include the presence of critical periods during training ( Achille et al. , 2019 ) , the dramatic reshaping of the local loss landscape ( Sagun et al. , 2017 ; Gur-Ari et al. , 2018 ) , and the necessity of rewinding in the context of the lottery ticket hypothesis ( Frankle et al. , 2019 ) . Here we perform a thorough investigation of the state of the network in this early stage . To provide a unified framework for understanding the changes the network undergoes during the early phase , we employ the methodology of iterative magnitude pruning with rewinding ( IMP ) , as detailed below , throughout the bulk of this work ( Frankle & Carbin , 2019 ; Frankle et al. , 2019 ) . The initial lottery ticket hypothesis , which was validated on comparatively small networks , proposed that small , sparse sub-networks found via pruning of converged larger models could be trained to high performance provided they were initialized with the same values used in the training of the unpruned model ( Frankle & Carbin , 2019 ) . However , follow-up work found that rewinding the weights to their values at some iteration early in the training of the unpruned model , rather than to their initial values , was necessary to achieve good performance on deeper networks such as ResNets ( Frankle et al. , 2019 ) . This observation suggests that the changes in the network during this initial phase are vital for the success of the training of small , sparse sub-networks . As a result , this paradigm provides a simple and quantitative scheme for measuring the importance of the weights at various points early in training within an actionable and causal framework . †Work done while an intern at Facebook AI Research . We make the following contributions , all evaluated across three different network architectures : 1 . We provide an in-depth overview of various statistics summarizing learning over the early part of training . 2 . We evaluate the impact of perturbing the state of the network in various ways during the early phase of training , finding that : ( i ) counter to observations in smaller networks ( Zhou et al. , 2019 ) , deeper networks are not robust to reinitialization with random weights but maintained signs ( ii ) the distribution of weights after the early phase of training is already highly non-i.i.d. , as permuting them dramatically harms performance , even when signs are maintained ( iii ) both of the above perturbations can roughly be approximated by simply adding noise to the network weights , though this effect is stronger for ( ii ) than ( i ) 3 .
We measure the data-dependence of the early phase of training , finding that pre-training using only p ( x ) can approximate the changes that occur in the early phase of training , though pre-training must last for far longer ( ∼32× longer ) and not be fed misleading labels . 2 KNOWN PHENOMENA IN THE EARLY PHASE OF TRAINING Lottery ticket rewinding : The original lottery ticket paper ( Frankle & Carbin , 2019 ) rewound weights to initialization , i.e. , k = 0 , during IMP . Follow up work on larger models demonstrated that it is necessary to rewind to a later point during training for IMP to succeed , i.e. , k < < T , where T is total training iterations ( Frankle et al. , 2019 ) . Notably , the benefit of rewinding to a later point in training saturates quickly , roughly between 500 and 2000 iterations for ResNet-20 on CIFAR-10 ( Figure 1 ) . This timescale is strikingly similar to the changes in the Hessian described below . Hessian eigenspectrum : The shape of the loss landscape around the network state also appears to change rapidly during the early phase of training ( Sagun et al. , 2017 ; Gur-Ari et al. , 2018 ) . At initialization , the Hessian of the loss contains a number of large positive and negative eigenvalues . However , very rapidly the curvature is reshaped in a few marked ways : a few large eigenvalues emerge , the bulk eigenvalues are close to zero , and the negative eigenvalues become very small . Moreover , once the Hessian spectrum has reshaped , gradient descent appears to occur largely within the top subspace of the Hessian ( Gur-Ari et al. , 2018 ) . These results have been largely confirmed in large scale studies ( Ghorbani et al. , 2019 ) , but note they depend to some extent on architecture and ( absence of ) batch normalization ( Ioffe & Szegedy , 2015 ) . A notable exception to this consistency is the presence of substantial L1 energy of negative eigenvalues for models trained on ImageNet . Critical periods in deep learning : Achille et al . ( 2019 ) found that perturbing the training process by providing corrupted data early on in training can result in irrevocable damage to the final performance of the network . Note that the timescales over which the authors find a critical period extend well beyond those we study here . However , architecture , learning rate schedule , and regularization all modify the timing of the critical period , and follow-up work found that critical periods were also present for regularization , in particular weight decay and data augmentation ( Golatkar et al. , 2019 ) . 3 PRELIMINARIES AND METHODOLOGY Networks : Throughout this paper , we study five standard convolutional neural networks for CIFAR-10 . These include the ResNet-20 and ResNet-56 architectures designed for CIFAR-10 ( He et al. , 2015 ) , the ResNet-18 architecture designed for ImageNet but commonly used on CIFAR-10 ( He et al. , 2015 ) , the WRN-16-8 wide residual network ( Zagoruyko & Komodakis , 2016 ) , and the VGG-13 network ( Simonyan & Zisserman ( 2015 ) as adapted by Liu et al . ( 2019 ) ) . Throughout the main body of the paper , we show ResNet-20 ; in Appendix B , we present the same experiments for the other networks . Unless otherwise stated , results were qualitatively similar across all three networks . All experiments in this paper display the mean and standard deviation across five replicates with different random seeds . See Appendix A for further model details . 
Iterative magnitude pruning with rewinding : In order to test the effect of various hypotheses about the state of sparse networks early in training , we use the Iterative Magnitude Pruning with rewinding ( IMP ) procedure of Frankle et al . ( 2019 ) to extract sub-networks from various points in training that could have learned on their own . The procedure involves training a network to completion , pruning the 20 % of weights with the lowest magnitudes globally throughout the network , and rewinding the remaining weights to their values from an earlier iteration k during the initial , pre-pruning training run . This process is iterated to produce networks with high sparsity levels . As demonstrated in Frankle et al . ( 2019 ) , IMP with rewinding leads to sparse sub-networks which can train to high performance even at high sparsity levels > 90 % . Figure 1 shows the results of the IMP with rewinding procedure , showing the accuracy of ResNet20 at increasing sparsity when performing this procedure for several rewinding values of k. For k ≥ 500 , sub-networks can match the performance of the original network with 16.8 % of weights remaining . For k > 2000 , essentially no further improvement is observed ( not shown ) . 4 THE STATE OF THE NETWORK EARLY IN TRAINING Many of the aforementioned papers refer to various points in the “ early ” part of training . In this section , we descriptively chart the state of ResNet-20 during the earliest phase of training to provide context for this related work and our subsequent experiments . We specifically focus on the first 4,000 iterations ( 10 epochs ) . See Figure A3 for the characterization of additional networks . We include a summary of these results for ResNet-20 as a timeline in Figure 2 , and include a broader timeline including results from several previous papers for ResNet-18 in Figure A1 . As shown in Figure 3 , during the earliest ten iterations , the network undergoes substantial change . It experiences large gradients that correspond to a rapid increase in distance from the initialization and a large number of sign changes of the weights . After these initial iterations , gradient magnitudes drop and the rate of change in each of the aforementioned quantities gradually slows through the remainder of the period we observe . Interestingly , gradient magnitudes reach a minimum after the first 200 iterations and subsequently increase to a stable level by iteration 500 . Evaluation accuracy , improves rapidly , reaching 55 % by the end of the first epoch ( 400 iterations ) , more than halfway to the final 91.5 % . By 2000 iterations , accuracy approaches 80 % . During the first 4000 iterations of training , we observe three sub-phases . In the first phase , lasting only the initial few iterations , gradient magnitudes are very large and , consequently , the network changes rapidly . In the second phase , lasting about 500 iterations , performance quickly improves , weight magnitudes quickly increase , sign differences from initialization quickly increase , and gradient magnitudes reach a minimum before settling at a stable level . Finally , in the third phase , all of these quantities continue to change in the same direction , but begin to decelerate . 5 PERTURBING NEURAL NETWORKS EARLY IN TRAINING Figure 1 shows that the changes in the network weights over the first 500 iterations of training are essential to enable high performance at high sparsity levels . 
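As a rough sketch of the IMP-with-rewinding loop described above, the pseudocode below prunes 20 % of the surviving weights by global magnitude each round and rewinds the remainder to their values at iteration k. The `train_fn` helper, the dictionary-of-tensors weight format, and the PyTorch-style parameter access are assumptions for illustration, not the authors' code.

```python
import copy
import torch

def imp_with_rewinding(model, train_fn, k, rounds=20, prune_frac=0.2):
    """Iterative magnitude pruning with rewinding (sketch).

    `train_fn(model, mask, start_iter)` is assumed to train the masked model to
    completion and return (weights at iteration k, final weights) as name->tensor dicts.
    """
    mask = {n: torch.ones_like(p) for n, p in model.named_parameters()}
    w_k, w_final = train_fn(copy.deepcopy(model), mask, start_iter=0)  # initial dense run
    for _ in range(rounds):
        # Rank the remaining weights by final magnitude, globally across the network,
        # and prune the lowest prune_frac of them.
        scores = torch.cat([(w_final[n].abs() * mask[n]).flatten() for n in mask])
        remaining = scores[scores > 0]
        threshold = torch.kthvalue(remaining, max(1, int(prune_frac * remaining.numel()))).values
        for n in mask:
            mask[n] = mask[n] * (w_final[n].abs() > threshold).float()
        # Rewind the surviving weights to their values at iteration k and retrain.
        for n, p in model.named_parameters():
            p.data = w_k[n] * mask[n]
        _, w_final = train_fn(copy.deepcopy(model), mask, start_iter=k)
    return model, mask
```

Each pass removes 20 % of what remains, so the sparsity levels reported in the figures (51.2 %, 26.2 %, 13.4 %, ... of weights remaining) correspond to powers of 0.8 accumulated over successive rounds.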
What features of this weight transformation are necessary to recover increased performance ? Can they be summarized by maintaining the weight signs , but discarding their magnitudes as implied by Zhou et al . ( 2019 ) ? Can they be represented distributionally ? In this section , we evaluate these questions by perturbing the early state of the network in various ways . Concretely , we either add noise or shuffle the weights of IMP sub-networks of ResNet-20 across different network sub-components and examine the effect on the network ’ s ability to learn thereafter . The sub-networks derived by IMP with rewinding make it possible to understand the causal impact of perturbations on sub-networks that are as capable as the full networks but more visibly decline in performance when improperly configured . To enable comparisons between the experiments in Section 5 and provide a common frame of reference , we measure the effective standard deviation of each perturbation , i.e . stddev ( w_perturb − w_orig ) . 5.1 ARE SIGNS ALL YOU NEED ? Zhou et al . ( 2019 ) show that , for a set of small convolutional networks , signs alone are sufficient to capture the state of lottery ticket sub-networks . However , it is unclear whether signs are still sufficient for larger networks early in training . In Figure 4 , we investigate the impact of combining the magnitudes of the weights from one time-point with the signs from another . We found that the signs at iteration 500 paired with the magnitudes from initialization ( red line ) or from a separate random initialization ( green line ) were insufficient to maintain the performance reached by using both signs and magnitudes from iteration 500 ( orange line ) , and performance drops to that of using both magnitudes and signs from initialization ( blue line ) . However , when using the magnitudes from iteration 500 and the signs from initialization , performance is still substantially better than with the initialization signs and magnitudes . In addition , the overall perturbation to the network by using the magnitudes at iteration 500 and signs from initialization ( mean : 0.0 , stddev : 0.033 ) is smaller than by using the signs at iteration 500 and the magnitudes from initialization ( 0.0 ± 0.042 , mean ± std ) . These results suggest that the change in weight magnitudes over the first 500 iterations of training is substantially more important than the change in the signs for enabling subsequent training . By iteration 2000 , however , pairing the iteration 2000 signs with magnitudes from initialization ( red line ) reaches similar performance to using the signs from initialization and the magnitudes from iteration 2000 ( purple line ) , though not as high performance as using both from iteration 2000 . This result suggests that network signs undergo important changes between iterations 500 and 2000 , as only 9 % of signs change during this period . Our results also suggest that , counter to the observations of Zhou et al . ( 2019 ) in shallow networks , signs are not sufficient in deeper networks . 5.2 ARE WEIGHT DISTRIBUTIONS I.I.D. ? Can the changes in weights over the first k iterations be approximated distributionally ? To measure this , we permuted the weights at iteration k within various structural sub-components of the network ( globally , within layers , and within convolutional filters ) . If networks are robust to these permutations , it would suggest that the weights in such sub-components might be approximated and sampled from .
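The perturbations just described are simple to state in code. The sketch below gives the sign/magnitude recombination of Section 5.1 and the within-layer / within-filter permutations of Section 5.2 (optionally restricted to weights of the same sign, as in the follow-up experiment); the weight-dictionary format and function names are illustrative assumptions.

```python
import torch

def mix_signs_and_mags(w_signs, w_mags):
    """Signs from one checkpoint combined with magnitudes from another (Section 5.1)."""
    return {n: torch.sign(w_signs[n]) * torch.abs(w_mags[n]) for n in w_signs}

def shuffle_within(weights, scope="layer", same_sign=False):
    """Permute weights within layers or within convolutional filters (Section 5.2)."""
    out = {}
    for name, t in weights.items():
        groups = t.unsqueeze(0) if scope == "layer" else t  # filters = leading dim of conv weights
        flat = groups.reshape(groups.shape[0], -1).clone()
        for g in range(flat.shape[0]):
            row = flat[g]
            if same_sign:
                # Shuffle positive and negative weights separately so every sign is preserved.
                for sign_mask in (row > 0, row < 0):
                    idx = sign_mask.nonzero(as_tuple=True)[0]
                    row[idx] = row[idx[torch.randperm(idx.numel())]]
            else:
                row.copy_(row[torch.randperm(row.numel())])
        out[name] = flat.reshape(groups.shape).reshape(t.shape)
    return out

def effective_noise_std(w_perturbed, w_orig):
    """stddev(w_perturb - w_orig), the common frame of reference used throughout Section 5."""
    diffs = torch.cat([(w_perturbed[n] - w_orig[n]).flatten() for n in w_orig])
    return diffs.std().item()
```

Measuring `effective_noise_std` for each perturbation is what enables the comparisons summarised later in Figure 8.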
As Figure 5 shows , however , we found that performance was not robust to shuffling weights globally ( green line ) or within layers ( red line ) , and drops substantially to no better than that of the original initialization ( blue line ) at both 500 and 2000 iterations.1 Shuffling within filters ( purple line ) performs slightly better , but results in a smaller overall perturbation ( 0.0 ± 0.092 for k = 500 ) than shuffling layerwise ( 0.0±0.143 ) or globally ( 0.0±0.144 ) , suggesting that this change in perturbation strength may simply account for the difference . 1We also considered shuffling within the incoming and outgoing weights for each neuron , but performance was equivalent to shuffling within layers . We elided these lines for readability . Are the signs from the rewinding iteration , k , sufficient to recover the damage caused by permutation ? In Figure 6 , we also consider shuffling only amongst weights that have the same sign . Doing so substantially improves the performance of the filter-wise shuffle ; however , it also reduces the extent of the overall perturbation ( 0.0± 0.049 for k = 500 ) . It also improves the performance of shuffling within layers slightly for k = 500 and substantially for k = 2000 . We attribute the behavior for k = 2000 to the signs just as in Figure 4 : when the magnitudes are similar in value ( Figure 4 red line ) or distribution ( Figure 6 red and green lines ) , using the signs improves performance . Reverting back to the initial signs while shuffling magnitudes within layers ( brown line ) , however , damages the network too severely ( 0.0 ± 0.087 for k = 500 ) to yield any performance improvement over random noise . These results suggest that , while the signs from initialization are not sufficient for high performance at high sparsity as shown in Section 5.1 , the signs from the rewinding iteration are sufficient to recover the damage caused by permutation , at least to some extent . 5.3 IS IT ALL JUST NOISE ? Some of our previous results suggested that the impact of signs and permutations may simply reduce to adding noise to the weights . To evaluate this hypothesis , we next study the effect of simply adding Gaussian noise to the network weights at iteration k. To add noise appropriately for layers with different scales , the standard deviation of the noise added for each layer was normalized to a multiple of the standard deviation σ of the initialization distribution for that layer . In Figure 7 , we see that for iteration k = 500 , sub-networks can tolerate 0.5σ to 1σ of noise before performance degrades back to that of the original initialization at higher levels of noise . For iteration k = 2000 , networks are surprisingly robust to noise up to 1σ , and even 2σ exhibits nontrivial performance . In Figure 8 , we plot the performance of each network at a fixed sparsity level as a function of the effective standard deviation of the noise imposed by each of the aforementioned perturbations . We find that the standard deviation of the effective noise explained fairly well the resultant performance ( k = 500 : r = −0.672 , p = 0.008 ; k = 2000 : r = −0.726 , p = 0.003 ) . As expected , perturbations that preserved the performance of the network generally resulted in smaller changes to the state of the network at iteration k. 
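For completeness, the Gaussian-noise perturbation of Section 5.3, with the noise standard deviation normalised per layer to a multiple of the initialization σ, can be sketched as follows; the `init_std` mapping from parameter names to each layer's initialization standard deviation is an assumed input rather than a detail taken from the paper.

```python
import torch

def add_scaled_gaussian_noise(model, init_std, scale=1.0):
    """Add per-layer Gaussian noise with standard deviation scale * sigma_layer (Section 5.3 sketch)."""
    with torch.no_grad():
        for name, p in model.named_parameters():
            p.add_(scale * init_std[name] * torch.randn_like(p))
    return model
```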
Interestingly , experiments that mixed signs and magnitudes from different points in training ( green points ) aligned least well with this pattern : the standard deviation of the perturbation is roughly similar among all of these experiments , but the accuracy of the resulting networks changes substantially . This result suggests that although the standard deviation of the noise is certainly indicative of lower accuracy , there are still specific perturbations that , while small in overall magnitude , can have a large effect on the network ’ s ability to learn , suggesting that the observed perturbation effects are not , in fact , just a consequence of noise . 6 THE DATA-DEPENDENCE OF NEURAL NETWORKS EARLY IN TRAINING Section 5 suggests that the change in network behavior by iteration k is not due to easily-ascertainable , distributional properties of the network weights and signs . Rather , it appears that training is required to reach these network states . It is unclear , however , to what extent various aspects of the data distribution are necessary . Mainly , is the change in weights during the early phase of training dependent on p ( x ) or p ( y|x ) ? Here , we attempt to answer this question by measuring the extent to which we can re-create a favorable network state for sub-network training using restricted information from the training data and labels . In particular , we consider pre-training the network with techniques that ignore labels entirely ( self-supervised rotation prediction , Section 6.2 ) , provide misleading labels ( training with random labels , Section 6.1 ) , or eliminate information from examples ( blurring training examples , Section 6.3 ) . We first train a randomly-initialized , unpruned network on CIFAR-10 on the pre-training task for a set number of epochs . After pre-training , we train the network normally as if the pre-trained state were the original initialization . We then use the state of the network at the end of the pre-training phase as the “ initialization ” to find masks for IMP . Finally , we examine the performance of the IMP-pruned sub-networks as initialized using the state after pre-training . This experiment determines the extent to which pre-training places the network in a state suitable for sub-network training as compared to using the state of the network at iteration k of training on the original task . 6.1 RANDOM LABELS To evaluate whether this phase of training is dependent on underlying structure in the data , we drew inspiration from Zhang et al . ( 2017 ) and pre-trained networks on data with randomized labels . This experiment tests whether the input distribution of the training data is sufficient to put the network in a position from which IMP with rewinding can find a sparse , trainable sub-network despite the presence of incorrect ( not just missing ) labels . Figure 9 ( upper left ) shows that pre-training on random labels for up to 10 epochs provides no improvement above rewinding to iteration 0 and that pre-training for longer begins to hurt accuracy . This result suggests that , though it is still possible that labels may not be required for learning , the presence of incorrect labels is sufficient to prevent learning which approximates the early phase of training . 6.2 SELF-SUPERVISED ROTATION PREDICTION What if we remove labels entirely ? Is p ( x ) sufficient to approximate the early phase of training ?
Historically , neural network training often involved two steps : a self-supervised pre-training phase followed by a supervised phase on the target task ( Erhan et al. , 2010 ) . Here , we consider one such self-supervised technique : rotation prediction ( Gidaris et al. , 2018 ) . During the pre-training phase , the network is presented with a training image that has randomly been rotated 90n degrees ( where n ∈ { 0 , 1 , 2 , 3 } ) . The network must classify examples by the value of n. If self-supervised pretraining can approximate the early phase of training , it would suggest that p ( x ) is sufficient on its own . Indeed , as shown in Figure 9 ( upper right ) , this pre-training regime leads to well-trainable subnetworks , though networks must be trained for many more epochs compared to supervised training ( 40 compared to 1.25 , or a factor of 32× ) . This result suggests that the labels for the ultimate task themselves are not necessary to put the network in such a state ( although explicitly misleading labels are detrimental ) . We emphasize that the duration of the pre-training phase required is an order of magnitude larger than the original rewinding iteration , however , suggesting that labels add important information which accelerates the learning process . 6.3 BLURRING TRAINING EXAMPLES To probe the importance of p ( x ) for the early phase of training , we study the extent to which the training input distribution is necessary . Namely , we pretrain using blurred training inputs with the correct labels . Following Achille et al . ( 2019 ) , we blur training inputs by downsampling by 4x and then upsampling back to the full size . Figure 9 ( bottom left ) shows that this pre-training method succeeds : after 40 epochs of pre-training , IMP with rewinding can find sub-networks that are similar in performance to those found after training on the original task for 500 iterations ( 1.25 epochs ) . Due to the success of the the rotation and blurring pre-training tasks , we explored the effect of combining these pre-training techniques . Doing so tests the extent to which we can discard both the training labels and some information from the training inputs . Figure 9 ( bottom right ) shows that doing so provides the network too little information : no amount of pre-training we considered makes it possible for IMP with rewinding to find sub-networks that perform tangibly better than rewinding to iteration 0 . Interestingly however , as shown in Appendix B , trainable sub-networks are found for VGG-13 with this pre-training regime , suggesting that different network architectures have different sensitivities to the deprivation of labels and input content . 6.4 SPARSE PRETRAINING Since sparse sub-networks are often challenging to train from scratch without the proper initialization ( Han et al. , 2015 ; Liu et al. , 2019 ; Frankle & Carbin , 2019 ) , does pre-training make it easier for sparse neural networks to learn ? Doing so would serve as a rough form of curriculum learning ( Bengio et al. , 2009 ) for sparse neural networks . We experimented with training sparse sub-networks of ResNet-20 ( IMP sub-networks , randomly reinitialized sub-networks , and randomly pruned subnetworks ) first on self-supervised rotation and then on the main task , but found no benefit beyond rewinding to iteration 0 ( Figure 10 ) . Moreover , doing so when starting from a sub-network rewound to iteration 500 actually hurts final accuracy . 
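For reference, the two pre-training transformations used in this section can be sketched in a few lines; the batch layout (NCHW tensors) and the bilinear interpolation mode are assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def rotation_batch(images):
    """Self-supervised rotation prediction: rotate each image by 90*n degrees and
    use n in {0, 1, 2, 3} as the 4-way classification label (Gidaris et al., 2018)."""
    n = torch.randint(0, 4, (images.shape[0],))
    rotated = torch.stack([torch.rot90(img, int(k), dims=(-2, -1))
                           for img, k in zip(images, n)])
    return rotated, n

def blur_batch(images, factor=4):
    """Blur inputs by downsampling by `factor` and upsampling back to full size (Section 6.3)."""
    _, _, h, w = images.shape
    small = F.interpolate(images, size=(h // factor, w // factor),
                          mode="bilinear", align_corners=False)
    return F.interpolate(small, size=(h, w), mode="bilinear", align_corners=False)
```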
This result suggests that while pre-training is sufficient to approximate the early phase of supervised training with an appropriately structured mask , it is not sufficient to do so with an inappropriate mask . 7 DISCUSSION In this paper , we first performed extensive measurements of various statistics summarizing learning over the early part of training . Notably , we uncovered 3 sub-phases : in the very first iterations , gradient magnitudes are anomalously large and motion is rapid . Subsequently , gradients overshoot to smaller magnitudes before leveling off while performance increases rapidly . Then , learning slowly begins to decelerate . We then studied a suite of perturbations to the network state in the early phase finding that , counter to observations in smaller networks ( Zhou et al. , 2019 ) , deeper networks are not robust to reinitializing with random weights with maintained signs . We also found that the weight distribution after the early phase of training is highly non-independent . Finally , we measured the data-dependence of the early phase with the surprising result that pre-training on a self-supervised task yields equivalent performance to late rewinding with IMP . These results have significant implications for the lottery ticket hypothesis . The seeming necessity of late rewinding calls into question certain interpretations of lottery tickets as well as the ability to identify sub-networks at initialization . Our observation that weights are highly non-independent at the rewinding point suggests that the weights at this point can not be easily approximated , making approaches which attempt to “ jump ” directly to the rewinding point unlikely to succeed . However , our result that labels are not necessary to approximate the rewinding point suggests that the learning during this phase does not require task-specific information , suggesting that rewinding may not be necessary if networks are pre-trained appropriately . REFERENCES Alessandro Achille , Matteo Rovere , and Stefano Soatto . Critical learning periods in deep neural networks . 2019 . Yoshua Bengio , Jérôme Louradour , Ronan Collobert , and Jason Weston . Curriculum learning . In Proceedings of the 26th annual international conference on machine learning , pp . 41–48 . ACM , 2009 . Pratik Chaudhuri and Stefano Soatto . Stochastic gradient descent performs variational inference , converges to limit cycles for deep networks . 2017 . URL https : //arxiv.org/abs/1710 . 11029 . Dumitru Erhan , Yoshua Bengio , Aaron Courville , Pierre-Antoine Manzagol , Pascal Vincent , and Samy Bengio . Why does unsupervised pre-training help deep learning ? Journal of Machine Learning Research , 11 ( Feb ) :625–660 , 2010 . Jonathan Frankle and Michael Carbin . The lottery ticket hypothesis : Finding sparse , trainable neural networks . In International Conference on Learning Representations , 2019 . URL https : // openreview.net/forum ? id=rJl-b3RcF7 . Jonathan Frankle , Gintare Karolina Dziugaite , Daniel M Roy , and Michael Carbin . Stabilizing the lottery ticket hypothesis . arXiv preprint arXiv:1903.01611 , 2019 . Behrooz Ghorbani , Shankar Krishnan , and Ying Xiao . An investigation into neural net optimization via hessian eigenvalue density . In Proceedings of the 36th International Conference on Machine Learning , volume 97 , pp . 2232–2241 , 2019 . URL http : //proceedings.mlr.press/ v97/ghorbani19b.html . Spyros Gidaris , Praveer Singh , and Nikos Komodakis . 
Unsupervised representation learning by predicting image rotations . In International Conference on Learning Representations , 2018 . URL https : //openreview.net/forum ? id=S1v4N2l0- . Aditya Golatkar , Alessandro Achille , and Stefano Soatto . Time matters in regularizing deep networks : Weight decay and data augmentation affect early learning dynamics , matter little near convergence . 2019 . Guy Gur-Ari , Daniel A Roberts , and Ethan Dyer . Gradient descent happens in a tiny subspace . arXiv preprint arXiv:1812.04754 , 2018 . Song Han , Jeff Pool , John Tran , and William Dally . Learning both weights and connections for efficient neural network . In Advances in neural information processing systems , pp . 1135–1143 , 2015 . K He , X Zhang , S Ren , and J Sun . Deep residual learning for image recognition . In Computer Vision and Pattern Recogntion ( CVPR ) , volume 5 , pp . 6 , 2015 . Sergey Ioffe and Christian Szegedy . Batch normalization : Accelerating deep network training by reducing internal covariate shift . In Proceedings of the 32Nd International Conference on International Conference on Machine Learning - Volume 37 , ICML ’ 15 , pp . 448–456 . JMLR.org , 2015 . URL http : //dl.acm.org/citation.cfm ? id=3045118.3045167 . Zhuang Liu , Mingjie Sun , Tinghui Zhou , Gao Huang , and Trevor Darrell . Rethinking the value of network pruning . In International Conference on Learning Representations , 2019 . URL https : //openreview.net/forum ? id=rJlnB3C5Ym . Behnam Neyshabur , Zhiyuan Li , Srinadh Bhojanapalli , Yann LeCun , and Nathan Srebro . Towards understanding the role of over-parametrization in generalization of neural networks . 2019 . Levent Sagun , Utku Evci , Ugur Guney , Yann Dauphin , and Leon Bottou . Empirical analysis of the hessian of over-parametrized neural networks . 2017 . URL https : //arxiv.org/abs/ 1706.04454 . Shibani Santurkar , Dimitris Tsipras , Andrew Ilyas , and Aleksander Madry . How does batch normalization help optimization ? In Advances in neural information processing systems , 2018 . Karen Simonyan and Andrew Zisserman . Very deep convolutional networks for large-scale image recognition . 2015 . Mingwei Wei and David Schwab . How noise during training affects the hessian spectrum in overparameterized neural networks . 2019 . Sho Yaida . Fluctuation-dissipation relations for stochastic gradient descent . 2019 . Sergey Zagoruyko and Nikos Komodakis . Wide residual networks . arXiv preprint arXiv:1605.07146 , 2016 . Chiyuan Zhang , Samy Bengio , Moritz Hardt , Benjamin Recht , and Oriol Vinyals . Understanding deep learning requires rethinking generalization . 2017 . Hattie Zhou , Janice Lan , Rosanne Liu , and Jason Yosinski . Deconstructing lottery tickets : Zeros , signs , and the supermask . 2019 . A MODEL DETAILS Network Epochs Batch Size Learning Rate Parameters Eval Accuracy ResNet-20 160 128 0.1 ( Mom 0.9 ) 272K 91.5 ± 0.2 % ResNet-56 160 128 0.1 ( Mom 0.9 ) 856K 93.0 ± 0.1 % ResNet-18 160 128 0.1 ( Mom 0.9 ) 11.2M 86.8 ± 0.3 % VGG-13 160 64 0.1 ( Mom 0.9 ) 9.42M 93.5 ± 0.1 % WRN-16-8 160 128 0.1 ( Mom 0.9 ) 11.1M 94.8 ± 0.1 % Table A1 : Summary of the networks we study in this paper . We present ResNet-20 in the main body of the paper and the remaining networks in Appendix B . Table A1 summarizes the networks . All networks follow the same training regime : we train with SGD for 160 epochs starting at learning rate 0.1 ( momentum 0.9 ) and drop the learning rate by a factor of ten at epoch 80 and again at epoch 120 . 
Training includes weight decay with weight 1e-4 . Data is augmented with normalization , random flips , and random crops up to four pixels in any direction . B EXPERIMENTS FOR OTHER NETWORKS
Figure A1 : Rough timeline of the early phase of training for ResNet-18 on CIFAR-10 , including results from previous papers . The timeline spans training iterations 1-10 , 10-500 , 500-2000 , and 2000+ , annotating gradient magnitudes , motion in weight space , sign changes , performance , the point at which rewinding becomes effective , the separation of the Hessian eigenspectrum ( ~700 iterations ; Gur-Ari et al. , 2018 ) , and the end of critical periods ( ~8000 iterations ; Achille et al. , 2019 ; Golatkar et al. , 2019 ) .
Figure A2 : The effect of IMP rewinding iteration on the accuracy of sub-networks at various levels of sparsity , plotted as eval accuracy against percent of weights remaining for VGG-13 , ResNet-56 , ResNet-18 , and WRN-16-8 on CIFAR-10 . Accompanies Figure 1 .
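The training regime in Table A1 corresponds to a standard SGD configuration; a minimal sketch against the PyTorch optimizer API is given below (the 1e-4 weight decay reflects the assumption that the minus sign was dropped in extraction).

```python
import torch

def make_optimizer(model):
    """SGD with momentum 0.9 and weight decay 1e-4; lr 0.1, dropped 10x at epochs 80 and 120."""
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-4)
    scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[80, 120], gamma=0.1)
    return optimizer, scheduler
```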
Figure A3 : Basic telemetry about the state of all networks in Table A1 during the first 4000 iterations of training ( eval accuracy , eval loss , average weight magnitude , sign differences from initialization , weight traces , gradient magnitude , and L2 distance / cosine similarity to the initial and final weights , for VGG-13 , ResNet-56 , ResNet-18 , and WRN-16-8 ) . Accompanies Figure 3 .
Figure A4 : The effect of training an IMP-derived sub-network initialized to the signs at iteration 0 or k and the magnitudes at iteration 0 or k. Accompanies Figure 4 .
Figure A5 : The effect of training an IMP-derived sub-network initialized with the weights at iteration k as shuffled within various structural elements . Accompanies Figure 5 .
Figure A6 : The effect of training an IMP-derived sub-network initialized with the weights at iteration k as shuffled within various structural elements where shuffling only occurs between weights with the same sign . Accompanies Figure 6 .
Figure A7 : The effect of training an IMP-derived sub-network initialized with the weights at iteration k and Gaussian noise of nσ , where σ is the standard deviation of the initialization distribution for each layer . Accompanies Figure 7 .
Figure A8 : The effective standard deviation of each of the perturbations studied in Section 5 as a function of mean evaluation accuracy ( across five seeds ) . Accompanies Figure 8 .
Figure A9 : The effect of pre-training CIFAR-10 with random labels . Accompanies Figure 9 .
Figure A10 : The effect of pre-training CIFAR-10 with self-supervised rotation . Accompanies Figure 9 .
This paper explores the properties of neural network training during the early phase. Studies of the lottery ticket hypothesis indicate that something important happens during the early phase of training, so rewinding the network should target these early iterations rather than initialization. So, what is important during this period of training? The paper explores this question from four aspects through empirical studies:
SP:2e7c43705291298211f8934cf38e84f8446d71ae
The Early Phase of Neural Network Training
1 INTRODUCTION Over the past decade , methods for successfully training big , deep neural networks have revolutionized machine learning . Yet surprisingly , the underlying reasons for the success of these approaches remain poorly understood , despite remarkable empirical performance ( Santurkar et al. , 2018 ; Zhang et al. , 2017 ) . A large body of work has focused on understanding what happens during the later stages of training ( Neyshabur et al. , 2019 ; Yaida , 2019 ; Chaudhuri & Soatto , 2017 ; Wei & Schwab , 2019 ) , while the initial phase has been less explored . However , a number of distinct observations indicate that significant and consequential changes are occurring during the most early stage of training . These include the presence of critical periods during training ( Achille et al. , 2019 ) , the dramatic reshaping of the local loss landscape ( Sagun et al. , 2017 ; Gur-Ari et al. , 2018 ) , and the necessity of rewinding in the context of the lottery ticket hypothesis ( Frankle et al. , 2019 ) . Here we perform a thorough investigation of the state of the network in this early stage . To provide a unified framework for understanding the changes the network undergoes during the early phase , we employ the methodology of iterative magnitude pruning with rewinding ( IMP ) , as detailed below , throughout the bulk of this work ( Frankle & Carbin , 2019 ; Frankle et al. , 2019 ) . The initial lottery ticket hypothesis , which was validated on comparatively small networks , proposed that small , sparse sub-networks found via pruning of converged larger models could be trained to high performance provided they were initialized with the same values used in the training of the unpruned model ( Frankle & Carbin , 2019 ) . However , follow-up work found that rewinding the weights to their values at some iteration early in the training of the unpruned model , rather than to their initial values , was necessary to achieve good performance on deeper networks such as ResNets ( Frankle et al. , 2019 ) . This observation suggests that the changes in the network during this initial phase are vital for the success of the training of small , sparse sub-networks . As a result , this paradigm provides a simple and quantitative scheme for measuring the importance of the weights at various points early in training within an actionable and causal framework . †Work done while an intern at Facebook AI Research . We make the following contributions , all evaluated across three different network architectures : 1 . We provide an in-depth overview of various statistics summarizing learning over the early part of training . 2 . We evaluate the impact of perturbing the state of the network in various ways during the early phase of training , finding that : ( i ) counter to observations in smaller networks ( Zhou et al. , 2019 ) , deeper networks are not robust to reinitializion with random weights , but maintained signs ( ii ) the distribution of weights after the early phase of training is already highly non-i.i.d. , as permuting them dramatically harms performance , even when signs are maintained ( iii ) both of the above perturbations can roughly be approximated by simply adding noise to the network weights , though this effect is stronger for ( ii ) than ( i ) 3 . 
We measure the data-dependence of the early phase of training , finding that pre-training using only p ( x ) can approximate the changes that occur in the early phase of training , though pre-training must last for far longer ( ∼32× longer ) and not be fed misleading labels . 2 KNOWN PHENOMENA IN THE EARLY PHASE OF TRAINING Lottery ticket rewinding : The original lottery ticket paper ( Frankle & Carbin , 2019 ) rewound weights to initialization , i.e. , k = 0 , during IMP . Follow up work on larger models demonstrated that it is necessary to rewind to a later point during training for IMP to succeed , i.e. , k < < T , where T is total training iterations ( Frankle et al. , 2019 ) . Notably , the benefit of rewinding to a later point in training saturates quickly , roughly between 500 and 2000 iterations for ResNet-20 on CIFAR-10 ( Figure 1 ) . This timescale is strikingly similar to the changes in the Hessian described below . Hessian eigenspectrum : The shape of the loss landscape around the network state also appears to change rapidly during the early phase of training ( Sagun et al. , 2017 ; Gur-Ari et al. , 2018 ) . At initialization , the Hessian of the loss contains a number of large positive and negative eigenvalues . However , very rapidly the curvature is reshaped in a few marked ways : a few large eigenvalues emerge , the bulk eigenvalues are close to zero , and the negative eigenvalues become very small . Moreover , once the Hessian spectrum has reshaped , gradient descent appears to occur largely within the top subspace of the Hessian ( Gur-Ari et al. , 2018 ) . These results have been largely confirmed in large scale studies ( Ghorbani et al. , 2019 ) , but note they depend to some extent on architecture and ( absence of ) batch normalization ( Ioffe & Szegedy , 2015 ) . A notable exception to this consistency is the presence of substantial L1 energy of negative eigenvalues for models trained on ImageNet . Critical periods in deep learning : Achille et al . ( 2019 ) found that perturbing the training process by providing corrupted data early on in training can result in irrevocable damage to the final performance of the network . Note that the timescales over which the authors find a critical period extend well beyond those we study here . However , architecture , learning rate schedule , and regularization all modify the timing of the critical period , and follow-up work found that critical periods were also present for regularization , in particular weight decay and data augmentation ( Golatkar et al. , 2019 ) . 3 PRELIMINARIES AND METHODOLOGY Networks : Throughout this paper , we study five standard convolutional neural networks for CIFAR-10 . These include the ResNet-20 and ResNet-56 architectures designed for CIFAR-10 ( He et al. , 2015 ) , the ResNet-18 architecture designed for ImageNet but commonly used on CIFAR-10 ( He et al. , 2015 ) , the WRN-16-8 wide residual network ( Zagoruyko & Komodakis , 2016 ) , and the VGG-13 network ( Simonyan & Zisserman ( 2015 ) as adapted by Liu et al . ( 2019 ) ) . Throughout the main body of the paper , we show ResNet-20 ; in Appendix B , we present the same experiments for the other networks . Unless otherwise stated , results were qualitatively similar across all three networks . All experiments in this paper display the mean and standard deviation across five replicates with different random seeds . See Appendix A for further model details . 
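The Hessian eigenspectrum measurements referenced above (Sagun et al., 2017; Gur-Ari et al., 2018) are typically obtained without materializing the Hessian, via Hessian-vector products. As a point of reference only (this is an illustrative sketch under our own assumptions, not the instrumentation used in this paper), the top eigenvalue of the loss Hessian can be estimated with power iteration in PyTorch:

```python
import torch

def hessian_vector_product(loss, params, vec):
    """Compute H @ vec for the Hessian of `loss` w.r.t. `params` via double backprop."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat_grad = torch.cat([g.reshape(-1) for g in grads])
    grad_dot_v = torch.dot(flat_grad, vec)
    hv = torch.autograd.grad(grad_dot_v, params, retain_graph=True)
    return torch.cat([h.reshape(-1) for h in hv])

def top_hessian_eigenvalue(loss, params, iters=20):
    """Estimate the largest-magnitude eigenvalue of the loss Hessian by power iteration."""
    n = sum(p.numel() for p in params)
    v = torch.randn(n, device=params[0].device)
    v = v / v.norm()
    eigval = 0.0
    for _ in range(iters):
        hv = hessian_vector_product(loss, params, v)
        eigval = torch.dot(v, hv).item()    # Rayleigh quotient v^T H v
        v = hv / (hv.norm() + 1e-12)
    return eigval
```

Here `params` is assumed to be the list of trainable parameters and `loss` a scalar computed on a fixed batch; running such a probe at initialization and again a few hundred iterations later is one way to observe the rapid reshaping of the spectrum described above.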
Iterative magnitude pruning with rewinding : In order to test the effect of various hypotheses about the state of sparse networks early in training , we use the Iterative Magnitude Pruning with rewinding ( IMP ) procedure of Frankle et al . ( 2019 ) to extract sub-networks from various points in training that could have learned on their own . The procedure involves training a network to completion , pruning the 20 % of weights with the lowest magnitudes globally throughout the network , and rewinding the remaining weights to their values from an earlier iteration k during the initial , pre-pruning training run . This process is iterated to produce networks with high sparsity levels . As demonstrated in Frankle et al . ( 2019 ) , IMP with rewinding leads to sparse sub-networks which can train to high performance even at high sparsity levels > 90 % . Figure 1 shows the results of the IMP with rewinding procedure , showing the accuracy of ResNet20 at increasing sparsity when performing this procedure for several rewinding values of k. For k ≥ 500 , sub-networks can match the performance of the original network with 16.8 % of weights remaining . For k > 2000 , essentially no further improvement is observed ( not shown ) . 4 THE STATE OF THE NETWORK EARLY IN TRAINING Many of the aforementioned papers refer to various points in the “ early ” part of training . In this section , we descriptively chart the state of ResNet-20 during the earliest phase of training to provide context for this related work and our subsequent experiments . We specifically focus on the first 4,000 iterations ( 10 epochs ) . See Figure A3 for the characterization of additional networks . We include a summary of these results for ResNet-20 as a timeline in Figure 2 , and include a broader timeline including results from several previous papers for ResNet-18 in Figure A1 . As shown in Figure 3 , during the earliest ten iterations , the network undergoes substantial change . It experiences large gradients that correspond to a rapid increase in distance from the initialization and a large number of sign changes of the weights . After these initial iterations , gradient magnitudes drop and the rate of change in each of the aforementioned quantities gradually slows through the remainder of the period we observe . Interestingly , gradient magnitudes reach a minimum after the first 200 iterations and subsequently increase to a stable level by iteration 500 . Evaluation accuracy , improves rapidly , reaching 55 % by the end of the first epoch ( 400 iterations ) , more than halfway to the final 91.5 % . By 2000 iterations , accuracy approaches 80 % . During the first 4000 iterations of training , we observe three sub-phases . In the first phase , lasting only the initial few iterations , gradient magnitudes are very large and , consequently , the network changes rapidly . In the second phase , lasting about 500 iterations , performance quickly improves , weight magnitudes quickly increase , sign differences from initialization quickly increase , and gradient magnitudes reach a minimum before settling at a stable level . Finally , in the third phase , all of these quantities continue to change in the same direction , but begin to decelerate . 5 PERTURBING NEURAL NETWORKS EARLY IN TRAINING Figure 1 shows that the changes in the network weights over the first 500 iterations of training are essential to enable high performance at high sparsity levels . 
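For concreteness, the IMP-with-rewinding loop can be sketched as follows. This is a minimal illustration under our own assumptions rather than the authors' implementation; in particular, `train_fn` is a hypothetical routine that trains the masked network, records a copy of the weights at the rewind iteration, and returns both that checkpoint and the final weights.

```python
import torch

def global_magnitude_mask(final_weights, mask, prune_frac=0.2):
    """Prune `prune_frac` of the currently surviving weights by global magnitude."""
    surviving = torch.cat([w[m.bool()].abs().flatten()
                           for w, m in zip(final_weights, mask)])
    k = max(1, int(prune_frac * surviving.numel()))
    threshold = torch.kthvalue(surviving, k).values
    return [m * (w.abs() > threshold).float() for w, m in zip(final_weights, mask)]

def imp_with_rewinding(model, train_fn, rewind_iter, rounds=10, prune_frac=0.2):
    """Iterative magnitude pruning with rewinding to iteration `rewind_iter` (k)."""
    mask = [torch.ones_like(p) for p in model.parameters()]
    # Initial run: train the dense network, keeping a checkpoint of the weights at iteration k.
    w_k, w_final = train_fn(model, mask, rewind_iter)
    for _ in range(rounds):
        mask = global_magnitude_mask(w_final, mask, prune_frac)
        with torch.no_grad():
            # Rewind surviving weights to their values at iteration k; pruned weights stay at zero.
            for p, wk, m in zip(model.parameters(), w_k, mask):
                p.copy_(wk * m)
        _, w_final = train_fn(model, mask, rewind_iter)
    return mask, w_k
```

The perturbation experiments in the next section modify the rewound weights (the state at iteration k) before the sub-network is retrained.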
What features of this weight transformation are necessary to recover increased performance? Can they be summarized by maintaining the weight signs, but discarding their magnitudes, as implied by Zhou et al. (2019)? Can they be represented distributionally? In this section, we evaluate these questions by perturbing the early state of the network in various ways. Concretely, we either add noise to or shuffle the weights of IMP sub-networks of ResNet-20 across different network sub-components and examine the effect on the network's ability to learn thereafter. The sub-networks derived by IMP with rewinding make it possible to understand the causal impact of perturbations on sub-networks that are as capable as the full networks but that decline more visibly in performance when improperly configured. To enable comparisons between the experiments in Section 5 and provide a common frame of reference, we measure the effective standard deviation of each perturbation, i.e., stddev(w_perturb − w_orig).

5.1 ARE SIGNS ALL YOU NEED?
Zhou et al. (2019) show that, for a set of small convolutional networks, signs alone are sufficient to capture the state of lottery ticket sub-networks. However, it is unclear whether signs are still sufficient for larger networks early in training. In Figure 4, we investigate the impact of combining the magnitudes of the weights from one time-point with the signs from another. We found that the signs at iteration 500 paired with the magnitudes from initialization (red line) or from a separate random initialization (green line) were insufficient to maintain the performance reached by using both signs and magnitudes from iteration 500 (orange line), and performance drops to that of using both magnitudes and signs from initialization (blue line). However, using the magnitudes from iteration 500 with the signs from initialization still performs substantially better than using the signs and magnitudes from initialization. In addition, the overall perturbation to the network from using the magnitudes at iteration 500 and the signs from initialization (mean: 0.0, stddev: 0.033) is smaller than from using the signs at iteration 500 and the magnitudes from initialization (0.0 ± 0.042, mean ± std). These results suggest that the change in weight magnitudes over the first 500 iterations of training is substantially more important than the change in the signs for enabling subsequent training. By iteration 2000, however, pairing the iteration 2000 signs with the magnitudes from initialization (red line) reaches performance similar to using the signs from initialization and the magnitudes from iteration 2000 (purple line), though not as high as using both from iteration 2000. This result suggests that the network signs undergo important changes between iterations 500 and 2000, even though only 9% of signs change during this period. Our results also suggest that, counter to the observations of Zhou et al. (2019) in shallow networks, signs alone are not sufficient in deeper networks.

5.2 ARE WEIGHT DISTRIBUTIONS I.I.D.?
Can the changes in weights over the first k iterations be approximated distributionally? To measure this, we permuted the weights at iteration k within various structural sub-components of the network (globally, within layers, and within convolutional filters). If networks are robust to these permutations, it would suggest that the weights in such sub-components might be approximated and sampled from.
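As an illustration of what these permutations involve (a sketch under our own assumptions, not the authors' code), the weights at iteration k can be shuffled globally, within each layer, or within each convolutional filter as follows:

```python
import torch

def shuffle_within(weights_k, scope="layer", generator=None):
    """Permute the iteration-k weight values within a structural scope:
    'global', 'layer', or 'filter' (the latter assumes 4-D conv weight tensors)."""
    if scope == "global":
        flat = torch.cat([w.flatten() for w in weights_k])
        flat = flat[torch.randperm(flat.numel(), generator=generator)]
        out, offset = [], 0
        for w in weights_k:
            out.append(flat[offset:offset + w.numel()].view_as(w))
            offset += w.numel()
        return out
    out = []
    for w in weights_k:
        if scope == "layer":
            perm = torch.randperm(w.numel(), generator=generator)
            out.append(w.flatten()[perm].view_as(w))
        elif scope == "filter":
            rows = w.clone().flatten(start_dim=1)    # one row per output filter
            for i in range(rows.shape[0]):
                rows[i] = rows[i][torch.randperm(rows.shape[1], generator=generator)]
            out.append(rows.view_as(w))
        else:
            raise ValueError(f"unknown scope: {scope}")
    return out
```

Shuffling preserves the empirical distribution of values within the chosen scope while destroying which value sits at which position, which is exactly the property being tested.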
As Figure 5 shows, however, we found that performance was not robust to shuffling weights globally (green line) or within layers (red line): performance drops substantially, to no better than that of the original initialization (blue line), at both 500 and 2000 iterations. (We also considered shuffling within the incoming and outgoing weights for each neuron, but performance was equivalent to shuffling within layers; we elide these lines for readability.) Shuffling within filters (purple line) performs slightly better, but it also results in a smaller overall perturbation (0.0 ± 0.092 for k = 500) than shuffling layerwise (0.0 ± 0.143) or globally (0.0 ± 0.144), suggesting that this change in perturbation strength may simply account for the difference.

Are the signs from the rewinding iteration, k, sufficient to recover the damage caused by permutation? In Figure 6, we also consider shuffling only amongst weights that have the same sign. Doing so substantially improves the performance of the filter-wise shuffle; however, it also reduces the extent of the overall perturbation (0.0 ± 0.049 for k = 500). It also improves the performance of shuffling within layers, slightly for k = 500 and substantially for k = 2000. We attribute the behavior for k = 2000 to the signs, just as in Figure 4: when the magnitudes are similar in value (Figure 4, red line) or distribution (Figure 6, red and green lines), using the signs improves performance. Reverting back to the initial signs while shuffling magnitudes within layers (brown line), however, damages the network too severely (0.0 ± 0.087 for k = 500) to yield any performance improvement over random noise. These results suggest that, while the signs from initialization are not sufficient for high performance at high sparsity, as shown in Section 5.1, the signs from the rewinding iteration are sufficient to recover the damage caused by permutation, at least to some extent.

5.3 IS IT ALL JUST NOISE?
Some of our previous results suggested that the impact of signs and permutations may simply reduce to adding noise to the weights. To evaluate this hypothesis, we next study the effect of simply adding Gaussian noise to the network weights at iteration k. To add noise appropriately for layers with different scales, the standard deviation of the noise added to each layer was normalized to a multiple of the standard deviation σ of the initialization distribution for that layer. In Figure 7, we see that for iteration k = 500, sub-networks can tolerate 0.5σ to 1σ of noise before performance degrades back to that of the original initialization at higher levels of noise. For iteration k = 2000, networks are surprisingly robust to noise up to 1σ, and even 2σ exhibits nontrivial performance. In Figure 8, we plot the performance of each network at a fixed sparsity level as a function of the effective standard deviation of the noise imposed by each of the aforementioned perturbations. We find that the standard deviation of the effective noise explains the resulting performance fairly well (k = 500: r = −0.672, p = 0.008; k = 2000: r = −0.726, p = 0.003). As expected, perturbations that preserved the performance of the network generally resulted in smaller changes to the state of the network at iteration k.
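The Gaussian-noise perturbation and the effective standard deviation used to compare all of the interventions above are straightforward to express. The sketch below is illustrative (the per-layer initialization standard deviations `init_std_per_layer` are assumed to be known, e.g., from each layer's initializer), not the authors' exact code.

```python
import torch

def add_scaled_gaussian_noise(weights_k, init_std_per_layer, scale=1.0, generator=None):
    """Add N(0, (scale * sigma_layer)^2) noise to the iteration-k weights of each layer."""
    noisy = []
    for w, sigma in zip(weights_k, init_std_per_layer):
        noise = torch.randn(w.shape, generator=generator) * (scale * sigma)
        noisy.append(w + noise)
    return noisy

def effective_noise_std(weights_perturbed, weights_orig):
    """stddev(w_perturb - w_orig), pooled over all layers (the x-axis of Figure 8)."""
    diffs = torch.cat([(wp - wo).flatten()
                       for wp, wo in zip(weights_perturbed, weights_orig)])
    return diffs.std().item()
```

The same `effective_noise_std` measurement applies to the sign, magnitude, and shuffling perturbations, which is what Figure 8 uses to place all interventions on a common axis.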
Interestingly, experiments that mixed signs and magnitudes from different points in training (green points) aligned least well with this pattern: the standard deviation of the perturbation is roughly similar among all of these experiments, but the accuracy of the resulting networks changes substantially. This result suggests that although a larger standard deviation of the noise is certainly indicative of lower accuracy, there are still specific perturbations that, while small in overall magnitude, can have a large effect on the network's ability to learn, suggesting that the observed perturbation effects are not, in fact, just a consequence of noise.

6 THE DATA-DEPENDENCE OF NEURAL NETWORKS EARLY IN TRAINING
Section 5 suggests that the change in network behavior by iteration k is not due to easily-ascertainable, distributional properties of the network weights and signs. Rather, it appears that training is required to reach these network states. It is unclear, however, to what extent various aspects of the data distribution are necessary. Namely, is the change in weights during the early phase of training dependent on p(x) or p(y|x)? Here, we attempt to answer this question by measuring the extent to which we can re-create a favorable network state for sub-network training using restricted information from the training data and labels. In particular, we consider pre-training the network with techniques that ignore labels entirely (self-supervised rotation prediction, Section 6.2), provide misleading labels (training with random labels, Section 6.1), or eliminate information from examples (blurring training examples, Section 6.3). We first train a randomly-initialized, unpruned network on CIFAR-10 on the pre-training task for a set number of epochs. After pre-training, we train the network normally as if the pre-trained state were the original initialization. We then use the state of the network at the end of the pre-training phase as the "initialization" from which to find masks for IMP. Finally, we examine the performance of the IMP-pruned sub-networks as initialized using the state after pre-training. This experiment determines the extent to which pre-training places the network in a state suitable for sub-network training, as compared to using the state of the network at iteration k of training on the original task.

6.1 RANDOM LABELS
To evaluate whether this phase of training is dependent on underlying structure in the data, we drew inspiration from Zhang et al. (2017) and pre-trained networks on data with randomized labels. This experiment tests whether the input distribution of the training data is sufficient to put the network in a position from which IMP with rewinding can find a sparse, trainable sub-network despite the presence of incorrect (not just missing) labels. Figure 9 (upper left) shows that pre-training on random labels for up to 10 epochs provides no improvement above rewinding to iteration 0, and that pre-training for longer begins to hurt accuracy. This result suggests that, though it is still possible that labels may not be required for learning, the presence of incorrect labels is sufficient to prevent learning which approximates the early phase of training.

6.2 SELF-SUPERVISED ROTATION PREDICTION
What if we remove labels entirely? Is p(x) sufficient to approximate the early phase of training?
Historically, neural network training often involved two steps: a self-supervised pre-training phase followed by a supervised phase on the target task (Erhan et al., 2010). Here, we consider one such self-supervised technique: rotation prediction (Gidaris et al., 2018). During the pre-training phase, the network is presented with a training image that has been randomly rotated by 90n degrees (where n ∈ {0, 1, 2, 3}). The network must classify examples by the value of n. If self-supervised pre-training can approximate the early phase of training, it would suggest that p(x) is sufficient on its own. Indeed, as shown in Figure 9 (upper right), this pre-training regime leads to well-trainable sub-networks, though networks must be trained for many more epochs compared to supervised training (40 compared to 1.25, or a factor of 32×). This result suggests that the labels for the ultimate task themselves are not necessary to put the network in such a state (although explicitly misleading labels are detrimental). We emphasize, however, that the duration of the pre-training phase required is an order of magnitude larger than the original rewinding iteration, suggesting that labels add important information which accelerates the learning process.

6.3 BLURRING TRAINING EXAMPLES
To probe the importance of p(x) for the early phase of training, we study the extent to which the training input distribution is necessary. Namely, we pre-train using blurred training inputs with the correct labels. Following Achille et al. (2019), we blur training inputs by downsampling by 4× and then upsampling back to the full size. Figure 9 (bottom left) shows that this pre-training method succeeds: after 40 epochs of pre-training, IMP with rewinding can find sub-networks that are similar in performance to those found after training on the original task for 500 iterations (1.25 epochs). Due to the success of the rotation and blurring pre-training tasks, we explored the effect of combining these pre-training techniques. Doing so tests the extent to which we can discard both the training labels and some information from the training inputs. Figure 9 (bottom right) shows that doing so provides the network too little information: no amount of pre-training we considered makes it possible for IMP with rewinding to find sub-networks that perform tangibly better than rewinding to iteration 0. Interestingly, however, as shown in Appendix B, trainable sub-networks are found for VGG-13 with this pre-training regime, suggesting that different network architectures have different sensitivities to the deprivation of labels and input content.

6.4 SPARSE PRETRAINING
Since sparse sub-networks are often challenging to train from scratch without the proper initialization (Han et al., 2015; Liu et al., 2019; Frankle & Carbin, 2019), does pre-training make it easier for sparse neural networks to learn? Doing so would serve as a rough form of curriculum learning (Bengio et al., 2009) for sparse neural networks. We experimented with training sparse sub-networks of ResNet-20 (IMP sub-networks, randomly reinitialized sub-networks, and randomly pruned sub-networks) first on self-supervised rotation and then on the main task, but found no benefit beyond rewinding to iteration 0 (Figure 10). Moreover, doing so when starting from a sub-network rewound to iteration 500 actually hurts final accuracy.
This result suggests that while pre-training is sufficient to approximate the early phase of supervised training with an appropriately structured mask , it is not sufficient to do so with an inappropriate mask . 7 DISCUSSION In this paper , we first performed extensive measurements of various statistics summarizing learning over the early part of training . Notably , we uncovered 3 sub-phases : in the very first iterations , gradient magnitudes are anomalously large and motion is rapid . Subsequently , gradients overshoot to smaller magnitudes before leveling off while performance increases rapidly . Then , learning slowly begins to decelerate . We then studied a suite of perturbations to the network state in the early phase finding that , counter to observations in smaller networks ( Zhou et al. , 2019 ) , deeper networks are not robust to reinitializing with random weights with maintained signs . We also found that the weight distribution after the early phase of training is highly non-independent . Finally , we measured the data-dependence of the early phase with the surprising result that pre-training on a self-supervised task yields equivalent performance to late rewinding with IMP . These results have significant implications for the lottery ticket hypothesis . The seeming necessity of late rewinding calls into question certain interpretations of lottery tickets as well as the ability to identify sub-networks at initialization . Our observation that weights are highly non-independent at the rewinding point suggests that the weights at this point can not be easily approximated , making approaches which attempt to “ jump ” directly to the rewinding point unlikely to succeed . However , our result that labels are not necessary to approximate the rewinding point suggests that the learning during this phase does not require task-specific information , suggesting that rewinding may not be necessary if networks are pre-trained appropriately . REFERENCES Alessandro Achille , Matteo Rovere , and Stefano Soatto . Critical learning periods in deep neural networks . 2019 . Yoshua Bengio , Jérôme Louradour , Ronan Collobert , and Jason Weston . Curriculum learning . In Proceedings of the 26th annual international conference on machine learning , pp . 41–48 . ACM , 2009 . Pratik Chaudhuri and Stefano Soatto . Stochastic gradient descent performs variational inference , converges to limit cycles for deep networks . 2017 . URL https : //arxiv.org/abs/1710 . 11029 . Dumitru Erhan , Yoshua Bengio , Aaron Courville , Pierre-Antoine Manzagol , Pascal Vincent , and Samy Bengio . Why does unsupervised pre-training help deep learning ? Journal of Machine Learning Research , 11 ( Feb ) :625–660 , 2010 . Jonathan Frankle and Michael Carbin . The lottery ticket hypothesis : Finding sparse , trainable neural networks . In International Conference on Learning Representations , 2019 . URL https : // openreview.net/forum ? id=rJl-b3RcF7 . Jonathan Frankle , Gintare Karolina Dziugaite , Daniel M Roy , and Michael Carbin . Stabilizing the lottery ticket hypothesis . arXiv preprint arXiv:1903.01611 , 2019 . Behrooz Ghorbani , Shankar Krishnan , and Ying Xiao . An investigation into neural net optimization via hessian eigenvalue density . In Proceedings of the 36th International Conference on Machine Learning , volume 97 , pp . 2232–2241 , 2019 . URL http : //proceedings.mlr.press/ v97/ghorbani19b.html . Spyros Gidaris , Praveer Singh , and Nikos Komodakis . 
Unsupervised representation learning by predicting image rotations . In International Conference on Learning Representations , 2018 . URL https : //openreview.net/forum ? id=S1v4N2l0- . Aditya Golatkar , Alessandro Achille , and Stefano Soatto . Time matters in regularizing deep networks : Weight decay and data augmentation affect early learning dynamics , matter little near convergence . 2019 . Guy Gur-Ari , Daniel A Roberts , and Ethan Dyer . Gradient descent happens in a tiny subspace . arXiv preprint arXiv:1812.04754 , 2018 . Song Han , Jeff Pool , John Tran , and William Dally . Learning both weights and connections for efficient neural network . In Advances in neural information processing systems , pp . 1135–1143 , 2015 . K He , X Zhang , S Ren , and J Sun . Deep residual learning for image recognition . In Computer Vision and Pattern Recogntion ( CVPR ) , volume 5 , pp . 6 , 2015 . Sergey Ioffe and Christian Szegedy . Batch normalization : Accelerating deep network training by reducing internal covariate shift . In Proceedings of the 32Nd International Conference on International Conference on Machine Learning - Volume 37 , ICML ’ 15 , pp . 448–456 . JMLR.org , 2015 . URL http : //dl.acm.org/citation.cfm ? id=3045118.3045167 . Zhuang Liu , Mingjie Sun , Tinghui Zhou , Gao Huang , and Trevor Darrell . Rethinking the value of network pruning . In International Conference on Learning Representations , 2019 . URL https : //openreview.net/forum ? id=rJlnB3C5Ym . Behnam Neyshabur , Zhiyuan Li , Srinadh Bhojanapalli , Yann LeCun , and Nathan Srebro . Towards understanding the role of over-parametrization in generalization of neural networks . 2019 . Levent Sagun , Utku Evci , Ugur Guney , Yann Dauphin , and Leon Bottou . Empirical analysis of the hessian of over-parametrized neural networks . 2017 . URL https : //arxiv.org/abs/ 1706.04454 . Shibani Santurkar , Dimitris Tsipras , Andrew Ilyas , and Aleksander Madry . How does batch normalization help optimization ? In Advances in neural information processing systems , 2018 . Karen Simonyan and Andrew Zisserman . Very deep convolutional networks for large-scale image recognition . 2015 . Mingwei Wei and David Schwab . How noise during training affects the hessian spectrum in overparameterized neural networks . 2019 . Sho Yaida . Fluctuation-dissipation relations for stochastic gradient descent . 2019 . Sergey Zagoruyko and Nikos Komodakis . Wide residual networks . arXiv preprint arXiv:1605.07146 , 2016 . Chiyuan Zhang , Samy Bengio , Moritz Hardt , Benjamin Recht , and Oriol Vinyals . Understanding deep learning requires rethinking generalization . 2017 . Hattie Zhou , Janice Lan , Rosanne Liu , and Jason Yosinski . Deconstructing lottery tickets : Zeros , signs , and the supermask . 2019 . A MODEL DETAILS Network Epochs Batch Size Learning Rate Parameters Eval Accuracy ResNet-20 160 128 0.1 ( Mom 0.9 ) 272K 91.5 ± 0.2 % ResNet-56 160 128 0.1 ( Mom 0.9 ) 856K 93.0 ± 0.1 % ResNet-18 160 128 0.1 ( Mom 0.9 ) 11.2M 86.8 ± 0.3 % VGG-13 160 64 0.1 ( Mom 0.9 ) 9.42M 93.5 ± 0.1 % WRN-16-8 160 128 0.1 ( Mom 0.9 ) 11.1M 94.8 ± 0.1 % Table A1 : Summary of the networks we study in this paper . We present ResNet-20 in the main body of the paper and the remaining networks in Appendix B . Table A1 summarizes the networks . All networks follow the same training regime : we train with SGD for 160 epochs starting at learning rate 0.1 ( momentum 0.9 ) and drop the learning rate by a factor of ten at epoch 80 and again at epoch 120 . 
Training includes weight decay with weight 1e-4. Data is augmented with normalization, random flips, and random crops of up to four pixels in any direction.

B EXPERIMENTS FOR OTHER NETWORKS
Figure A1: Rough timeline of the early phase of training for ResNet-18 on CIFAR-10, including results from previous papers. (The figure lays out training-iteration ranges 1-10, 10-500, 500-2000, and 2000+ and annotates them with the phenomena described in Section 4 and prior work, e.g., very large gradient magnitudes and rapid motion in weight space in the first iterations, rewinding becoming effective around iteration 250, the Hessian eigenspectrum separating around iteration 700 (Gur-Ari et al., 2018), and critical periods ending around iteration 8000 (Achille et al., 2019; Golatkar et al., 2019).)

Figure A2: The effect of IMP rewinding iteration on the accuracy of sub-networks at various levels of sparsity. Accompanies Figure 1.
Figure A3: Basic telemetry about the state of all networks in Table A1 during the first 4000 iterations of training (evaluation accuracy and loss, average weight magnitude, sign differences from initialization, weight traces, gradient magnitude, and L2 distance and cosine similarity to the initial and final weights). Accompanies Figure 3.
Figure A4: The effect of training an IMP-derived sub-network initialized to the signs at iteration 0 or k and the magnitudes at iteration 0 or k. Accompanies Figure 4.

Figure A5: The effect of training an IMP-derived sub-network initialized with the weights at iteration k as shuffled within various structural elements. Accompanies Figure 5.
Figure A6: The effect of training an IMP-derived sub-network initialized with the weights at iteration k as shuffled within various structural elements, where shuffling only occurs between weights with the same sign. Accompanies Figure 6.
Figure A7: The effect of training an IMP-derived sub-network initialized with the weights at iteration k and Gaussian noise of nσ, where σ is the standard deviation of the initialization distribution for each layer. Accompanies Figure 7.

Figure A8: The effective standard deviation of each of the perturbations studied in Section 5 as a function of mean evaluation accuracy (across five seeds). Accompanies Figure 8.

Figure A9: The effect of pre-training CIFAR-10 with random labels. Accompanies Figure 9.
Figure A10: The effect of pre-training CIFAR-10 with self-supervised rotation. Accompanies Figure 9.
This paper examines the changes that networks undergo during the early phase of training. The authors conduct extensive measurements of the network state and its updates during the early iterations of training. Based on these observations, they find that: i) deep networks are not robust to reinitialization with random weights, even when signs are maintained; ii) the weight distribution is highly non-i.i.d. after the early phase of training, which is why permuting weights dramatically harms performance; and iii) the changes in the networks during this phase are largely label-agnostic. The authors claim these results can play an important role in explaining the network changes during the initial critical period.
SP:2e7c43705291298211f8934cf38e84f8446d71ae
Finding and Visualizing Weaknesses of Deep Reinforcement Learning Agents
1 INTRODUCTION . Humans can naturally learn and perform well at a wide variety of tasks , driven by instinct and practice ; more importantly , they are able to justify why they would take a certain action . Artificial agents should be equipped with the same capability , so that their decision making process is interpretable by researchers . Following the enormous success of deep learning in various domains , such as the application of convolutional neural networks ( CNNs ) to computer vision ( LeCun et al. , 1998 ; Krizhevsky et al. , 2012 ; Long et al. , 2015 ; Ren et al. , 2015 ) , a need for understanding and analyzing the trained models has arisen . Several such methods have been proposed and work well in this domain , for example for image classification ( Simonyan et al. , 2013 ; Zeiler & Fergus , 2014 ; Fong & Vedaldi , 2017 ) , sequential models ( Karpathy et al. , 2016 ) or through attention ( Xu et al. , 2015 ) . Deep reinforcement learning ( RL ) agents also use CNNs to gain perception and learn policies directly from image sequences . However , little work has been so far done in analyzing RL networks . We found that directly applying common visualization techniques to RL agents often leads to poor results . In this paper , we present a novel technique to generate insightful visualizations for pre-trained agents . Currently , the generalization capability of an agent is—in the best case—evaluated on a validation set of scenarios . However , this means that this validation set has to be carefully crafted to encompass as many potential failure cases as possible . As an example , consider the case of a self-driving agent , where it is near impossible to exhaustively model all interactions of the agent with other drivers , pedestrians , cyclists , weather conditions , even in simulation . Our goal is to extrapolate from the training scenes to novel states that induce a specified behavior in the agent . In our work , we learn a generative model of the environment as an input to the agent . This allows us to probe the agent ’ s behavior in novel states created by an optimization scheme to induce specific actions in the agent . For example we could optimize for states in which the agent sees the only option as being to slam on the brakes ; or states in which the agent expects to score exceptionally low . Visualizing such states allows to observe the agent ’ s interaction with the environment in critical scenarios to understand its shortcomings . Furthermore , it is possible to generate states based on an objective function specified by the user . Lastly , our method does not affect and does not depend on the training of the agent and thus is applicable to a wide variety of reinforcement learning algorithms . Our contributions are : 1 . We introduce a series of objectives to quantify different forms of interestingness and danger of states for RL agents . 2 . We evaluate our algorithm on 50 Atari games and a driving simulator , and compare performance across three different reinforcement learning algorithms . 3 . We quantitatively evaluate parts of our model in a comprehensive loss study ( Tab . 1 ) and analyze generalization though a pixel level analysis of synthesized unseen states ( Tab . 2 ) . 4 . An extensive supplement shows additional comprehensive visualizations on 50 Atari games . We will describe our method before we will discuss relevant related work from the literature . 2 METHODS . 
We will first introduce the notation and definitions that will be used throughout the remainder of the paper. We formulate the reinforcement learning problem as a discounted, infinite-horizon Markov decision process (S, A, γ, P, r), where at every time step t the agent finds itself in a state s_t ∈ S and chooses an action a_t ∈ A following its policy π_θ(a|s_t). Then the environment transitions from state s_t to state s_{t+1} given the model P(s_{t+1}|s_t, a_t). Our goal is to visualize RL agents given a user-defined objective function, without adding constraints on the optimization process of the agent itself, i.e., assuming that we are given a previously trained agent with fixed parameters θ. We approach visualization via a generative model over the state space S and synthesize states that lead to an interesting, user-specified behavior of the agent. This could be, for instance, states in which the agent expresses high uncertainty regarding which action to take, or states in which it sees no good way out. This approach is fundamentally different from saliency-based methods, as those always need a test-set input on which the saliency maps can be computed. The generative model constrains the optimization of states to induce specific agent behavior.

2.1 STATE MODEL
Often in feature visualization for CNNs, an image is optimized starting from random noise. However, we found this formulation too unconstrained, often ending up in local minima or fooling examples (Figure 3a). To constrain the optimization problem, we learn a generative model on a set S of states generated by the given agent acting in the environment. The model is inspired by variational autoencoders (VAEs) (Kingma & Welling, 2013) and consists of an encoder f(s) = (f_µ(s), f_σ(s)) ∈ R^{2×n} that maps inputs to a Gaussian distribution in latent space and a decoder g(z(s)) = ŝ that reconstructs the input. z(s) = f_µ(s) + ε f_σ(s) is a sample from the predicted distribution, obtained via the reparametrization trick, where ε is sampled from N(0, I_n). The training of our generator has three objectives. First, we want the generated samples to be close to the manifold of valid states s. To avoid fooling examples, the samples should also induce correct behavior in the agent, and lastly, sampling states needs to be efficient. We encode these goals in three corresponding loss terms:

L(s) = L_p(s) + η L_a(s) + KL(f(s), N(0, I_n)),   (1)

The role of L_p(s) is to ensure that the reconstruction g(z(s)) is close to the input s, i.e., that ‖g(z(s)) − s‖²₂ is minimized. We observe that in the typical reinforcement learning benchmarks, such as Atari games, small details—e.g., the ball in Pong or Breakout—are often critical for the decision making of the agent. However, a typical VAE model tends to yield blurry samples that are not able to capture such details. To address this issue, we model the reconstruction error L_p(s) with an attentive loss term, which leverages the saliency of the agent to put focus on critical regions of the reconstruction. The saliency maps are computed by guided backpropagation of the policy's gradient with respect to the state:

L_p(s) = \sum_{i=1}^{d} \big( g(z(s))_i - s_i \big)^2 \, \frac{|\nabla \pi(s)_i|}{\sum_{j=1}^{d} |\nabla \pi(s)_j|},   (2)

where i and j iterate over all d pixels in the image/gradient.
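A possible implementation of the saliency-weighted reconstruction term and the combined objective is sketched below. This is our illustrative reading of Eqs. (1)-(2), assuming batched image states of shape (N, C, H, W), a precomputed guided-backpropagation saliency map of the same shape, and the usual log-variance parameterization of the encoder's Gaussian; it is not the authors' released code.

```python
import torch

def attentive_reconstruction_loss(recon, state, saliency, eps=1e-8):
    """Eq. (2): pixel-wise squared error weighted by the agent's normalized saliency."""
    w = saliency.abs()
    w = w / (w.sum(dim=(1, 2, 3), keepdim=True) + eps)    # weights sum to 1 per sample
    return ((recon - state) ** 2 * w).sum(dim=(1, 2, 3)).mean()

def generator_loss(recon, state, saliency, l_agent, mu, logvar, eta=1.0):
    """Eq. (1): attentive reconstruction + eta * agent-perception term + KL to N(0, I_n)."""
    l_p = attentive_reconstruction_loss(recon, state, saliency)
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean()
    return l_p + eta * l_agent + kl
```

Here `l_agent` stands for the agent-perception term introduced next, passed in as a precomputed scalar.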
As discussed earlier , gradient based reconstruction methods might not be ideal for explaining a CNN ’ s reasoning process ( Kindermans et al. , 2017a ) . Here however , we only use it to focus the reconstruction on salient regions of the agent and do not use it to explain the agent ’ s behavior for which these methods are ideally suited . This approach puts emphasis on details ( salient regions ) when training the generative model . Since we are interested in the actions of the agent on synthesized states , the second objective La ( s ) is used to model the perception of the agent : La ( s ) = ‖A ( s ) −A ( g ( z ( s ) ) ) ‖22 , ( 3 ) where A is a generic formulation of the output of the agent . For a DQN for example , π ( s ) = maxaA ( s ) a , i.e . the final action is the one with the maximal Q-value . This term encourages the reconstructions to be interpreted by the agent the same way as the original inputs s. The last term KL ( f ( s ) , N ( 0 , In ) ) ensures that the distribution predicted by the encoder f stays close to a Gaussian distribution . This allows us to initialize the optimization with a reasonable random vector later and forms the basis of a regularizer . Thus , after training , the model approximates the distribution of states p ( s ) by sampling z directly from N ( 0 , In ) . The generative model ( f and g ) is trained with ( 1 ) . We will then use the generator inside an optimization scheme to generate state samples that satisfy a user defined target objective . 2.2 SAMPLING STATES OF INTEREST . Training a generator with the objective function of Equation 1 allows us to sample states that are not only visually close to the real ones , but which the agent can also interpret and act upon as if they were states from a real environment . We can further exploit this property and formulate an energy optimization scheme to generate samples that satisfy a specified objective . The energy operates on the latent space x = ( xµ , xσ ) of the generator and is defined as the sum of a target function T on agent ’ s policy and a regularizer R E ( x ) = T ( π ( g ( xµ + xσ ) ) + αR ( x ) . ( 4 ) The target function can be defined freely by the user and depends on the agent that is being visualized . For a DQN , one could for example define T as the Q-value of a certain action , e.g . pressing the brakes of a car . In section 2.3 , we show several examples of targets that are interesting to analyze . The regularizer R can again be chosen as the KL divergence between x and the normal distribution : R ( x ) = KL ( x , N ( 0 , In ) ) , ( 5 ) forcing the samples that are drawn from the distribution x to be close to the Gaussian distribution that the generator was trained with . We can optimize equation 4 with gradient descent on x as detailed in algorithm 1 . 2.3 TARGET FUNCTIONS . Depending on the agent , one can define several interesting target functions T – we present and explore seven below , which we refer to as : T+ , T− , T± , S+ , S− , and action maximization . For a DQN the previously discussed action maximization is interesting to find situations in which the agent assigns a high value to a certain action e.g . Tleft ( s ) = −Aleft ( s ) . Other states of interest are those to which the agent assigns a low ( or high ) value for all possible actions A ( s ) = q = ( q1 , . . . , qm ) . 
Consequently , one can optimize towards a low Q-value for the highest valued action with the following objective : T− ( q ) = ∑m i=1 qie βqi∑m k=1 e βqk , ( 6 ) Algorithm 1 Optimize x for target T 1 : Input : Target objective T , step size λ , regularizer weight α , trained generator g 2 : Output : x 3 : xµ ← 0 4 : xσ ← In 5 : while not converged do 6 : ← sample from N ( 0 , In ) 7 : z ← xµ + xσ . sample z from x 8 : s← g ( z ) . generate state s using g 9 : xµ ← xµ − λ ∂∂xµ ( T ( π ( s ) ) + αR ( x ) ) . gradient step using ( 4 ) and ( 5 ) 10 : xσ ← xσ − λ ∂∂xσ ( T ( π ( s ) ) + αR ( x ) ) 11 : end while ( a ) Kung Fu Master - T± enemies on both sides ( b ) Kung Fu Master - T+ easy , many points to score ( c ) Kung Fu Master - T− no enemies ( d ) Pong - T+ scoring a point ( e ) Space Invaders - T+ shooting an enemy ( f ) Enduro - T+ overtaking an opponent ( g ) Name This Game - T± whether to refill air ( h ) Seaquest - T− out of oxygen ( i ) Beamrider - Tleft avoiding the enemy Figure 1 : Qualitative Results : Visualization of different target functions ( Sec . 2.3 ) . T+ generates high reward and T− low reward states ; T± generates states in which one action is highly beneficial and another is bad . For a long list of results , with over 50 Atari games , please see the appendix . where β > 0 controls the sharpness of the soft maximum formulation . Analogously , one can maximize the lowest Q-value with T+ ( q ) = −T− ( −q ) . We can also optimize for interesting situations in which one action is of very high value and another is of very low value by defining T± ( q ) = T− ( q ) − T+ ( q ) . ( 7 ) Finally , we can optimize for overall good states with S+ ( q ) = ∑m i=1 qi and overall bad states with S− = S− ( −q ) .
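For concreteness, these target functions can be written as plain functions of a Q-value vector q. The sketch below (NumPy is an assumption) follows Equations 6 and 7 literally; the definition of S− is printed above in a garbled form ("S− = S−(−q)"), so the sketch uses the natural reading S−(q) = S+(−q) = −Σᵢ qᵢ, which should be treated as an interpretation rather than the authors' exact definition.

```python
import numpy as np

def soft_max_weighted(q, beta):
    """Equation 6: sum_i q_i * exp(beta*q_i) / sum_k exp(beta*q_k), a soft maximum for beta > 0."""
    w = np.exp(beta * (q - q.max()))        # the shift cancels in the ratio; added for stability
    return float((q * w).sum() / w.sum())

def t_minus(q, beta=5.0):
    """Optimize towards a low Q-value for the highest-valued action."""
    return soft_max_weighted(np.asarray(q, float), beta)

def t_plus(q, beta=5.0):
    """'Maximize the lowest Q-value': T+(q) = -T-(-q)."""
    return -t_minus(-np.asarray(q, float), beta)

def t_pm(q, beta=5.0):
    """Equation 7: one action of very high value, another of very low value."""
    return t_minus(q, beta) - t_plus(q, beta)

def s_plus(q):
    """Overall good states: S+(q) = sum_i q_i."""
    return float(np.sum(q))

def s_minus(q):
    """Overall bad states, read here as S-(q) = S+(-q) = -sum_i q_i."""
    return -float(np.sum(q))

def t_action(q, a):
    """Action maximization for one action, e.g. T_left(s) = -A_left(s)."""
    return -float(np.asarray(q, float)[a])

q = np.array([0.1, 2.3, -0.5, 0.4])          # Q-values of a 4-action agent
print(t_minus(q), t_plus(q), t_pm(q), s_plus(q), s_minus(q), t_action(q, 1))
```

Each of these returns a scalar that can serve as T in the energy of Equation 4; a differentiable (e.g. PyTorch) version is needed when the energy is minimized by gradient descent as in Algorithm 1.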
This paper proposes a generative technique to sample "interesting" states useful for analyzing the behavior of deep reinforcement learning agents. In this context, the concept of "interesting" is defined via user-specified target functions, e.g. states that arise as a consequence of taking specific actions (such as actions associated with high or low Q-values). The approach is evaluated in the Atari domain and in an autonomous driving simulator. Results are mainly presented as visualizations of interesting states that are described verbally.
SP:3ae544b075487fc2a3731e1c017546ef0ff525e9
Finding and Visualizing Weaknesses of Deep Reinforcement Learning Agents
1 INTRODUCTION . Humans can naturally learn and perform well at a wide variety of tasks , driven by instinct and practice ; more importantly , they are able to justify why they would take a certain action . Artificial agents should be equipped with the same capability , so that their decision making process is interpretable by researchers . Following the enormous success of deep learning in various domains , such as the application of convolutional neural networks ( CNNs ) to computer vision ( LeCun et al. , 1998 ; Krizhevsky et al. , 2012 ; Long et al. , 2015 ; Ren et al. , 2015 ) , a need for understanding and analyzing the trained models has arisen . Several such methods have been proposed and work well in this domain , for example for image classification ( Simonyan et al. , 2013 ; Zeiler & Fergus , 2014 ; Fong & Vedaldi , 2017 ) , sequential models ( Karpathy et al. , 2016 ) or through attention ( Xu et al. , 2015 ) . Deep reinforcement learning ( RL ) agents also use CNNs to gain perception and learn policies directly from image sequences . However , little work has been so far done in analyzing RL networks . We found that directly applying common visualization techniques to RL agents often leads to poor results . In this paper , we present a novel technique to generate insightful visualizations for pre-trained agents . Currently , the generalization capability of an agent is—in the best case—evaluated on a validation set of scenarios . However , this means that this validation set has to be carefully crafted to encompass as many potential failure cases as possible . As an example , consider the case of a self-driving agent , where it is near impossible to exhaustively model all interactions of the agent with other drivers , pedestrians , cyclists , weather conditions , even in simulation . Our goal is to extrapolate from the training scenes to novel states that induce a specified behavior in the agent . In our work , we learn a generative model of the environment as an input to the agent . This allows us to probe the agent ’ s behavior in novel states created by an optimization scheme to induce specific actions in the agent . For example we could optimize for states in which the agent sees the only option as being to slam on the brakes ; or states in which the agent expects to score exceptionally low . Visualizing such states allows to observe the agent ’ s interaction with the environment in critical scenarios to understand its shortcomings . Furthermore , it is possible to generate states based on an objective function specified by the user . Lastly , our method does not affect and does not depend on the training of the agent and thus is applicable to a wide variety of reinforcement learning algorithms . Our contributions are : 1 . We introduce a series of objectives to quantify different forms of interestingness and danger of states for RL agents . 2 . We evaluate our algorithm on 50 Atari games and a driving simulator , and compare performance across three different reinforcement learning algorithms . 3 . We quantitatively evaluate parts of our model in a comprehensive loss study ( Tab . 1 ) and analyze generalization though a pixel level analysis of synthesized unseen states ( Tab . 2 ) . 4 . An extensive supplement shows additional comprehensive visualizations on 50 Atari games . We will describe our method before we will discuss relevant related work from the literature . 2 METHODS . 
We will first introduce the notation and definitions that will be used through out the remainder of the paper . We formulate the reinforcement learning problem as a discounted , infinite horizon Markov decision process ( S , A , γ , P , r ) , where at every time step t the agent finds itself in a state st ∈ S and chooses an action at ∈ A following its policy πθ ( a|st ) . Then the environment transitions from state st to state st+1 given the model P ( st+1|st , at ) . Our goal is to visualize RL agents given a user-defined objective function , without adding constraints on the optimization process of the agent itself , i.e . assuming that we are given a previously trained agent with fixed parameters θ . We approach visualization via a generative model over the state space S and synthesize states that lead to an interesting , user-specified behavior of the agent . This could be , for instance , states in which the agent expresses high uncertainty regarding which action to take or states in which it sees no good way out . This approach is fundamentally different than saliency-based methods as they always need an input for the test-set on which the saliency maps can be computed . The generative model constrains the optimization of states to induce specific agent behavior . 2.1 STATE MODEL . Often in feature visualization for CNNs , an image is optimized starting from random noise . However , we found this formulation too unconstrained , often ending up in local minima or fooling examples ( Figure 3a ) . To constrain the optimization problem we learn a generative model on a set S of states generated by the given agent that is acting in the environment . The model is inspired by variational autoencoders ( VAEs ) ( Kingma & Welling , 2013 ) and consists of an encoder f ( s ) = ( fµ ( s ) , fσ ( s ) ) ∈ R2×n that maps inputs to a Gaussian distribution in latent space and a decoder g ( z ( s ) ) = ŝ that reconstructs the input . z ( s ) = fµ ( s ) + fσ ( s ) is a sample from the predicted distribution , that is obtained via the reparametrization trick , where is sampled from N ( 0 , In ) . The training of our generator has three objectives . First , we want the generated samples to be close to the manifold of valid states s. To avoid fooling examples , the samples should also induce correct behavior in the agent and lastly , sampling states needs to be efficient . We encode these goals in three corresponding loss terms . L ( s ) = Lp ( s ) + ηLa ( s ) +KL ( f ( s ) , N ( 0 , In ) ) , ( 1 ) The role of Lp ( s ) is to ensure that the reconstruction g ( z ( s ) ) is close to the input s such that ‖ g ( z ( s ) ) − s ‖22 is minimized . We observe that in the typical reinforcement learning benchmarks , such as Atari games , small details—e.g . the ball in Pong or Breakout—are often critical for the decision making of the agent . However , a typical VAE model tends to yield blurry samples that are not able to capture such details . To address this issue , we model the reconstruction error Lp ( s ) with an attentive loss term , which leverages the saliency of the agent to put focus on critical regions of the reconstruction . The saliency maps are computed by guided backpropagation of the policy ’ s gradient with respect to the state . Lp ( s ) = d∑ i=1 ( g ( z ( s ) ) i − si ) 2 |∇π ( s ) i|∑d j=1 | ∇π ( s ) j | , ( 2 ) where i and j iterate over all d pixels in the image/gradient . 
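Putting the three terms of Equation 1 together, a sketch of the full generator training objective might look as follows. PyTorch is assumed; `encoder`, `decoder` and `agent` are assumed callables and not names from the paper; the reconstruction term is shown in its plain squared-error form for brevity (the saliency weighting of Equation 2 would replace it); and the KL term uses the closed form for a diagonal Gaussian against N(0, In).

```python
import torch

def generator_training_loss(state, encoder, decoder, agent, eta=1.0):
    """One evaluation of Equation 1: L(s) = L_p(s) + eta * L_a(s) + KL(f(s), N(0, I_n)).

    encoder(s) -> (mu, log_var)   # f(s): a diagonal Gaussian over the n-dim latent space
    decoder(z) -> reconstruction  # g(z)
    agent(s)   -> output A(s)     # e.g. a (B, m) tensor of Q-values
    """
    mu, log_var = encoder(state)
    std = torch.exp(0.5 * log_var)
    z = mu + std * torch.randn_like(std)                      # reparametrization trick
    recon = decoder(z)

    # L_p: reconstruction error (plain here; Eq. 2 weights it by the agent's saliency).
    l_p = ((recon - state) ** 2).flatten(1).sum(dim=1).mean()

    # L_a (Eq. 3): the agent should respond to g(z(s)) as it responds to s.
    l_a = ((agent(recon) - agent(state).detach()) ** 2).sum(dim=1).mean()

    # KL(f(s), N(0, I_n)) for a diagonal Gaussian, averaged over the batch.
    kl = (-0.5 * (1 + log_var - mu.pow(2) - log_var.exp())).sum(dim=1).mean()

    return l_p + eta * l_a + kl
```

Detaching the agent's response to the original state reflects the fact that only the generator, not the agent, is being trained; this is a design choice of the sketch rather than something the text spells out.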
As discussed earlier , gradient based reconstruction methods might not be ideal for explaining a CNN ’ s reasoning process ( Kindermans et al. , 2017a ) . Here however , we only use it to focus the reconstruction on salient regions of the agent and do not use it to explain the agent ’ s behavior for which these methods are ideally suited . This approach puts emphasis on details ( salient regions ) when training the generative model . Since we are interested in the actions of the agent on synthesized states , the second objective La ( s ) is used to model the perception of the agent : La ( s ) = ‖A ( s ) −A ( g ( z ( s ) ) ) ‖22 , ( 3 ) where A is a generic formulation of the output of the agent . For a DQN for example , π ( s ) = maxaA ( s ) a , i.e . the final action is the one with the maximal Q-value . This term encourages the reconstructions to be interpreted by the agent the same way as the original inputs s. The last term KL ( f ( s ) , N ( 0 , In ) ) ensures that the distribution predicted by the encoder f stays close to a Gaussian distribution . This allows us to initialize the optimization with a reasonable random vector later and forms the basis of a regularizer . Thus , after training , the model approximates the distribution of states p ( s ) by sampling z directly from N ( 0 , In ) . The generative model ( f and g ) is trained with ( 1 ) . We will then use the generator inside an optimization scheme to generate state samples that satisfy a user defined target objective . 2.2 SAMPLING STATES OF INTEREST . Training a generator with the objective function of Equation 1 allows us to sample states that are not only visually close to the real ones , but which the agent can also interpret and act upon as if they were states from a real environment . We can further exploit this property and formulate an energy optimization scheme to generate samples that satisfy a specified objective . The energy operates on the latent space x = ( xµ , xσ ) of the generator and is defined as the sum of a target function T on agent ’ s policy and a regularizer R E ( x ) = T ( π ( g ( xµ + xσ ) ) + αR ( x ) . ( 4 ) The target function can be defined freely by the user and depends on the agent that is being visualized . For a DQN , one could for example define T as the Q-value of a certain action , e.g . pressing the brakes of a car . In section 2.3 , we show several examples of targets that are interesting to analyze . The regularizer R can again be chosen as the KL divergence between x and the normal distribution : R ( x ) = KL ( x , N ( 0 , In ) ) , ( 5 ) forcing the samples that are drawn from the distribution x to be close to the Gaussian distribution that the generator was trained with . We can optimize equation 4 with gradient descent on x as detailed in algorithm 1 . 2.3 TARGET FUNCTIONS . Depending on the agent , one can define several interesting target functions T – we present and explore seven below , which we refer to as : T+ , T− , T± , S+ , S− , and action maximization . For a DQN the previously discussed action maximization is interesting to find situations in which the agent assigns a high value to a certain action e.g . Tleft ( s ) = −Aleft ( s ) . Other states of interest are those to which the agent assigns a low ( or high ) value for all possible actions A ( s ) = q = ( q1 , . . . , qm ) . 
Consequently , one can optimize towards a low Q-value for the highest valued action with the following objective : T− ( q ) = ∑m i=1 qie βqi∑m k=1 e βqk , ( 6 ) Algorithm 1 Optimize x for target T 1 : Input : Target objective T , step size λ , regularizer weight α , trained generator g 2 : Output : x 3 : xµ ← 0 4 : xσ ← In 5 : while not converged do 6 : ← sample from N ( 0 , In ) 7 : z ← xµ + xσ . sample z from x 8 : s← g ( z ) . generate state s using g 9 : xµ ← xµ − λ ∂∂xµ ( T ( π ( s ) ) + αR ( x ) ) . gradient step using ( 4 ) and ( 5 ) 10 : xσ ← xσ − λ ∂∂xσ ( T ( π ( s ) ) + αR ( x ) ) 11 : end while ( a ) Kung Fu Master - T± enemies on both sides ( b ) Kung Fu Master - T+ easy , many points to score ( c ) Kung Fu Master - T− no enemies ( d ) Pong - T+ scoring a point ( e ) Space Invaders - T+ shooting an enemy ( f ) Enduro - T+ overtaking an opponent ( g ) Name This Game - T± whether to refill air ( h ) Seaquest - T− out of oxygen ( i ) Beamrider - Tleft avoiding the enemy Figure 1 : Qualitative Results : Visualization of different target functions ( Sec . 2.3 ) . T+ generates high reward and T− low reward states ; T± generates states in which one action is highly beneficial and another is bad . For a long list of results , with over 50 Atari games , please see the appendix . where β > 0 controls the sharpness of the soft maximum formulation . Analogously , one can maximize the lowest Q-value with T+ ( q ) = −T− ( −q ) . We can also optimize for interesting situations in which one action is of very high value and another is of very low value by defining T± ( q ) = T− ( q ) − T+ ( q ) . ( 7 ) Finally , we can optimize for overall good states with S+ ( q ) = ∑m i=1 qi and overall bad states with S− = S− ( −q ) .
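The latent-space search of Algorithm 1 and Equations 4-5 can be sketched as follows, again assuming PyTorch. Here `generator` (the decoder g), `agent` and `target` are assumed callables, and `target` must be a differentiable analogue of one of the Section 2.3 target functions. Parameterizing xσ through its logarithm and using the closed-form KL for a diagonal Gaussian as R(x) are choices of this sketch; the algorithm as printed updates xσ directly.

```python
import torch

def optimize_state(generator, agent, target, n_latent, steps=500, lr=0.05, alpha=1.0):
    """Gradient descent on x = (x_mu, x_sigma) of the energy T(pi(g(z))) + alpha * R(x)."""
    x_mu = torch.zeros(n_latent, requires_grad=True)
    x_log_sigma = torch.zeros(n_latent, requires_grad=True)    # log std, so sigma starts at 1
    opt = torch.optim.SGD([x_mu, x_log_sigma], lr=lr)

    for _ in range(steps):
        opt.zero_grad()
        sigma = x_log_sigma.exp()
        z = x_mu + sigma * torch.randn(n_latent)               # reparametrized sample from x
        state = generator(z.unsqueeze(0))                      # synthesize a state with g
        t = target(agent(state).squeeze(0))                    # user-defined target on the agent's output

        # R(x) of Eq. 5: KL between the diagonal Gaussian x and N(0, I_n).
        r = 0.5 * (x_mu.pow(2) + sigma.pow(2) - 1.0 - 2.0 * x_log_sigma).sum()

        (t + alpha * r).backward()                             # energy of Eq. 4
        opt.step()

    return x_mu.detach(), x_log_sigma.exp().detach()
```

After convergence, samples drawn from the returned distribution can be decoded with g to visualize states that satisfy the chosen objective.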
This paper proposes a new visualization tool for understanding the behavior of agents trained using deep RL. Specifically, they train a generative model of game states, optimize an energy-based distribution over state embeddings according to some target function, and then sample from the resulting distribution to create a diverse set of realistic states that score highly according to that function. They propose a few target cost functions, which allow them to optimize for states in which the agent takes a particular action, states which are high reward (worst Q-value is large), states which are low reward (best Q-value is small), and critical states. They demonstrate results on Atari games as well as a simulated driving environment.
SP:3ae544b075487fc2a3731e1c017546ef0ff525e9
Meta-Dataset: A Dataset of Datasets for Learning to Learn from Few Examples
1 INTRODUCTION . Few-shot learning refers to learning new concepts from few examples , an ability that humans naturally possess , but machines still lack . Improving on this aspect would lead to more efficient algorithms that can flexibly expand their knowledge without requiring large labeled datasets . We focus on few-shot classification : classifying unseen examples into one of N new ‘ test ’ classes , given only a few reference examples of each . Recent progress in this direction has been made by considering a meta-problem : though we are not interested in learning about any training class in particular , we can exploit the training classes for the purpose of learning to learn new classes from few examples , thus acquiring a learning procedure that can be directly applied to new few-shot learning problems too . This intuition has inspired numerous models of increasing complexity ( see Related Work for some examples ) . However , we believe that the commonly-used setup for measuring success in this direction is lacking . Specifically , two datasets have emerged as de facto benchmarks for few-shot learning : Omniglot ( Lake et al. , 2015 ) , and mini-ImageNet ( Vinyals et al. , 2016 ) , and we believe that both of them are approaching their limit in terms of allowing one to discriminate between the merits of different approaches . Omniglot is a dataset of 1623 handwritten characters from 50 different alphabets and contains 20 examples per class ( character ) . Most recent methods obtain very high accuracy on Omniglot , rendering the comparisons between them mostly uninformative . mini-ImageNet is formed out of 100 ImageNet ( Russakovsky et al. , 2015 ) classes ( 64/16/20 for train/validation/test ) and contains 600 examples per class . Albeit harder than Omniglot , it has the same property that most recent methods trained on it present similar accuracy when controlling for model capacity . We advocate that a more challenging and realistic benchmark is required for further progress in this area . More specifically , current benchmarks : 1 ) Consider homogeneous learning tasks . In contrast , real-life learning experiences are heterogeneous : they vary in terms of the number of classes and examples per class , and are unbalanced . 2 ) Measure only within-dataset generalization . However , we are eventually after models that can generalize to entirely new distributions ( e.g. , datasets ) . 3 ) Ignore the relationships between classes when forming episodes . Specifically , the coarse-grained classification of dogs and chairs may present different difficulties than the fine-grained classification of dog breeds , and current benchmarks do not establish a distinction between the two . META-DATASET aims to improve upon previous benchmarks in the above directions : it is significantly larger-scale and is comprised of multiple datasets of diverse data distributions ; its task creation is informed by class structure for ImageNet and Omniglot ; it introduces realistic class imbalance ; and it varies the number of classes in each task and the size of the training set , thus testing the robustness of models across the spectrum from very-low-shot learning onwards . The main contributions of this work are : 1 ) A more realistic , large-scale and diverse environment for training and testing few-shot learners . 2 ) Experimental evaluation of popular models , and a new set of baselines combining inference algorithms of meta-learners with non-episodic training . 
3 ) Analyses of whether different models benefit from more data , heterogeneous training sources , pre-trained weights , and meta-training . 4 ) A novel meta-learner that performs strongly on META-DATASET . 2 FEW-SHOT CLASSIFICATION : TASK FORMULATION AND APPROACHES . Task Formulation The end-goal of few-shot classification is to produce a model which , given a new learning episode with N classes and a few labeled examples ( kc per class , c ∈ 1 , . . . , N ) , is able to generalize to unseen examples for that episode . In other words , the model learns from a training ( support ) set S = { ( x1 , y1 ) , ( x2 , y2 ) , . . . , ( xK , yK ) } ( with K = ∑ c kc ) and is evaluated on a held-out test ( query ) setQ = { ( x∗1 , y∗1 ) , ( x∗2 , y∗2 ) , . . . , ( x∗T , y∗T ) } . Each example ( x , y ) is formed of an input vector x ∈ RD and a class label y ∈ { 1 , . . . , N } . Episodes with balanced training sets ( i.e. , kc = k , ∀c ) are usually described as ‘ N -way , k-shot ’ episodes . Evaluation episodes are constructed by sampling their N classes from a larger set Ctest of classes and sampling the desired number of examples per class . A disjoint set Ctrain of classes is available to train the model ; note that this notion of training is distinct from the training that occurs within a few-shot learning episode . Few-shot learning does not prescribe a specific procedure for exploiting Ctrain , but a common approach matches the conditions in which the model is trained and evaluated ( Vinyals et al. , 2016 ) . In other words , training often ( but not always ) proceeds in an episodic fashion . Some authors use training and testing to refer to what happens within any given episode , and meta-training and meta-testing to refer to using Ctrain to turn the model into a learner capable of fast adaptation and Ctest for evaluating its success to learn using few shots , respectively . This nomenclature highlights the meta-learning perspective alluded to earlier , but to avoid confusion we will adopt another common nomenclature and refer to the training and test sets of an episode as the support and query sets and to the process of learning from Ctrain simply as training . We use the term ‘ meta-learner ’ to describe a model that is trained episodically , i.e. , learns to learn across multiple tasks that are sampled from the training set Ctrain . Non-episodic Approaches to Few-shot Classification A natural non-episodic approach simply trains a classifier over all of the training classes Ctrain at once , which can be parameterized by a neural network with a linear layer on top with one output unit per class . After training , this neural network is used as an embedding function g that maps images into a meaningful representation space . The hope of using this model for few-shot learning is that this representation space is useful even for examples of classes that were not included in training . It would then remain to define an algorithm for performing few-shot classification on top of these representations of the images of a task . We consider two choices for this algorithm , yielding the ‘ k-NN ’ and ‘ Finetune ’ variants of this baseline . Given a test episode , the ‘ k-NN ’ baseline classifies each query example as the class that its ‘ closest ’ support example belongs to . Closeness is measured by either Euclidean or cosine distance in the learned embedding space ; a choice that we treat as a hyperparameter . 
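A sketch of the 'k-NN' baseline described above, with NumPy assumed: embeddings come from the non-episodically trained network g, and each query is labelled by its single closest support example under either distance. The function and variable names are illustrative.

```python
import numpy as np

def nn_baseline_predict(support_emb, support_labels, query_emb, metric="euclidean"):
    """'k-NN' baseline: label each query with the class of its closest support embedding.

    support_emb    : (K, d) embeddings g(x) of the support set
    support_labels : (K,)   class labels in {0, ..., N-1}
    query_emb      : (T, d) embeddings of the query set
    metric         : 'euclidean' or 'cosine'
    """
    if metric == "cosine":
        s = support_emb / np.linalg.norm(support_emb, axis=1, keepdims=True)
        q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
        dists = 1.0 - q @ s.T                                  # cosine distance
    else:
        dists = np.linalg.norm(query_emb[:, None, :] - support_emb[None, :, :], axis=-1)
    nearest = dists.argmin(axis=1)                             # index of the closest support example
    return support_labels[nearest]

# Toy usage: a 3-way episode with 2 support examples per class.
rng = np.random.default_rng(0)
sup, lab = rng.normal(size=(6, 16)), np.array([0, 0, 1, 1, 2, 2])
qry = rng.normal(size=(4, 16))
print(nn_baseline_predict(sup, lab, qry))
```

Exposing the distance metric as an argument mirrors its treatment as a hyperparameter in the text.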
On the other hand , the ‘ Finetune ’ baseline uses the support set of the given test episode to train a new ‘ output layer ’ on top of the embeddings g , and optionally finetune those embedding too ( another hyperparameter ) , for the purpose of classifying between the N new classes of the associated task . A variant of the ‘ Finetune ’ baseline has recently become popular : Baseline++ ( Chen et al. , 2019 ) , originally inspired by Gidaris & Komodakis ( 2018 ) ; Qi et al . ( 2018 ) . It uses a ‘ cosine classifier ’ as the final layer ( ` 2-normalizing embeddings and weights before taking the dot product ) , both during the non-episodic training phase , and for evaluation on test episodes . We incorporate this idea in our codebase by adding a hyperparameter that optionally enables using a cosine classifier for the ‘ k-NN ’ ( training only ) and ‘ Finetune ’ ( both phases ) baselines . Meta-Learners for Few-shot Classification In the episodic setting , models are trained end-to-end for the purpose of learning to build classifiers from a few examples . We choose to experiment with Matching Networks ( Vinyals et al. , 2016 ) , Relation Networks ( Sung et al. , 2018 ) , Prototypical Networks ( Snell et al. , 2017 ) and Model Agnostic Meta-Learning ( MAML , Finn et al. , 2017 ) since they cover a diverse set of approaches to few-shot learning . We also introduce a novel meta-learner which is inspired by the last two models . In each training episode , episodic models compute for each query example x∗ ∈ Q , the distribution for its label p ( y∗|x∗ , S ) conditioned on the support set S and allow to train this differentiablyparameterized conditional distribution end-to-end via gradient descent . The different models are distinguished by the manner in which this conditioning on the support set is realized . In all cases , the performance on the query set drives the update of the meta-learner ’ s weights , which include ( and sometimes consist only of ) the embedding weights . We briefly describe each method below . Prototypical Networks Prototypical Networks construct a prototype for each class and then classify each query example as the class whose prototype is ‘ nearest ’ to it under Euclidean distance . More concretely , the probability that a query example x∗ belongs to class k is defined as : p ( y∗ = k|x∗ , S ) = exp ( −||g ( x ∗ ) − ck||22 ) ∑ k′∈ { 1 , ... , N } exp ( −||g ( x∗ ) − ck′ ||22 ) where ck is the ‘ prototype ’ for class k : the average of the embeddings of class k ’ s support examples . Matching Networks Matching Networks ( in their simplest form ) label each query example as a ( cosine ) distance-weighted linear combination of the support labels : p ( y∗ = k|x∗ , S ) = |S|∑ i=1 α ( x∗ , xi ) 1yi=k , where 1A is the indicator function and α ( x∗ , xi ) is the cosine similarity between g ( x∗ ) and g ( xi ) , softmax-normalized over all support examples xi , where 1 ≤ i ≤ |S| . Relation Networks Relation Networks are comprised of an embedding function g as usual , and a ‘ relation module ’ parameterized by some additional neural network layers . They first embed each support and query using g and create a prototype pc for each class c by averaging its support embeddings . Each prototype pc is concatenated with each embedded query and fed through the relation module which outputs a number in [ 0 , 1 ] representing the predicted probability that that query belongs to class c. 
The query loss is then defined as the mean square error of that prediction compared to the ( binary ) ground truth . Both g and the relation module are trained to minimize this loss . MAML MAML uses a linear layer parametrized by W and b on top of the embedding function g ( · ; θ ) and classifies a query example as p ( y∗|x∗ , S ) = softmax ( b′ +W′g ( x∗ ; θ′ ) ) , where the output layer parameters W′ and b′ and the embedding function parameters θ′ are obtained by performing a small number of within-episode training steps on the support set S , starting from initial parameter values ( b , W , θ ) . The model is trained by backpropagating the query set loss through the within-episode gradient descent procedure and into ( b , W , θ ) . This normally requires computing second-order gradients , which can be expensive to obtain ( both in terms of time and memory ) . For this reason , an approximation is often used whereby gradients of the within-episode descent steps are ignored . This variant is referred to as first-order MAML ( fo-MAML ) and was used in our experiments . We did attempt to use the full-order version , but found it to be impractically expensive ( e.g. , it caused frequent out-of-memory problems ) . Moreover , since in our setting the number of ways varies between episodes , b , W are set to zero and are not trained ( i.e. , b′ , W′ are the result of within-episode gradient descent initialized at 0 ) , leaving only θ to be trained . In other words , MAML focuses on learning the within-episode initialization θ of the embedding network so that it can be rapidly adapted for a new task . Introducing Proto-MAML We introduce a novel meta-learner that combines the complementary strengths of Prototypical Networks and MAML : the former ’ s simple inductive bias that is evidently effective for very-few-shot learning , and the latter ’ s flexible adaptation mechanism . As explained by Snell et al . ( 2017 ) , Prototypical Networks can be re-interpreted as a linear classifier applied to a learned representation g ( x ) . The use of a squared Euclidean distance means that output logits are expressed as −||g ( x∗ ) − ck||2 = −g ( x∗ ) T g ( x∗ ) + 2cTk g ( x∗ ) − cTk ck = 2cTk g ( x∗ ) − ||ck||2 + constant where constant is a class-independent scalar which can be ignored , as it leaves output probabilities unchanged . The k-th unit of the equivalent linear layer therefore has weights Wk , · = 2ck and biases bk = −||ck||2 , which are both differentiable with respect to θ as they are a function of g ( · ; θ ) . We refer to ( fo- ) Proto-MAML as the ( fo- ) MAML model where the task-specific linear layer of each episode is initialized from the Prototypical Network-equivalent weights and bias defined above and subsequently optimized as usual on the given support set . When computing the update for θ , we allow gradients to flow through the Prototypical Network-equivalent linear layer initialization . We show that this simple modification significantly helps the optimization of this model and outperforms vanilla fo-MAML by a large margin on META-DATASET .
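The Proto-MAML initialization described above is easy to verify numerically: building the linear layer with Wk,· = 2ck and bk = −||ck||² reproduces the Prototypical Network probabilities exactly, because the dropped −||g(x*)||² term is the same for every class. A small NumPy sketch follows, with names chosen for illustration only.

```python
import numpy as np

def proto_maml_init(protos):
    """Prototypical-Network-equivalent linear layer: W_k = 2 c_k, b_k = -||c_k||^2."""
    W = 2.0 * protos                        # (N, d)
    b = -(protos ** 2).sum(axis=1)          # (N,)
    return W, b

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Check: the linear layer reproduces Prototypical Network probabilities exactly.
rng = np.random.default_rng(2)
protos = rng.normal(size=(5, 8))            # 5-way prototypes c_k
q = rng.normal(size=(3, 8))                 # 3 query embeddings g(x*)
W, b = proto_maml_init(protos)
linear_probs = softmax(q @ W.T + b)
d2 = ((q[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
proto_probs = softmax(-d2)
print(np.allclose(linear_probs, proto_probs))   # True
```

In (fo-)Proto-MAML this layer is then adapted with a few gradient steps on the support set, with gradients allowed to flow through the prototype-based initialization when updating θ.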
The authors of this paper construct a new few-shot learning dataset. The dataset consists of data drawn from several different sources. The authors test several representative meta-learning models (e.g., Matching Networks, Prototypical Networks, MAML) on this dataset and provide an analysis. Furthermore, the authors combine MAML and Prototypical Networks, which achieves the best performance on this new dataset.
SP:61bb7f39ffbf7caed41c8c0ef0650010d8a253aa
Meta-Dataset: A Dataset of Datasets for Learning to Learn from Few Examples
1 INTRODUCTION . Few-shot learning refers to learning new concepts from few examples , an ability that humans naturally possess , but machines still lack . Improving on this aspect would lead to more efficient algorithms that can flexibly expand their knowledge without requiring large labeled datasets . We focus on few-shot classification : classifying unseen examples into one of N new ‘ test ’ classes , given only a few reference examples of each . Recent progress in this direction has been made by considering a meta-problem : though we are not interested in learning about any training class in particular , we can exploit the training classes for the purpose of learning to learn new classes from few examples , thus acquiring a learning procedure that can be directly applied to new few-shot learning problems too . This intuition has inspired numerous models of increasing complexity ( see Related Work for some examples ) . However , we believe that the commonly-used setup for measuring success in this direction is lacking . Specifically , two datasets have emerged as de facto benchmarks for few-shot learning : Omniglot ( Lake et al. , 2015 ) , and mini-ImageNet ( Vinyals et al. , 2016 ) , and we believe that both of them are approaching their limit in terms of allowing one to discriminate between the merits of different approaches . Omniglot is a dataset of 1623 handwritten characters from 50 different alphabets and contains 20 examples per class ( character ) . Most recent methods obtain very high accuracy on Omniglot , rendering the comparisons between them mostly uninformative . mini-ImageNet is formed out of 100 ImageNet ( Russakovsky et al. , 2015 ) classes ( 64/16/20 for train/validation/test ) and contains 600 examples per class . Albeit harder than Omniglot , it has the same property that most recent methods trained on it present similar accuracy when controlling for model capacity . We advocate that a more challenging and realistic benchmark is required for further progress in this area . More specifically , current benchmarks : 1 ) Consider homogeneous learning tasks . In contrast , real-life learning experiences are heterogeneous : they vary in terms of the number of classes and examples per class , and are unbalanced . 2 ) Measure only within-dataset generalization . However , we are eventually after models that can generalize to entirely new distributions ( e.g. , datasets ) . 3 ) Ignore the relationships between classes when forming episodes . Specifically , the coarse-grained classification of dogs and chairs may present different difficulties than the fine-grained classification of dog breeds , and current benchmarks do not establish a distinction between the two . META-DATASET aims to improve upon previous benchmarks in the above directions : it is significantly larger-scale and is comprised of multiple datasets of diverse data distributions ; its task creation is informed by class structure for ImageNet and Omniglot ; it introduces realistic class imbalance ; and it varies the number of classes in each task and the size of the training set , thus testing the robustness of models across the spectrum from very-low-shot learning onwards . The main contributions of this work are : 1 ) A more realistic , large-scale and diverse environment for training and testing few-shot learners . 2 ) Experimental evaluation of popular models , and a new set of baselines combining inference algorithms of meta-learners with non-episodic training . 
3 ) Analyses of whether different models benefit from more data , heterogeneous training sources , pre-trained weights , and meta-training . 4 ) A novel meta-learner that performs strongly on META-DATASET . 2 FEW-SHOT CLASSIFICATION : TASK FORMULATION AND APPROACHES . Task Formulation The end-goal of few-shot classification is to produce a model which , given a new learning episode with N classes and a few labeled examples ( kc per class , c ∈ 1 , . . . , N ) , is able to generalize to unseen examples for that episode . In other words , the model learns from a training ( support ) set S = { ( x1 , y1 ) , ( x2 , y2 ) , . . . , ( xK , yK ) } ( with K = ∑ c kc ) and is evaluated on a held-out test ( query ) setQ = { ( x∗1 , y∗1 ) , ( x∗2 , y∗2 ) , . . . , ( x∗T , y∗T ) } . Each example ( x , y ) is formed of an input vector x ∈ RD and a class label y ∈ { 1 , . . . , N } . Episodes with balanced training sets ( i.e. , kc = k , ∀c ) are usually described as ‘ N -way , k-shot ’ episodes . Evaluation episodes are constructed by sampling their N classes from a larger set Ctest of classes and sampling the desired number of examples per class . A disjoint set Ctrain of classes is available to train the model ; note that this notion of training is distinct from the training that occurs within a few-shot learning episode . Few-shot learning does not prescribe a specific procedure for exploiting Ctrain , but a common approach matches the conditions in which the model is trained and evaluated ( Vinyals et al. , 2016 ) . In other words , training often ( but not always ) proceeds in an episodic fashion . Some authors use training and testing to refer to what happens within any given episode , and meta-training and meta-testing to refer to using Ctrain to turn the model into a learner capable of fast adaptation and Ctest for evaluating its success to learn using few shots , respectively . This nomenclature highlights the meta-learning perspective alluded to earlier , but to avoid confusion we will adopt another common nomenclature and refer to the training and test sets of an episode as the support and query sets and to the process of learning from Ctrain simply as training . We use the term ‘ meta-learner ’ to describe a model that is trained episodically , i.e. , learns to learn across multiple tasks that are sampled from the training set Ctrain . Non-episodic Approaches to Few-shot Classification A natural non-episodic approach simply trains a classifier over all of the training classes Ctrain at once , which can be parameterized by a neural network with a linear layer on top with one output unit per class . After training , this neural network is used as an embedding function g that maps images into a meaningful representation space . The hope of using this model for few-shot learning is that this representation space is useful even for examples of classes that were not included in training . It would then remain to define an algorithm for performing few-shot classification on top of these representations of the images of a task . We consider two choices for this algorithm , yielding the ‘ k-NN ’ and ‘ Finetune ’ variants of this baseline . Given a test episode , the ‘ k-NN ’ baseline classifies each query example as the class that its ‘ closest ’ support example belongs to . Closeness is measured by either Euclidean or cosine distance in the learned embedding space ; a choice that we treat as a hyperparameter . 
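As a point of reference for the task formulation above, the following sketch assembles a standard balanced N-way, k-shot episode from a pool of labelled examples (NumPy assumed, names illustrative). META-DATASET itself additionally varies the number of ways and shots and introduces class imbalance, which this minimal sketch does not model.

```python
import numpy as np

def sample_episode(labels, n_way, k_shot, n_query, rng):
    """Assemble one balanced N-way, k-shot episode from a pool of examples.

    labels  : (P,) class label of every example in the class pool
    returns : index arrays (support_idx, query_idx) and the episode's class list
    """
    classes = rng.choice(np.unique(labels), size=n_way, replace=False)
    support_idx, query_idx = [], []
    for c in classes:
        pool = rng.permutation(np.flatnonzero(labels == c))
        support_idx.extend(pool[:k_shot])                      # k_c = k support examples per class
        query_idx.extend(pool[k_shot:k_shot + n_query])        # held-out query examples
    return np.array(support_idx), np.array(query_idx), classes

rng = np.random.default_rng(6)
labels = rng.integers(0, 20, size=1000)            # a toy pool with 20 classes
s, q, cls = sample_episode(labels, n_way=5, k_shot=3, n_query=10, rng=rng)
print(cls, s.shape, q.shape)                       # 5 classes, 15 support indices, 50 query indices
```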
On the other hand , the ‘ Finetune ’ baseline uses the support set of the given test episode to train a new ‘ output layer ’ on top of the embeddings g , and optionally finetune those embedding too ( another hyperparameter ) , for the purpose of classifying between the N new classes of the associated task . A variant of the ‘ Finetune ’ baseline has recently become popular : Baseline++ ( Chen et al. , 2019 ) , originally inspired by Gidaris & Komodakis ( 2018 ) ; Qi et al . ( 2018 ) . It uses a ‘ cosine classifier ’ as the final layer ( ` 2-normalizing embeddings and weights before taking the dot product ) , both during the non-episodic training phase , and for evaluation on test episodes . We incorporate this idea in our codebase by adding a hyperparameter that optionally enables using a cosine classifier for the ‘ k-NN ’ ( training only ) and ‘ Finetune ’ ( both phases ) baselines . Meta-Learners for Few-shot Classification In the episodic setting , models are trained end-to-end for the purpose of learning to build classifiers from a few examples . We choose to experiment with Matching Networks ( Vinyals et al. , 2016 ) , Relation Networks ( Sung et al. , 2018 ) , Prototypical Networks ( Snell et al. , 2017 ) and Model Agnostic Meta-Learning ( MAML , Finn et al. , 2017 ) since they cover a diverse set of approaches to few-shot learning . We also introduce a novel meta-learner which is inspired by the last two models . In each training episode , episodic models compute for each query example x∗ ∈ Q , the distribution for its label p ( y∗|x∗ , S ) conditioned on the support set S and allow to train this differentiablyparameterized conditional distribution end-to-end via gradient descent . The different models are distinguished by the manner in which this conditioning on the support set is realized . In all cases , the performance on the query set drives the update of the meta-learner ’ s weights , which include ( and sometimes consist only of ) the embedding weights . We briefly describe each method below . Prototypical Networks Prototypical Networks construct a prototype for each class and then classify each query example as the class whose prototype is ‘ nearest ’ to it under Euclidean distance . More concretely , the probability that a query example x∗ belongs to class k is defined as : p ( y∗ = k|x∗ , S ) = exp ( −||g ( x ∗ ) − ck||22 ) ∑ k′∈ { 1 , ... , N } exp ( −||g ( x∗ ) − ck′ ||22 ) where ck is the ‘ prototype ’ for class k : the average of the embeddings of class k ’ s support examples . Matching Networks Matching Networks ( in their simplest form ) label each query example as a ( cosine ) distance-weighted linear combination of the support labels : p ( y∗ = k|x∗ , S ) = |S|∑ i=1 α ( x∗ , xi ) 1yi=k , where 1A is the indicator function and α ( x∗ , xi ) is the cosine similarity between g ( x∗ ) and g ( xi ) , softmax-normalized over all support examples xi , where 1 ≤ i ≤ |S| . Relation Networks Relation Networks are comprised of an embedding function g as usual , and a ‘ relation module ’ parameterized by some additional neural network layers . They first embed each support and query using g and create a prototype pc for each class c by averaging its support embeddings . Each prototype pc is concatenated with each embedded query and fed through the relation module which outputs a number in [ 0 , 1 ] representing the predicted probability that that query belongs to class c. 
The query loss is then defined as the mean square error of that prediction compared to the ( binary ) ground truth . Both g and the relation module are trained to minimize this loss . MAML MAML uses a linear layer parametrized by W and b on top of the embedding function g ( · ; θ ) and classifies a query example as p ( y∗|x∗ , S ) = softmax ( b′ +W′g ( x∗ ; θ′ ) ) , where the output layer parameters W′ and b′ and the embedding function parameters θ′ are obtained by performing a small number of within-episode training steps on the support set S , starting from initial parameter values ( b , W , θ ) . The model is trained by backpropagating the query set loss through the within-episode gradient descent procedure and into ( b , W , θ ) . This normally requires computing second-order gradients , which can be expensive to obtain ( both in terms of time and memory ) . For this reason , an approximation is often used whereby gradients of the within-episode descent steps are ignored . This variant is referred to as first-order MAML ( fo-MAML ) and was used in our experiments . We did attempt to use the full-order version , but found it to be impractically expensive ( e.g. , it caused frequent out-of-memory problems ) . Moreover , since in our setting the number of ways varies between episodes , b , W are set to zero and are not trained ( i.e. , b′ , W′ are the result of within-episode gradient descent initialized at 0 ) , leaving only θ to be trained . In other words , MAML focuses on learning the within-episode initialization θ of the embedding network so that it can be rapidly adapted for a new task . Introducing Proto-MAML We introduce a novel meta-learner that combines the complementary strengths of Prototypical Networks and MAML : the former ’ s simple inductive bias that is evidently effective for very-few-shot learning , and the latter ’ s flexible adaptation mechanism . As explained by Snell et al . ( 2017 ) , Prototypical Networks can be re-interpreted as a linear classifier applied to a learned representation g ( x ) . The use of a squared Euclidean distance means that output logits are expressed as −||g ( x∗ ) − ck||2 = −g ( x∗ ) T g ( x∗ ) + 2cTk g ( x∗ ) − cTk ck = 2cTk g ( x∗ ) − ||ck||2 + constant where constant is a class-independent scalar which can be ignored , as it leaves output probabilities unchanged . The k-th unit of the equivalent linear layer therefore has weights Wk , · = 2ck and biases bk = −||ck||2 , which are both differentiable with respect to θ as they are a function of g ( · ; θ ) . We refer to ( fo- ) Proto-MAML as the ( fo- ) MAML model where the task-specific linear layer of each episode is initialized from the Prototypical Network-equivalent weights and bias defined above and subsequently optimized as usual on the given support set . When computing the update for θ , we allow gradients to flow through the Prototypical Network-equivalent linear layer initialization . We show that this simple modification significantly helps the optimization of this model and outperforms vanilla fo-MAML by a large margin on META-DATASET .
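A minimal sketch of the within-episode adaptation used by (fo-)MAML in this setting, with PyTorch assumed. Following the description above, the output layer starts at zero because the number of ways varies between episodes; for brevity the sketch adapts only that layer and keeps the embedding fixed in the inner loop, which is a simplification (the paper also adapts θ within the episode). Function and variable names are illustrative.

```python
import torch
import torch.nn.functional as F

def adapt_and_predict(embed, support_x, support_y, query_x, n_way,
                      inner_steps=5, inner_lr=0.01):
    """Within-episode adaptation: fit a zero-initialized output layer on the support set."""
    d = embed(support_x).shape[1]
    W = torch.zeros(n_way, d, requires_grad=True)
    b = torch.zeros(n_way, requires_grad=True)

    for _ in range(inner_steps):
        logits = embed(support_x) @ W.t() + b
        loss = F.cross_entropy(logits, support_y)
        gW, gb = torch.autograd.grad(loss, (W, b))
        W = (W - inner_lr * gW).detach().requires_grad_(True)
        b = (b - inner_lr * gb).detach().requires_grad_(True)

    # Query predictions with the adapted output layer.  In fo-MAML training, the query
    # loss computed here drives the update of theta while the gradients of the
    # inner-loop steps themselves are ignored (the first-order approximation).
    return F.softmax(embed(query_x) @ W.t() + b, dim=1)

# Toy usage with a random linear embedding.
embed = torch.nn.Linear(32, 16)
sx, sy = torch.randn(10, 32), torch.randint(0, 5, (10,))
qx = torch.randn(4, 32)
print(adapt_and_predict(embed, sx, sy, qx, n_way=5).shape)     # torch.Size([4, 5])
```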
The paper presents Meta-Dataset, a benchmark for few-shot classification that combines various image classification data sets, allows the number of classes and examples per class to vary, and considers the relationships between classes. It performs an empirical evaluation of six algorithms from the literature, k-NN, FineTune, Prototypical Networks, Matching Networks, Relation Networks, and MAML. A new approach combining Prototypical Networks and first-order MAML is shown to outperform those algorithms, but there is substantial room for improvement overall.
SP:61bb7f39ffbf7caed41c8c0ef0650010d8a253aa
Deep Mining: Detecting Anomalous Patterns in Neural Network Activations with Subset Scanning
1 INTRODUCTION . The vast majority of data in the world can be thought of as created by unknown , and possibly complex , normal behavior of data generating systems . But what happens when data is generated by an alternative system instead ? Fraudulent records , disease outbreaks , cancerous cells on pathology slides , or adversarial noised images are all examples of data that does not come from the original , normal system . These are the interesting data points worth studying . The goal of anomalous pattern detection is to quantify , detect , and characterize the data that are generated under these alternative systems . Furthermore , subset scanning extends these ideas to consider groups of data records that may only appear anomalous when viewed together ( as a subset ) due to the assumption that they were generated by the same alternative system . Neural networks may be viewed as one of these data generating systems . The activations are a source of high-dimensional data that can be mined to discover anomalous patterns . Mining activation data has implications for interpretable machine learning as well as more objective tasks such as detecting groups of out-of-distribution samples . This paper addresses the question : ” Which of the exponentially many subset of inputs ( images ) have higher-than-expected activations at which of the exponentially many subset of nodes in a hidden layer of a neural network ? ” We treat this scenario as a search problem with the goal of finding a “ high-scoring ” subset of images × nodes by efficiently maximizing nonparametric scan statistics in the activation space of neural networks . The primary contribution of this work is to demonstrate that nonparametric scan statistics , efficiently optimized over node-activations×multiple inputs ( images ) , are able to quantify the anomalousness of a subset of those inputs ( images ) into a real-valued “ score ” . This definition of anomalousness is with respect to a set of clean “ background ” inputs ( images ) that are assumed to generate normal or expected patterns in the activation space of the network . Our method measures the deviance between the activations of a subset of inputs ( images ) under evaluation and the activations generated by the background inputs . The challenging aspect of measuring deviances in the activation space of neural networks is dealing with high-dimensional data , on the order of the number of nodes in a hidden layer × the number of inputs ( images ) under consideration . Therefore , the measure of anomalousness must be effective in capturing systematic ( yet potentially subtle ) deviances in a high-dimensional subspace and be computationally tractable . Subset scanning meets both of these requirements ( see Section 2 ) . The reward for addressing this difficult problem is an unsupervised , anomalous-input detector that can be applied to any input and to any type of neural network architecture . Neural networks universally rely on their activation space to encode the features of their inputs and therefore quantifying deviations from expected behavior in the activation space has broad appeal and potential beyond detecting anomalous patterns in groups of images . Furthermore , an additional output of subset scanning not fully explored in this paper is the subset of nodes at which the subset of inputs ( images ) had the higher-than-expected activations . These may be used to characterize the anomalous pattern that is affecting the inputs . 
The second contribution of this work focuses on detection of targeted adversarial noise added to inputs in order to change the labels to a target class Szegedy et al . ( 2013 ) ; Goodfellow et al . ( 2014 ) ; Papernot & McDaniel ( 2016 ) . Our critical insight to this problem is the ability to detect the presence of noise ( i.e . an anomalous pattern ) across multiple images simultaneously . This view is grounded by the idea that targeted attacks will create a subtle , but systematic , anomalous pattern of activations across multiple noised images . Therefore , during a realistic attack on a machine learning system , we expect a subset of the inputs to be anomalous together by sharing higher-than-expected activations at similar nodes . Empirical results show that detection power drastically increases when targeted images compose 8 % -10 % of the data under evaluation . Detection power is near 1 when the proportion reaches 14 % . In summary , this is the first work to apply subset scanning techniques to data generated from neural networks in order to detect anomalous patterns of activations that span multiple inputs ( images ) . To the best of our knowledge , this is the first topic to address adversarial noise detection by considering images as a group rather than individually . 2 SUBSET SCANNING . Subset scanning treats pattern detection as a search for the “ most anomalous ” subset of observations in the data where anomalousness is quantified by a scoring function , F ( S ) ( typically a loglikelihood ratio ) . Therefore , we wish to efficiently identify S∗ = argmaxS F ( S ) over all relevant subsets of the data S. Subset scanning has been shown to succeed where other heuristic approaches may fail ( Neill , 2012 ) . “ Top-down ” methods look for globally interesting patterns and then identifies sub-partitions to find smaller anomalous groups of records . These approaches may fail when the true anomaly is not evident from global aggregates . Similarly , “ Bottom-up ” methods look for individually anomalous data points and attempt to aggregate them into clusters . These methods may fail when the pattern is only evident by evaluating a group of data points collectively . Treating the detection problem as a subset scan has desirable statistical properties for maximizing detection power but the exhaustive search is infeasible for even moderately sized data sets . However , a large class of scoring functions satisfy the “ Linear Time Subset Scanning ” ( LTSS ) property which allows for exact , efficient maximization over all subsets of data without requiring an exhaustive search Neill ( 2012 ) . The following sub-sections highlight a class of functions that satisfy LTSS and describe how the efficient maximization process works for scanning over activations . 2.1 NONPARAMETRIC SCAN STATISTICS . This work uses nonparametric scan statistics ( NPSS ) that have been used in other pattern detection methods Neill & Lingwall ( 2007 ) ; McFowland III et al . ( 2013 ) ; McFowland et al . ( 2018 ) ; Chen & Neill ( 2014 ) . These scoring functions make no parametric assumptions about how activations are distributed at a node in a neural network . They do , however , require baseline or background data to inform their data distribution under the null hypothesis H0 of no anomaly present . The first step of NPSS is to compute empirical p-values for the evaluation input ( e.g . 
images potentially affected with adversarial noise ) by comparing it to the empirical baseline distribution generated from the background inputs that are “ natural ” inputs known to be free of an anomalous pattern . NPSS then searches for subsets of data S in the evaluation inputs that contain the most evidence for not having been generated under H0 . This evidence is quantified by an unexpectedly large number of low empirical p-values generated by the evaluation inputs . See Figure 1 . In our specific context , we have evaluation data D a collection of N = |D| images Xi , which are fed through a neural network ( defined by a set of nodesO = { O1 , . . . , OJ } ) producing an activation Aij , at each node Oj . The challenge we face is detecting which subset S = SX × SO , if any , of the evaluation images SX ⊆ D is anomalous as evidenced from their activation at some nodes SO ⊆ O . To approach this challenge , NPSS uses as baseline data DH0 of known , clean CIFAR-10 images . Each of the M = |DH0 | background images Xz , also generates an activation A H0 zj at each network node Oj . GivenD , ourN×J matrix of activations for each evaluation image at each network node , andDH0 , our corresponding M × J matrix of activations for each of the background images , we can obtain an empirical p-value for each Aij : a means to measure of how anomalous the activation value of ( potentially contaminated ) image Xi is at node Oj . This p-value pij is the proportion of activations from the background images , AH0zj , that are larger than the activation from the evaluation images Aij at node Oj . We note that McFowland III et al . ( 2013 ) extend this notion to p-value ranges such that pij is uniformly distributed between pminij and p max ij . This current work makes a simplifying assumption here to only consider a range by its upper bound , pmaxij = ∑ Xz∈DH0 I ( Azj > =Aij ) +1 M+1 . The matrix of activations Aij is now converted into a matrix of p-values Pij . Intuitively , if an evaluation imageXi is “ natural ” ( its activations are drawn from the same distribution as the baseline images ) then few of the p-values generated by image Xi across the network nodes–will be extreme . The key assumption for subset scanning approaches is that under the alternative hypothesis of an anomaly present in the activation data then at least some subset of the activations , for the effected subset of images , will systematically appear extreme . The goal is to identify this “ high-scoring ” subset through an efficient search procedure that maximizes a nonparametric scan statistic . The matrix of p-values Pij from evaluation images is processed by a nonparametric scan statistic in order to identify the subset of evaluation images SX ⊆ D whose activations at some subset of nodes SO ⊆ O maximizes the scoring function maxS=SX×SO F ( S ) , where S = SX × SO represents a submatrix of Pij , as this is the subset with the most statistical evidence for having been effected by an anomalous pattern . The general form of the NPSS score function is F ( S ) = maxα Fα ( S ) = maxα φ ( α , Nα ( S ) , N ( S ) ) where N ( S ) represents the number of empirical p-values contained in subset S and Nα ( S ) is the number of p-values less than ( significance level ) α contained in subset S. This generalizes to a submatrix , S = SX × SO , intuitively . Nα ( S ) = ∑ Xi∈SX ∑ Oj∈SO I ( p max ij < α ) and N ( S ) = ∑ Xi∈SX ∑ Oj∈SO 1 . There are well-known goodness-of-fit statistics that can be utilized in NPSS McFowland et al . 
(2018); the most popular is the Kolmogorov-Smirnov test Kolmogorov (1933). Another option is Higher Criticism Donoho & Jin (2004). In this work we use the Berk-Jones test statistic Berk & Jones (1979): $\phi_{BJ}(\alpha, N_\alpha, N) = N \cdot \mathrm{KL}\!\left(\frac{N_\alpha}{N}, \alpha\right)$, where KL is the Kullback-Leibler divergence $\mathrm{KL}(x, y) = x \log\frac{x}{y} + (1 - x)\log\frac{1 - x}{1 - y}$ between the observed and expected proportions of significant p-values. Berk-Jones can be interpreted as the log-likelihood ratio for testing whether the p-values are uniformly distributed on [0, 1] as compared to following a piecewise-constant step-function alternative distribution; it has been shown to fulfill several optimality properties and to have greater power than any weighted Kolmogorov statistic.
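To make the scanning procedure concrete, here is a minimal Python sketch of the NPSS computation. The function names, the grid of α thresholds, and the restriction to scanning over image subsets only (all nodes included, rather than the alternating image/node optimization described above) are illustrative choices of ours, not prescribed by the paper.

```python
import numpy as np

def empirical_p_values(eval_acts, bg_acts):
    """p_max_ij = (#{z : A_zj >= A_ij} + 1) / (M + 1).

    eval_acts: (N, J) activations of evaluation images; bg_acts: (M, J) background
    activations. Forms an N x M x J boolean array, which is fine for a small sketch.
    """
    M = bg_acts.shape[0]
    counts = (bg_acts[None, :, :] >= eval_acts[:, None, :]).sum(axis=1)  # (N, J)
    return (counts + 1.0) / (M + 1.0)

def berk_jones(n_alpha, n, alpha):
    """phi_BJ = N * KL(N_alpha / N, alpha); scored only when small p-values are in excess."""
    x = np.clip(n_alpha / n, 1e-12, 1 - 1e-12)
    if x <= alpha:
        return 0.0
    return n * (x * np.log(x / alpha) + (1 - x) * np.log((1 - x) / (1 - alpha)))

def scan_images(p_values, alphas):
    """Simplified LTSS scan over subsets of images (all J nodes included).

    For each alpha, images are sorted by their count of p-values below alpha
    (the LTSS priority); only the N nested top-k subsets need to be scored.
    """
    N, J = p_values.shape
    best = (-np.inf, None, None)
    for alpha in alphas:
        per_image = (p_values < alpha).sum(axis=1)
        order = np.argsort(-per_image)
        n_alpha_cum = np.cumsum(per_image[order])
        for k in range(1, N + 1):
            score = berk_jones(n_alpha_cum[k - 1], k * J, alpha)
            if score > best[0]:
                best = (score, order[:k], alpha)
    return best  # (score, indices of the detected image subset, alpha)

# Usage sketch:
# P = empirical_p_values(eval_acts, background_acts)
# score, images, alpha = scan_images(P, alphas=np.linspace(0.01, 0.2, 20))
```

Because every image contributes the same number of p-values (one per node), sorting images by their count of p-values below α is exactly the LTSS priority ordering, which is what lets the inner loop score only N subsets per threshold instead of all 2^N.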
The paper proposes a scheme to detect the presence of anomalous inputs, such as samples designed adversarially for deep learning tasks, based on a "subset scanning" approach to detecting anomalous activations in the deep learning network. The paper considers a very interesting problem and provides a suitable application of an approach previously developed for pattern detection. The approach is motivated by p-value statistics of the activation patterns in the deep learning network under the "null hypothesis" of a non-anomalous input.
SP:be873fc103ccbf8312a711d8798834c2d824438b
Deep Mining: Detecting Anomalous Patterns in Neural Network Activations with Subset Scanning
1 INTRODUCTION . The vast majority of data in the world can be thought of as created by unknown , and possibly complex , normal behavior of data generating systems . But what happens when data is generated by an alternative system instead ? Fraudulent records , disease outbreaks , cancerous cells on pathology slides , or adversarial noised images are all examples of data that does not come from the original , normal system . These are the interesting data points worth studying . The goal of anomalous pattern detection is to quantify , detect , and characterize the data that are generated under these alternative systems . Furthermore , subset scanning extends these ideas to consider groups of data records that may only appear anomalous when viewed together ( as a subset ) due to the assumption that they were generated by the same alternative system . Neural networks may be viewed as one of these data generating systems . The activations are a source of high-dimensional data that can be mined to discover anomalous patterns . Mining activation data has implications for interpretable machine learning as well as more objective tasks such as detecting groups of out-of-distribution samples . This paper addresses the question : ” Which of the exponentially many subset of inputs ( images ) have higher-than-expected activations at which of the exponentially many subset of nodes in a hidden layer of a neural network ? ” We treat this scenario as a search problem with the goal of finding a “ high-scoring ” subset of images × nodes by efficiently maximizing nonparametric scan statistics in the activation space of neural networks . The primary contribution of this work is to demonstrate that nonparametric scan statistics , efficiently optimized over node-activations×multiple inputs ( images ) , are able to quantify the anomalousness of a subset of those inputs ( images ) into a real-valued “ score ” . This definition of anomalousness is with respect to a set of clean “ background ” inputs ( images ) that are assumed to generate normal or expected patterns in the activation space of the network . Our method measures the deviance between the activations of a subset of inputs ( images ) under evaluation and the activations generated by the background inputs . The challenging aspect of measuring deviances in the activation space of neural networks is dealing with high-dimensional data , on the order of the number of nodes in a hidden layer × the number of inputs ( images ) under consideration . Therefore , the measure of anomalousness must be effective in capturing systematic ( yet potentially subtle ) deviances in a high-dimensional subspace and be computationally tractable . Subset scanning meets both of these requirements ( see Section 2 ) . The reward for addressing this difficult problem is an unsupervised , anomalous-input detector that can be applied to any input and to any type of neural network architecture . Neural networks universally rely on their activation space to encode the features of their inputs and therefore quantifying deviations from expected behavior in the activation space has broad appeal and potential beyond detecting anomalous patterns in groups of images . Furthermore , an additional output of subset scanning not fully explored in this paper is the subset of nodes at which the subset of inputs ( images ) had the higher-than-expected activations . These may be used to characterize the anomalous pattern that is affecting the inputs . 
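The method operates on a matrix of activations with one row per input and one column per node of a hidden layer. As a minimal illustration of how such a matrix might be collected, the sketch below uses a PyTorch forward hook; the model, the chosen layer, and the batch of images are placeholders rather than details from the paper.

```python
import torch

def collect_activations(model, layer, images):
    """Return an (N, J) matrix of activations of `layer` for a batch of N images.

    `model`, `layer`, and `images` are assumed placeholders: any trained network,
    one of its modules, and a tensor of input images, respectively.
    """
    acts = []

    def hook(module, inputs, output):
        # flatten each image's activations into a vector of J node values
        acts.append(output.detach().flatten(start_dim=1))

    handle = layer.register_forward_hook(hook)
    with torch.no_grad():
        model(images)
    handle.remove()
    return torch.cat(acts, dim=0)  # shape (N, J)
```

The same helper would be run once on the clean background images and once on the evaluation images to produce the M × J background matrix and the N × J evaluation matrix used by the scan.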
The paper is, to my knowledge, the first to introduce the problem of identifying an anomalous (or corrupted) subset of data inputs to a neural network. The corrupted inputs are identified vis-a-vis a set of "clean" background inputs (e.g., the training/validation data set). Experimental evaluation is performed on the problem of identifying the subset of noisy CIFAR-10 images created by adding targeted adversarial perturbations.
SP:be873fc103ccbf8312a711d8798834c2d824438b
An Exponential Learning Rate Schedule for Deep Learning
• Training can be done using SGD with momentum and an exponentially increasing learning rate schedule, i.e., the learning rate increases by some (1 + α) factor in every epoch for some α > 0. (Precise statement in the paper.) To the best of our knowledge this is the first time such a rate schedule has been successfully used, let alone for highly successful architectures. As expected, such training rapidly blows up the network weights, but the network stays well-behaved due to normalization.
• Mathematical explanation of the success of the above rate schedule: a rigorous proof that it is equivalent to the standard setting of BN + SGD + Standard Rate Tuning + Weight Decay + Momentum. This equivalence holds for other normalization layers as well: Group Normalization (Wu & He, 2018), Layer Normalization (Ba et al., 2016), Instance Norm (Ulyanov et al., 2016), etc.
• A worked-out toy example illustrating the above linkage of hyperparameters. Using either weight decay or BN alone reaches the global minimum, but convergence fails when both are used.

1 INTRODUCTION. Batch Normalization (BN) offers significant benefits in optimization and generalization across architectures, and has become ubiquitous. Usually the best performance is attained by adding weight decay and momentum on top of BN. Weight decay is usually thought to improve generalization by controlling the norm of the parameters. However, it is fallacious to reason about optimization and generalization separately, because we are dealing with a nonconvex objective with multiple optima. Even slight changes to the training surely lead to a different trajectory in the loss landscape, potentially ending up at a different solution! One needs trajectory analysis to have a hope of reasoning about the effects of such changes. In the presence of BN and other normalization schemes, including GroupNorm, LayerNorm, and InstanceNorm, the optimization objective is scale-invariant in the parameters, which means rescaling parameters does not change the prediction, except for the parameters that compute the output, which do not have BN. However, Hoffer et al. (2018b) show that fixing the output layer randomly doesn't harm the performance of the network, so the trainable parameters satisfy scale invariance. (See more in Appendix E.) The current paper introduces new modes of analysis for such settings. This rigorous analysis yields the surprising conclusion that the original learning rate (LR) schedule and weight decay (WD) can be folded into a new exponential schedule for the learning rate: in each iteration it is multiplied by (1 + α) for some α > 0 that depends upon the momentum and weight decay rate.

Theorem 1.1 (Main, Informal). SGD on a scale-invariant objective with initial learning rate η, weight decay factor λ, and momentum factor γ is equivalent to SGD with momentum factor γ and no weight decay (λ̃ = 0), where at iteration t the learning rate η̃_t in the new exponential learning rate schedule is defined as $\tilde{\eta}_t = \alpha^{-2t-1}\eta$, with α a non-zero root of the equation
$$x^2 - (1 + \gamma - \lambda\eta)x + \gamma = 0. \qquad (1)$$
Specifically, when the momentum γ = 0, the above schedule can be simplified to $\tilde{\eta}_t = (1 - \lambda\eta)^{-2t-1}\eta$.

The above theorem requires that the product of the learning rate and the weight decay factor, λη, be smaller than $(1 - \sqrt{\gamma})^2$, which is almost always satisfied in practice.
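As a small numerical illustration of Theorem 1.1 (not code from the paper; the hyperparameter values and the choice of the larger root are our own assumptions for the example), the snippet below solves equation (1) for α and lists the first few learning rates of the resulting exponential schedule.

```python
import numpy as np

def exponential_lr_schedule(eta=0.1, lam=5e-4, gamma=0.9, num_iters=5):
    """Fold LR eta, weight decay lam, momentum gamma into an exponential LR schedule.

    alpha is a root of x^2 - (1 + gamma - lam*eta) x + gamma = 0 (eq. 1); the
    theorem assumes lam*eta < (1 - sqrt(gamma))^2, which makes the roots real.
    """
    assert lam * eta < (1 - np.sqrt(gamma)) ** 2, "condition of Theorem 1.1 violated"
    b = 1 + gamma - lam * eta
    alpha = (b + np.sqrt(b * b - 4 * gamma)) / 2  # larger root; the paper's precise
                                                  # statement pins down the choice
    # learning rate at iteration t: eta_t = alpha^{-(2t+1)} * eta
    lrs = [alpha ** (-(2 * t + 1)) * eta for t in range(num_iters)]
    return alpha, lrs

alpha, lrs = exponential_lr_schedule()
print(alpha, lrs)  # 0 < alpha < 1, so the LR grows by a factor alpha^{-2} per iteration
```

For γ = 0 the computed α reduces to 1 − λη, matching the simplified schedule in the theorem. In a training loop this multiplier could be plugged into a standard per-step LR scheduler (e.g., PyTorch's LambdaLR with lr_lambda=lambda t: alpha ** (-(2 * t + 1))).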
The rigorous and most general version of above theorem is Theorem 2.12 , which deals with multi-phase LR schedule , momentum and weight decay . There are other recently discovered exotic LR schedules , e.g . Triangular LR schedule ( Smith , 2017 ) and Cosine LR schedule ( Loshchilov & Hutter , 2016 ) , and our exponential LR schedule is an extreme example of LR schedules that become possible in presence of BN . Such an exponential increase in learning rate seems absurd at first sight and to the best of our knowledge , no deep learning success has been reported using such an idea before . It does highlight the above-mentioned viewpoint that in deep learning , optimization and regularization are not easily separated . Of course , the exponent trumps the effect of initial lr very fast ( See Figure 4 ) , which explains why training with BN and WD is not sensitive to the scale of initialization , since with BN , tuning the scale of initialization is equivalent to tuning the initial LR η while fixing the product of LR and WD , ηλ ( See Lemma 2.7 ) . Note that it is customary in BN to switch to a lower LR upon reaching a plateau in the validation loss . According to the analysis in the above theorem , this corresponds to an exponential growth with a smaller exponent , except for a transient effect when a correction term is needed for the two processes to be equivalent ( see discussion around Theorem 2.12 ) . Thus the final training algorithm is roughly as follows : Start from a convenient LR like 0.1 , and grow it at an exponential rate with a suitable exponent . When validation loss plateaus , switch to an exponential growth of LR with a lower exponent . Repeat the procedure until the training loss saturates . In Section 3 , we demonstrate on a toy example how weight decay and normalization are inseparably involved in the optimization process . With either weight decay or normalization alone , SGD will achieve zero training error . But with both turned on , SGD fails to converge to global minimum . In Section 4 , we experimentally verify our theoretical findings on CNNs and ResNets . We also construct better exponential LR schedules by incorporating the Cosine LR schedule on CIFAR10 , which opens the possibility of even more general theory of rate schedule tuning towards better performance . 1.1 RELATED WORK There have been other theoretical analyses of training models with scale-invariance . ( Cho & Lee , 2017 ) proposed to run Riemanian gradient descent on Grassmann manifold G ( 1 , n ) since the weight matrix is scaling invariant to the loss function . observed that the effective stepsize is proportional to ηw ‖wt‖2 . ( Arora et al. , 2019 ) show the gradient is always perpendicular to the current parameter vector which has the effect that norm of each scale invariant parameter group increases monotonically , which has an auto-tuning effect . ( Wu et al. , 2018 ) proposes a new adaptive learning rate schedule motivated by scale-invariance property of Weight Normalization . Previous work for understanding Batch Normalization . ( Santurkar et al. , 2018 ) suggested that the success of BNhas does not derive from reduction in Internal Covariate Shift , but by making landscape smoother . ( Kohler et al. , 2018 ) essentially shows linear model with BN could achieve exponential convergence rate assuming gaussian inputs , but their analysis is for a variant of GD with an inner optimization loop rather than GD itself . ( Bjorck et al. 
, 2018) observe that the higher learning rates enabled by BN empirically improve generalization. (Arora et al., 2019) prove that, under certain mild assumptions, (S)GD with BN finds an approximate first-order stationary point with any fixed learning rate. None of the above analyses incorporated weight decay, but (Zhang et al., 2019; Hoffer et al., 2018a; van Laarhoven, 2017) argued qualitatively that weight decay makes the parameters have smaller norms, so that the effective learning rate, $\eta/\|w_t\|_2^2$, is larger. They described experiments showing this effect but did not have a closed-form theoretical analysis like ours. None of the above analyses deals with momentum rigorously.

1.2 PRELIMINARIES AND NOTATIONS. For a batch $B = \{x_i\}_{i=1}^{B}$ and network parameters θ, we denote the network by f_θ and the loss function at iteration t by $L_t(f_\theta) = L(f_\theta, B_t)$. When there is no ambiguity, we also write $L_t(\theta)$ for convenience. We say a loss function L(θ) is scale invariant in its parameter θ if for any c ∈ R+, L(θ) = L(cθ). In practice, the source of scale invariance is usually one of the various normalization layers, including Batch Normalization (Ioffe & Szegedy, 2015), Group Normalization (Wu & He, 2018), Layer Normalization (Ba et al., 2016), Instance Norm (Ulyanov et al., 2016), etc. Implementations of SGD with Momentum/Nesterov come with subtle variations in the literature. We adopt the variant from Sutskever et al. (2013), also the default in PyTorch (Paszke et al., 2017). L2 regularization (a.k.a. Weight Decay) is another common trick used in deep learning. Combining them together, we get one of the most widely used optimization algorithms below.

Definition 1.2 [SGD with Momentum and Weight Decay]. At iteration t, with randomly sampled batch B_t, update the parameters θ_t and momentum v_t as follows:
$$\theta_t = \theta_{t-1} - \eta_{t-1} v_t \qquad (2)$$
$$v_t = \gamma v_{t-1} + \nabla_\theta\!\left(L_t(\theta_{t-1}) + \frac{\lambda_{t-1}}{2}\|\theta_{t-1}\|^2\right), \qquad (3)$$
where η_t is the learning rate at epoch t, γ is the momentum coefficient, and λ is the weight decay factor. Usually, v_0 is initialized to 0. For ease of analysis, we will use the following equivalent form of Definition 1.2:
$$\frac{\theta_t - \theta_{t-1}}{\eta_{t-1}} = \gamma\,\frac{\theta_{t-1} - \theta_{t-2}}{\eta_{t-2}} - \nabla_\theta\!\left(L_t(\theta_{t-1}) + \frac{\lambda_{t-1}}{2}\|\theta_{t-1}\|_2^2\right), \qquad (4)$$
where η_{-1} and θ_{-1} must be chosen such that $v_0 = \frac{\theta_0 - \theta_{-1}}{\eta_{-1}}$ is satisfied; e.g., when v_0 = 0, θ_{-1} = θ_0 and η_{-1} can be arbitrary. A key source of intuition is the following simple lemma about scale-invariant networks Arora et al. (2019). The first property ensures that GD (with momentum) always increases the norm of the weights (see Lemma C.1 in Appendix C), and the second property says that the gradients are smaller for parameters with larger norm, thus preventing the trajectory from diverging to infinity.

Lemma 1.3 (Scale Invariance). If for any c ∈ R+, L(θ) = L(cθ), then (1) $\langle\nabla_\theta L, \theta\rangle = 0$; (2) $\nabla_\theta L\big|_{\theta=\theta_0} = c\,\nabla_\theta L\big|_{\theta=c\theta_0}$, for any c > 0.

2 DERIVING EXPONENTIAL LEARNING RATE SCHEDULE. As a warm-up, in Section 2.1 we show that if momentum is turned off, then Fixed LR + Fixed WD can be translated into an equivalent Exponential LR. In Section 2.2 we give a more general analysis of the equivalence between Fixed LR + Fixed WD + Fixed Momentum Factor and Exponential LR + Fixed Momentum Factor.
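As a sanity check of Definition 1.2 and Lemma 1.3 on a toy scale-invariant loss (the loss, dimensions, and hyperparameters below are arbitrary illustrative choices, not from the paper), the following sketch implements one momentum + weight-decay update and verifies numerically that the gradient is orthogonal to the parameters and shrinks as the norm grows.

```python
import torch

def sgd_momentum_wd_step(theta, v, grad, lr, gamma=0.9, lam=5e-4):
    """One step of Definition 1.2: v_t = gamma*v + grad(L) + lam*theta; theta_t = theta - lr*v_t."""
    v_new = gamma * v + grad + lam * theta  # lam*theta is the gradient of (lam/2)*||theta||^2
    return theta - lr * v_new, v_new

# toy scale-invariant loss: only the direction of theta matters
x = torch.randn(10)
def loss_fn(theta):
    return ((theta / theta.norm()) @ x - 1.0) ** 2

theta = torch.randn(10, requires_grad=True)
grad, = torch.autograd.grad(loss_fn(theta), theta)

# Lemma 1.3 (1): the gradient is orthogonal to the parameter vector
print(torch.dot(grad, theta).item())  # ~0 up to floating-point error

# Lemma 1.3 (2): the gradient at c*theta equals (1/c) times the gradient at theta
theta2 = (2 * theta).detach().requires_grad_(True)
grad2, = torch.autograd.grad(loss_fn(theta2), theta2)
print(torch.allclose(grad2, grad / 2, atol=1e-6))

# one update of Definition 1.2
v = torch.zeros_like(theta)
theta_new, v = sgd_momentum_wd_step(theta.detach(), v, grad, lr=0.1)
```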
While interesting, this still does not completely apply to real-life deep learning, where reaching full accuracy usually requires multiple phases of training in which the LR is fixed within a phase and reduced by some factor from one phase to the next. Section 2.3 shows how to interpret such a multi-phase LR schedule + WD + Momentum as a certain multi-phase exponential LR schedule with Momentum.

2.1 REPLACING WD BY EXPONENTIAL LR IN MOMENTUM-FREE SGD. We use the notation of Section 1.2 and assume the LR is fixed over iterations, i.e., η_t = η_0, and that γ (the momentum factor) is 0. We also use λ to denote the WD factor and θ_0 to denote the initial parameters. The intuition should be clear from Lemma 1.3, which says that shrinking the parameter weights by a factor ρ (where ρ < 1) amounts to making the gradient ρ^{-1} times larger without changing its direction. Thus, in order to restore the ratio between the original parameter and its update (LR × gradient), the easiest way is to scale the LR by ρ². This suggests that scaling the parameter θ by ρ at each step is equivalent to scaling the LR η by ρ^{-2}. To prove this formally we use the following formalism. We will refer to the vector (θ, η) as the state of a training algorithm and study how it evolves under various combinations of parameter changes. We will think of each step in training as a mapping from one state to another. Since mappings can be composed, any finite number of steps also corresponds to a mapping. The following are some basic mappings used in the proof.
1. Run GD with WD for a step: $GD^\rho_t(\theta, \eta) = (\rho\theta - \eta\nabla L_t(\theta), \eta)$;
2. Scale the parameter θ: $\Pi^c_1(\theta, \eta) = (c\theta, \eta)$;
3. Scale the LR η: $\Pi^c_2(\theta, \eta) = (\theta, c\eta)$.
For example, when ρ = 1, $GD^1_t$ is the vanilla GD update without WD, also abbreviated as $GD_t$. When ρ = 1 − λη_0, $GD^{1-\lambda\eta_0}_t$ is the GD update with WD λ and LR η_0. Here L_t is the loss function at iteration t, which is determined by the batch of training samples B_t at the t-th iteration. Below is the main result of this subsection, showing our claim that GD + WD ⇔ GD + Exp LR (when momentum is zero). It will be proved after a series of lemmas.

Theorem 2.1 (WD ⇔ Exp LR). For every ρ < 1 and positive integer t, the following holds:
$$GD^\rho_{t-1} \circ \cdots \circ GD^\rho_0 = \left[\Pi^{\rho^t}_1 \circ \Pi^{\rho^{2t}}_2\right] \circ \Pi^{\rho^{-1}}_2 \circ GD_{t-1} \circ \Pi^{\rho^{-2}}_2 \circ \cdots \circ GD_1 \circ \Pi^{\rho^{-2}}_2 \circ GD_0 \circ \Pi^{\rho^{-1}}_2.$$
With WD factor λ, ρ is set to 1 − λη_0, and thus the scaling factor of the LR per iteration is $\rho^{-2} = (1 - \lambda\eta_0)^{-2}$, except for the first iteration, where it is $\rho^{-1} = (1 - \lambda\eta_0)^{-1}$.

We first show how to write the GD update with WD as a composition of the basic maps defined above. Lemma 2.2. $GD^\rho_t = \Pi^\rho_2 \circ \Pi^\rho_1 \circ GD_t \circ \Pi^{\rho^{-1}}_2$. Below we define a notion of equivalence such that (1) $\Pi^\rho_1 \sim \Pi^{\rho^{-2}}_2$, which implies $GD^\rho_t \sim \Pi^{\rho^{-1}}_2 \circ GD_t \circ \Pi^{\rho^{-1}}_2$; and (2) the equivalence is preserved under future GD updates. We first extend the equivalence between weights (same direction) to an equivalence between states, with the additional requirement that the ratio between the size of the GD update and that of the parameter is the same among all equivalent states, which yields the notion of Equivalent Scaling.

Definition 2.3 (Equivalent States). (θ̃, η̃) is equivalent to (θ, η) iff there exists c > 0 such that $(\tilde{\theta}, \tilde{\eta}) = [\Pi^c_1 \circ \Pi^{c^2}_2](\theta, \eta) = (c\theta, c^2\eta)$, which is also denoted by $(\tilde{\theta}, \tilde{\eta}) \overset{c}{\sim} (\theta, \eta)$. $\Pi^c_1 \circ \Pi^{c^2}_2$ is called an Equivalent Scaling for all c > 0.

The following lemma shows that equivalent scaling commutes with the GD update with WD, implying that equivalence is preserved under GD updates (Lemma 2.4).
This anchors the notion of equivalence: we could insert equivalent scalings anywhere in a sequence of basic maps (GD update, LR/parameter scaling) without changing the final network. Lemma 2.4. For any constants c, ρ > 0 and t ≥ 0, $GD^\rho_t \circ [\Pi^c_1 \circ \Pi^{c^2}_2] = [\Pi^c_1 \circ \Pi^{c^2}_2] \circ GD^\rho_t$.
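The momentum-free equivalence of Section 2.1 can also be checked numerically: running GD with weight decay at a constant LR, and GD without weight decay under the exponential schedule $\tilde{\eta}_t = (1 - \lambda\eta_0)^{-2t-1}\eta_0$, should produce parameters that differ only by the equivalent scaling $\rho^t$ with ρ = 1 − λη_0. The toy scale-invariant loss and hyperparameters below are our own illustrative choices, not from the paper.

```python
import torch

x = torch.randn(10)
def loss_fn(theta):  # scale-invariant toy loss: depends only on the direction of theta
    return ((theta / theta.norm()) @ x - 1.0) ** 2

def grad(theta):
    theta = theta.detach().requires_grad_(True)
    g, = torch.autograd.grad(loss_fn(theta), theta)
    return g

eta0, lam, steps = 0.1, 0.01, 50
rho = 1 - lam * eta0
theta0 = torch.randn(10)

# (a) GD with weight decay, constant LR: theta <- (1 - lam*eta0)*theta - eta0*grad
th_wd = theta0.clone()
for t in range(steps):
    th_wd = rho * th_wd - eta0 * grad(th_wd)

# (b) GD without weight decay, exponential LR: eta_t = rho^{-(2t+1)} * eta0
th_exp = theta0.clone()
for t in range(steps):
    eta_t = rho ** (-(2 * t + 1)) * eta0
    th_exp = th_exp - eta_t * grad(th_exp)

# Theorem 2.1: the two runs agree up to the equivalent scaling rho^t of the parameters
print(torch.allclose(th_wd, rho ** steps * th_exp, atol=1e-4))
```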
This work makes the interesting observation that it is possible to use an exponentially growing learning rate schedule when training neural networks with batch normalization. The paper provides both theoretical insights and an empirical demonstration of this remarkable property. In detail, the authors prove that for stochastic gradient descent (SGD) with momentum, this exponential learning rate schedule is equivalent to a constant learning rate plus weight decay for any scale-invariant network, including networks with Batch Normalization and other normalization methods. The paper also contains an interesting toy example where gradient descent converges when normalization or weight decay is used alone, but not when normalization and weight decay are used together.
SP:b619dae0690930ba616bfeb3e32e89de6e798993
An Exponential Learning Rate Schedule for Deep Learning
This exciting and insightful paper presents theorems (with illustrating examples and experiments) describing an equivalence between commonly used learning rate schedules with weight decay and an exponentially increasing learning rate schedule with no weight decay, for neural networks with scale-invariant weights. Hence, the results apply to a large set of commonly employed settings. The paper contains an interesting example of a neural network for which gradient descent converges with batch normalization alone, as well as with L2 regularization alone, but not when both are used together.
SP:b619dae0690930ba616bfeb3e32e89de6e798993