Neural networks with motivation
1 INTRODUCTION

Motivation is a cognitive process that propels an individual's behavior towards or away from a particular object, perceived event, or outcome (Zhang et al., 2009). Mathematically, motivation can be viewed as a subjective modulation of the perceived reward value before the reward is received. It therefore reflects an organism's wanting of the reward before the outcome is actually achieved. Computational models of motivated behavior, which are best represented by reinforcement learning (RL) models, are mostly concerned with the learning aspect of behavior. However, fluctuations in physiological states, such as confidence and motivation, can also profoundly affect behavior (Zhang et al., 2009). Modeling such factors is thus an important goal in computational neuroscience and is in the early stages of mathematical description (Berridge, 2012). Here we build a neural network theory for motivational modulation of behavior based on Q-learning and apply this theory to mice performing a Pavlovian conditioning task for which experimental observations of neural responses in the ventral pallidum (VP) are available. We show that our motivated RL model both learns to correctly predict motivation-dependent rewards in the Pavlovian conditioning task and is consistent with the responses of neurons in the VP. In particular, we show that, similarly to VP neurons, Q-learning neural networks contain two oppositely tuned populations of neurons responsive to positive and negative rewards. In the model, these two populations form a push-pull network that helps maintain motivation-dependent variables when inputs are missing. Our RL-based model is thus both consistent with experimental data and predictive of the structure of VP networks. We argue that motivation leads to complex behaviors, which may add an extra level of complexity to machine learning approaches, while remaining consistent with biological data.

2 RESULTS

Motivation is defined mathematically as a need-dependent modulation of the perceived reward value depending on the animal's extrinsic or intrinsic conditions (Zhang et al., 2009). Thus rats, which are normally repelled by high levels of salt in their food, may become attracted to a salt-containing solution following a salt-free diet (Berridge, 2012). To model this observation, Berridge & Schulkin (1989) proposed that the perceived reward $r_t$ received at time $t$ is not absolute, but is modulated by an internal variable reflecting the level of motivation, which we call $\mu$ here. The perceived level of the reward $\tilde{r}_t$ as a function of motivation $\mu$ can be expressed by the following equation:

$$\tilde{r}_t = \tilde{r}(r_t, \mu) \qquad (1)$$

In the simplest example, the reward associated with salt is given by $\tilde{r}_t = \mu r_t$. Baseline motivation towards salt can be defined by $\mu = -1$, leading to a perceived reward of $\tilde{r}_t = -r_t < 0$: normally, the presence of salt in the diet is undesired. In the salt-free condition, the motivation changes to $\mu = +1$, leading to a subjective reward of $\tilde{r}_t = +r_t \geq 0$, and the salt-containing diet becomes attractive. In reality, the function $\tilde{r}(\dots)$ defining the impact of motivation on a perceived reward is complex (Zhang et al., 2009), including the dependence on multiple factors described by a motivation vector $\vec{\mu}$. Individual components of this vector describe various needs experienced by the organism, such as thirst (e.g., $\mu_1$), appetite ($\mu_2$), etc.
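A minimal sketch of this modulation, assuming the simple multiplicative form $\tilde{r}_t = \mu r_t$ from the salt example (the function name and vectorized interface are our own, purely illustrative):

```python
import numpy as np

def perceived_reward(r, mu):
    """Motivation-modulated ('perceived') reward, simple multiplicative form.

    r  : raw reward value(s)
    mu : motivation value(s), one component per need (thirst, appetite, ...)
    """
    return np.asarray(mu) * np.asarray(r)

# Salt example: same physical reward, opposite subjective value.
salt_reward = 1.0
print(perceived_reward(salt_reward, mu=-1.0))  # baseline diet: -1.0 (aversive)
print(perceived_reward(salt_reward, mu=+1.0))  # salt-free diet: +1.0 (attractive)
```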
In this study, we explore the computational impact of the motivation vector in the context of RL and investigate the brain circuits that might implement these computations. Our approach to motivation is based on Q-learning (Watkins & Dayan, 1992), which relies on an agent estimating the Q-function, defined as the sum of future rewards given an action $a_t$ chosen in a state $\vec{s}_t$ at time point $t$:

$$Q(\vec{s}_t, a_t) = \sum_{\tau=0}^{\infty} r(\vec{s}_{t+\tau} \mid a_t)\, \gamma^{\tau}$$

(here and below, we omit averaging for simplicity). Here $0 < \gamma \leq 1$ is the discount factor that keeps the sum from diverging and balances the preference for short- versus long-term rewards. If a correct Q-function is known, a rational agent picks an action that maximizes future rewards: $a_t \leftarrow \operatorname{argmax}_a Q(\vec{s}_t, a)$. In the case of motivation as in equation 1, because reward values are affected by the motivation vector $\vec{\mu}$, the Q-function becomes:

$$Q(\vec{s}_t, a_t, \vec{\mu}) = \sum_{\tau=0}^{\infty} \tilde{r}(\vec{s}_{t+\tau}, \vec{\mu}_{t+\tau} \mid a_t)\, \gamma^{\tau} \qquad (2)$$

Here $\tilde{r}(\vec{s}_{t+\tau}, \vec{\mu}_{t+\tau} \mid a_t)$ is the motivation-dependent perceived reward obtained in a state $\vec{s}_{t+\tau}$ reached at time $t+\tau$ given the action $a_t$ chosen at time $t$. The state of the agent $\vec{s}_t$ and its motivation $\vec{\mu}$ are distinct. Motivation is a slowly changing variable that, on average, is not affected substantially by a single action. For example, an animal's appetite does not change substantially during a single trial. At the same time, the actions selected by the animal lead to immediate changes in the animal's state $\vec{s}_t$. Recent research in neuroscience suggests that motivation and state may be represented and computed separately in the mammalian brain. Whereas motivation is usually attributed to regions of the reward system, such as the VP (Berridge & Schulkin, 1989; Berridge, 2012), the state is likely to be computed elsewhere, e.g., in the hippocampus (Eichenbaum et al., 1999) or cortex. In RL, an agent's state and motivation may have different mathematical representations. In the examples below, the state variable is given by a one-hot vector, while motivation is represented by a full vector. The two arguments of the Q-function, $\vec{s}_t$ and $\vec{\mu}$, are therefore distinct. Finally, in a hierarchical RL implementation, motivation is provided by a higher-level network, while information about the state is generated externally.

Although the Q-function with motivation (equation 2) is similar to the Q-function in goal-conditioned RL (Schaul et al., 2015; Andrychowicz et al., 2017), the underlying learning dynamics are different. Motivated behavior pursues multiple distributed sources of dynamic rewards, so the Q-function accounts for the future dynamics of motivation. In this way, an agent with motivation chooses which reward to pursue, which also distinguishes it from RL with subgoals (Sutton et al., 1999). Behavior with motivation therefore involves minimal to no handcrafted features, which suggests that motivation could provide a step towards general methods that leverage computation, a goal identified by Richard Sutton (2019). As in standard Q-learning, the action chosen by a rational agent maximizes the sum of expected future perceived rewards, i.e., $a_t \leftarrow \operatorname{argmax}_a Q(\vec{s}_t, a, \vec{\mu})$. To learn a correct Q-function, one can use the Temporal Difference (TD) method (Sutton & Barto, 1998). If the Q-function is learned perfectly, it satisfies the recursive relationship $Q(\vec{s}_t, a_t, \vec{\mu}) = \tilde{r}(\vec{s}_t, \vec{\mu}_t) + \gamma \max_{a_{t+1}} Q(\vec{s}_{t+1}, a_{t+1}, \vec{\mu}_{t+1})$.
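A compact sketch of this action rule; the tabular Q-function and crude discretization of the motivation vector are stand-ins for illustration, not the paper's implementation:

```python
import numpy as np
from collections import defaultdict

n_actions = 5
# Tabular stand-in for Q(s, a, mu), keyed by (state, motivation); illustrative only.
Q = defaultdict(lambda: np.zeros(n_actions))
rng = np.random.default_rng(0)

def select_action(state, mu, eps=0.1):
    """a_t <- argmax_a Q(s_t, a, mu), with a little epsilon-greedy exploration."""
    if rng.random() < eps:
        return int(rng.integers(n_actions))          # explore
    return int(np.argmax(Q[(state, tuple(mu))]))     # exploit

print(select_action(state=0, mu=(1, 0, 0, 2)))
```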
For an incompletely learned motivation-dependent Q-function, the TD error $\delta$ is non-zero:

$$\delta = \tilde{r}(\vec{s}_t, \vec{\mu}_t) + \gamma \max_{a_{t+1}} Q(\vec{s}_{t+1}, a_{t+1}, \vec{\mu}_{t+1}) - Q(\vec{s}_t, a_t, \vec{\mu}_t) \qquad (3)$$

The TD error can be used to update the motivation-dependent Q-function directly or to train neural networks to optimize their policy. The Q-function depends on the new set of variables $\vec{\mu}$ that evolve following their own rules. These variables reflect fluctuations in physiological or psychological states that substantially change the reward function and can therefore generate flexible behaviors dependent on the animal's ongoing needs. We trained neural networks via backpropagation of the TD error (equation 3), an approach employed in deep Q-learning (Mnih et al., 2015). Below we present several examples in which neural networks could be trained to solve motivation-dependent tasks.

2.1 THE FOUR DEMANDS TASK

Consider the example in Figure 1. An agent navigates a 6x6 square gridworld separated into four 3x3 subdivisions (rooms) (Figure 1A). The environment was inspired by the work of Sutton et al. (1999); however, the task is different, as described below. In each room, the agent receives one and only one type of reward $r_n(x_t, y_t)$, where $n = 1 \dots 4$ (Figure 1B). These rewards can be viewed as four different resources, such as water, food, sleep, and work. Motivation is described in this system by a 4D vector $\vec{\mu}$ defining the agent's affinity for each of these resources. When the agent enters room number $n$, the corresponding resource in the room is consumed, the agent receives a reward defined by $\tilde{r}_t = \mu_n$, and the corresponding component of the motivation vector $\mu_n$ is reset to zero (Figure 1C). On the next time step, the motivations for all four rooms are increased by one, i.e., $\mu_n \leftarrow \mu_n + 1$, which reflects additional "wanting" of the resource induced by a "growing appetite" (a code sketch of these dynamics follows this passage). After a prolonged period of building up appetite, the motivation towards a resource saturates at a fixed maximum value $\theta$, which becomes a parameter of this model that determines the behavior.

What are the potential behaviors of the agent? Assume that the maximum allowed motivation $\theta$ is large and does not influence our results. If the agent always stays in the same room (one-room binge strategy, Figure 1D), the rewards received by the agent consist of a sequence of zeros and ones, i.e., 0, 1, 0, 1, ... (in our model, after the motivation is set to zero, it is increased by one on the next time step). The average reward corresponding to this strategy is therefore $\bar{r}_{\text{one-room binge}} = 1/2$. The average reward can be increased if the agent jumps from room to room on each time step (a two-room binge strategy, Figure 1E). In this case, the agent receives a reward of one on every step, and the average reward is $\bar{r}_{\text{two-room binge}} = 1$. Two-room binging therefore outperforms the one-room binge strategy. Finally, the agent can migrate by moving in a cycle through all four rooms (Figure 1F). In this case, the agent spends three steps in each room, and the overall period of migration is 12 steps. During these three steps, the agent receives rewards of 9 (the agent left this room nine steps ago), then 0, and 1, so $\bar{r}_{\text{migration}} = 10/3$. Thus, the migration strategy is more beneficial for the agent than both binging strategies. Migration, however, is affected by the maximum allowed motivation value $\theta$.
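The following sketch implements one consistent reading of these dynamics: the parenthetical "after the motivation is set to zero, it is increased by one on the next time step" is taken to mean that a just-consumed component resumes growing only on the following step (reading 0 one step after consumption, 1 two steps after). This reading reproduces all three average rewards quoted above; the class is our own illustration, not the authors' code.

```python
import numpy as np

class FourDemands:
    """Motivation dynamics: the occupied room's (positive) resource is consumed
    (reward = mu_n, then mu_n -> 0); a just-consumed component resumes growing
    on the following step; all components saturate at theta."""

    def __init__(self, theta=20.0, n_rooms=4):
        self.theta = theta
        self.mu = np.zeros(n_rooms)

    def step(self, room):
        reward = self.mu[room]
        if reward > 0:
            self.mu[room] = -1.0   # "-1 then +1" => reads 0 on the next step
        self.mu = np.minimum(self.mu + 1.0, self.theta)  # growing appetite
        return reward

def average_reward(rooms, theta=20.0):
    env = FourDemands(theta=theta)
    return np.mean([env.step(r) for r in rooms])

migrate = ([0] * 3 + [1] * 3 + [2] * 3 + [3] * 3) * 100
print(average_reward([0] * 1000))           # one-room binge     -> ~1/2
print(average_reward([0, 1] * 500))         # two-room binge     -> ~1
print(average_reward(migrate))              # migration          -> ~10/3
print(average_reward(migrate, theta=1.0))   # migration, theta=1 -> ~2/3 (see below)
```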
When $\theta < 9$, the benefits of the migration strategy are reduced. For $\theta = 1$, for example, migration yields a reward rate of just $\bar{r}_{\text{migration}}|_{\theta=1} = 2/3$, which is below the return of two-room binging. Thus, our model should display various behaviors depending on $\theta$. We trained a simple feedforward neural network (Figure 2A) to generate behaviors using the state vector and the 4D vector of motivations as inputs. The network computed Q-values for five possible actions (up, down, left, right, stay), using the TD method and backpropagating the $\delta$ signal. The binary 36D (6x6) one-hot state vector represented the agent's position. The network was trained 41 times for different values of the maximum allowed motivation $\theta$. As expected, the behavior displayed by the network depended on this parameter. The phase diagram of the agent's behaviors (Figure 2B, blue circles) shows that the agent successfully discovered the migration and two-room binge strategies for high and low values of $\theta$, respectively. For intermediate values of $\theta$ ($1.7 < \theta < 3$), the network discovered a delayed two-room binging strategy, in which it spent an extra step in one of the rooms. Networks with motivation can also display a variety of complex behaviors under different motivation dynamics, such as binging, addiction, withdrawal, etc. In one example, by increasing the maximum motivation value for one of the demands ("smoking"), we trained networks to display "smoking addiction" (Figure 3A, B).

Does motivation contribute to learning optimal strategies? To address this question, we performed a similar set of simulations, except that the motivation input to the network was suppressed ($\mu = 0$). Although the input to such "non-motivated" networks was sufficient to recover the optimal strategies, in most of the simulations the agents exercised two-room binging (Figure 2B, yellow circles). The migration strategy, despite being optimal in 3/4 of the simulations, was successfully learned by only a single agent out of 41. Moreover, the performance of the non-motivated networks was often comparable to that of a random walk (Figure 2B, orange circles). We conclude that motivation may facilitate learning by providing additional cues for temporal credit assignment in the rewards. Overall, we suggest that motivation is helpful in generating complex ongoing behaviors based on simple conditions.
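A minimal PyTorch sketch of the network and TD update described above (36D one-hot state plus 4D motivation in, five Q-values out); the hidden width and optimizer settings are our own assumptions, not reported details:

```python
import torch
import torch.nn as nn

n_states, n_motiv, n_actions = 36, 4, 5

# Feedforward Q-network: concatenated one-hot state and motivation vector in,
# one Q-value per action (up, down, left, right, stay) out.
q_net = nn.Sequential(
    nn.Linear(n_states + n_motiv, 64),  # hidden size is an assumption
    nn.ReLU(),
    nn.Linear(64, n_actions),
)
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def td_update(s, mu, a, reward, s_next, mu_next, gamma=0.9):
    """One TD step: delta = r~ + gamma * max_a' Q(s', a', mu') - Q(s, a, mu)."""
    q_sa = q_net(torch.cat([s, mu]))[a]
    with torch.no_grad():
        target = reward + gamma * q_net(torch.cat([s_next, mu_next])).max()
    loss = (target - q_sa) ** 2      # squared TD error, backpropagated
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Example call with dummy tensors:
s = torch.zeros(n_states); s[0] = 1.0
s2 = torch.zeros(n_states); s2[1] = 1.0
mu = torch.tensor([1.0, 0.0, 2.0, 3.0])
print(td_update(s, mu, a=2, reward=1.0, s_next=s2, mu_next=mu))
```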
This paper presents a computational model of motivation for Q-learning and relates it to biological models of motivation. Motivation is presented to the agent as a component of its inputs and is encoded in a vectorised reward function in which each component of the reward is weighted. This approach is explored in three domains: a modified four-room domain where each room represents a different reward in the reward vector, a route planning problem, and a Pavlovian conditioning example where neuronal activations are compared to those of mice undergoing similar conditioning.
SP:faca1e6eda4ad3b91ab99995e420398c01cc0e42
The authors investigate mechanisms underlying action selection in artificial agents and mice. To achieve this goal, they use RL to train neural networks to choose actions that maximize their temporally discounted sum of future rewards. Importantly, these rewards depend on a motivation factor that is itself a function of time and action; this motivation factor is the key difference between the authors' approach and "vanilla" RL. In simple tasks, the RL agent learns effective strategies (i.e., migrating between rooms in Fig. 1, and minimizing path lengths for the vehicle routing problem in Fig. 5).
Expected Information Maximization: Using the I-Projection for Mixture Density Estimation
1 INTRODUCTION

Learning the density of highly multi-modal distributions is a challenging machine learning problem relevant to many fields, such as modelling human behavior (Pentland & Liu, 1999). Most common methods rely on maximizing the likelihood of the data. It is well known that the maximum likelihood solution corresponds to computing the M(oment)-projection of the data distribution onto the parametric model distribution (Bishop, 2006). Yet the M-projection averages over multiple modes when the model distribution is not rich enough to fully represent the data (Bishop, 2006). This averaging effect can result in poor models that put most of the probability mass in areas not covered by the data. The counterpart of the M-projection is the I(nformation)-projection. The I-projection concentrates on the modes the model is able to represent and ignores the remaining ones. Hence, it does not suffer from the averaging effect (Bishop, 2006). In this paper, we explore the I-projection for mixture models, which are typically trained by maximizing the likelihood via expectation maximization (EM) (Dempster et al., 1977). Despite the richness of mixture models, the averaging problem remains, as we typically do not know the correct number of modes and it is hard to identify all modes of the data correctly. By using the I-projection, our mixture models do not suffer from this averaging effect and can generate more realistic samples that are less distinguishable from the data. In this paper we concentrate on learning Gaussian mixture models and conditional Gaussian mixtures of experts (Jacobs et al., 1991) where the mean and covariance matrix are generated by deep neural networks.

We propose Expected Information Maximization (EIM)¹, a novel approach capable of computing the I-projection between the model and the data. By exploiting the structure of the I-projection, we can derive a variational upper bound objective, which was previously used in the context of variational inference (Maaløe et al., 2016; Ranganath et al., 2016; Arenz et al., 2018). In order to work with this upper bound objective based on samples, we use a discriminator to approximate the required density ratio, relating our approach to GANs (Goodfellow et al., 2014; Nowozin et al., 2016; Uehara et al., 2016). The discriminator also allows us to use additional discriminative features to improve model quality. In our experiments, we demonstrate that EIM is much more effective at computing the I-projection than recent GAN approaches. We apply EIM to a synthetic obstacle avoidance task, an inverse kinematics task of a redundant robot arm, as well as a pedestrian and car prediction task using the Stanford Drone Dataset (Robicquet et al., 2016) and a traffic dataset from the Next Generation Simulation program.

2 PRELIMINARIES

Our approach builds heavily on minimizing Kullback-Leibler divergences as well as on the estimation of density ratios. We therefore briefly review both concepts.

Density Ratio Estimation. Our approach relies on estimating density ratios $r(x) = q(x)/p(x)$ based on samples of $q(x)$ and $p(x)$. Sugiyama et al. (2012) introduced a framework to estimate such density ratios based on the minimization of Bregman divergences (Bregman, 1967). For our work we employ one approach from this framework, namely density ratio estimation by binary logistic regression.
Assume a logistic regressor $C(x) = \sigma(\phi(x))$ with logits $\phi(x)$ and sigmoid activation function $\sigma$. Further, we train $C(x)$ to predict the probability that a given sample $x$ was drawn from $q(x)$. It can be shown that such a logistic regressor using a cross-entropy loss is optimal for $C(x) = q(x)/(q(x) + p(x))$. Using this relation, we can compute the log density ratio estimator by

$$\log \frac{q(x)}{p(x)} = \log \frac{q(x)/(q(x) + p(x))}{p(x)/(q(x) + p(x))} = \log \frac{C(x)}{1 - C(x)} = \sigma^{-1}(C(x)) = \phi(x).$$

The logistic regressor is trained by minimizing the binary cross-entropy

$$\operatorname{argmin}_{\phi(x)} \mathrm{BCE}(\phi(x), p(x), q(x)) = -\mathbb{E}_{q(x)}\left[\log \sigma(\phi(x))\right] - \mathbb{E}_{p(x)}\left[\log\left(1 - \sigma(\phi(x))\right)\right],$$

where different regularization techniques, such as $\ell_2$ regularization or dropout (Srivastava et al., 2014), can be used to avoid overfitting.

¹Code available at https://github.com/pbecker93/ExpectedInformationMaximization
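A small self-contained sketch of this estimator, using a toy pair of 1D Gaussians for which the true log-ratio is known in closed form; the architecture and training settings are illustrative assumptions:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy problem: q = N(1, 1), p = N(0, 1), so log q(x)/p(x) = x - 0.5 exactly.
x_q = torch.randn(5000, 1) + 1.0
x_p = torch.randn(5000, 1)

# Logistic regressor C(x) = sigmoid(phi(x)); phi are the logits.
phi = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(phi.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()
ones, zeros = torch.ones(5000, 1), torch.zeros(5000, 1)

# Minimize BCE(phi, p, q): label 1 for samples from q, label 0 for samples from p.
for _ in range(300):
    loss = bce(phi(x_q), ones) + bce(phi(x_p), zeros)
    opt.zero_grad(); loss.backward(); opt.step()

# At the optimum the logits equal the log density ratio.
x = torch.tensor([[0.0], [1.0], [2.0]])
print(phi(x).detach().squeeze())  # should be close to [-0.5, 0.5, 1.5]
```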
Moment and Information Projection. The Kullback-Leibler divergence (Kullback & Leibler, 1951) is a standard similarity measure for distributions. It is defined as $KL(p(x)\,||\,q(x)) = \int p(x) \log \frac{p(x)}{q(x)}\, dx$. Due to its asymmetry, the Kullback-Leibler divergence provides two different optimization problems (Bishop, 2006) for fitting a model distribution $q(x)$ to a target distribution $p(x)$, namely

$$\underbrace{\operatorname{argmin}_{q(x)} KL(p(x)\,||\,q(x))}_{\text{Moment-projection}} \qquad \text{and} \qquad \underbrace{\operatorname{argmin}_{q(x)} KL(q(x)\,||\,p(x))}_{\text{Information-projection}}.$$

Here, we assume that $p(x)$ is the data distribution, i.e., $p(x)$ is unknown but we have access to samples from it. It can easily be seen that computing the M-projection onto the data distribution is equivalent to maximizing the likelihood (ML) of the model (Bishop, 2006). ML solutions match the moments of the model with the moments of the target distribution, which results in averaging over modes that cannot be represented by the model. In contrast, the I-projection forces the learned generator $q(x)$ to have low probability wherever $p(x)$ has low probability, which is also called zero forcing.

3 RELATED WORK

We now discuss competing methods for computing the I-projection that are based on GANs. These are, to the best of our knowledge, the only other approaches capable of computing the I-projection solely from samples of the target distribution. Furthermore, we distinguish our approach from approaches based on variational inference that also use the I-projection.

Variational Inference. The I-projection is a common objective in variational inference (Opper & Saad, 2001; Bishop, 2006; Kingma & Welling, 2013). These methods aim to fit tractable approximations to intractable distributions whose unnormalized density is available. EIM, on the other hand, does not assume access to the unnormalized density of the target distribution but only to samples. Hence, it is not a variational inference approach but a density estimation approach. However, our approach uses an upper bound that has previously been applied to variational inference (Maaløe et al., 2016; Ranganath et al., 2016; Arenz et al., 2018). EIM is especially related to the VIPS algorithm (Arenz et al., 2018), which we extend from the variational inference case to the density estimation case. Additionally, we introduce conditional latent variable models into the approach.

Generative Adversarial Networks. While the original GAN approach minimizes the Jensen-Shannon divergence (Goodfellow et al., 2014), GANs have since been adapted to a variety of other distance measures between distributions, such as the Wasserstein distance (Arjovsky et al., 2017), the symmetric KL (Chen et al., 2018), and arbitrary f-divergences (Ali & Silvey, 1966; Nowozin et al., 2016; Uehara et al., 2016; Poole et al., 2016). Since the I-projection is a special case of an f-divergence, those approaches are of particular relevance to our work. Nowozin et al. (2016) use a variational bound for f-divergences (Nguyen et al., 2010) to derive their approach, the f-GAN. Uehara et al. (2016) use a bound that directly follows from the density-ratio-estimation-under-Bregman-divergences framework introduced by Sugiyama et al. (2012) to obtain their b-GAN. While the b-GAN's discriminator directly estimates the density ratio, the f-GAN's discriminator estimates an invertible mapping of the density ratio. Yet, in the case of the I-projection, both the f-GAN and the b-GAN yield the same objective, as we show in Appendix C.2. For both the f-GAN and the b-GAN, the desired f-divergence determines the discriminator objective. Uehara et al. (2016) note that the discriminator objective implied by the I-projection is unstable. As both approaches are formulated in a general way to minimize any f-divergence, they do not exploit the special structure of the I-projection. Exploiting this structure permits us to apply a tight upper bound of the I-projection for latent variable models, which results in a higher quality of the estimated models. Li et al. (2019) introduce an adversarial approach to compute the I-projection based on density ratios estimated by logistic regression. Yet, their approach assumes access to the unnormalized target density, i.e., they are working in a variational inference setting. The most important difference to GANs is that we do not base EIM on an adversarial formulation, and no adversarial game has to be solved. This removes a major source of instability in the training process, which we discuss in more detail in Section 4.3.

4 EXPECTED INFORMATION MAXIMIZATION

Expected Information Maximization (EIM) is a general algorithm for minimizing the I-projection for any latent variable model. We first derive EIM for general marginal latent variable models, i.e., $q(x) = \int q(x|z)\, q(z)\, dz$, and subsequently extend our derivations to conditional latent variable models, i.e., $q(x|y) = \int q(x|z, y)\, q(z|y)\, dz$. EIM uses an upper bound for the objective of the marginal distribution. Similar to expectation-maximization (EM), our algorithm iterates between an M-step and an E-step. In the M-step, we minimize the upper bound, and in the E-step we tighten it using a variational distribution.

4.1 EIM FOR LATENT VARIABLE MODELS

The I-projection can be simplified using a (tight) variational upper bound (Arenz et al., 2018), which can be obtained by introducing an auxiliary distribution $\tilde{q}(z|x)$ and using Bayes' rule:

$$KL(q(x)\,||\,p(x)) = \underbrace{U_{\tilde{q},p}(q)}_{\text{upper bound}} - \mathbb{E}_{q(x)}\Big[\underbrace{KL(q(z|x)\,||\,\tilde{q}(z|x))}_{\geq 0}\Big],$$

$$\text{where} \quad U_{\tilde{q},p}(q) = \iint q(x|z)\, q(z) \left( \log \frac{q(x|z)\, q(z)}{p(x)} - \log \tilde{q}(z|x) \right) dz\, dx. \qquad (1)$$

The derivation of the bound is given in Appendix B. It is easy to see that $U_{\tilde{q},p}(q)$ is an upper bound, as the expected KL term is always non-negative.
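Equation 1 can be checked numerically in a small discrete model. The following sketch (our own toy construction, not from the paper) confirms that the bound holds for an arbitrary $\tilde{q}(z|x)$ and is tight when $\tilde{q}$ equals the true posterior $q(z|x)$:

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(a, axis=None):
    return a / a.sum(axis=axis, keepdims=axis is not None)

# Discrete toy model: latent z in {0,1}, observed x in {0..4}.
q_z = np.array([0.3, 0.7])                       # q(z)
q_x_given_z = normalize(rng.random((2, 5)), 1)   # q(x|z), rows sum to 1
p_x = normalize(rng.random(5))                   # target p(x)
q_x = q_z @ q_x_given_z                          # marginal q(x)

kl = np.sum(q_x * np.log(q_x / p_x))             # I-projection objective KL(q||p)

def upper_bound(qt_z_given_x):
    """U_{q~,p}(q) from Eq. 1 for an auxiliary distribution q~(z|x)."""
    joint = q_z[:, None] * q_x_given_z           # q(x|z) q(z)
    return np.sum(joint * (np.log(joint / p_x[None, :]) - np.log(qt_z_given_x)))

posterior = normalize(q_z[:, None] * q_x_given_z, 0)   # true q(z|x): tightens bound
arbitrary = normalize(rng.random((2, 5)), 0)           # any other q~(z|x)

print(kl)                       # the I-projection objective
print(upper_bound(posterior))   # equals KL(q||p): the bound is tight
print(upper_bound(arbitrary))   # strictly larger: U is an upper bound
```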
In the corresponding E-step, the model from the previous iteration, which we denote $q_t(x)$, is used to tighten the bound by setting $\tilde{q}(z|x) = q_t(x|z)\, q_t(z) / q_t(x)$. In the M-step, we update the model distribution by minimizing the upper bound $U_{\tilde{q},p}(q)$. Yet, as opposed to Arenz et al. (2018), we cannot work directly with the upper bound, since it still depends on $\log p(x)$, which we cannot evaluate. However, we can reformulate the upper bound by inserting the given relation for $\tilde{q}(z|x)$ from the E-step into Eq. 1:

$$U_{q_t,p}(q) = \int q(z) \left( \int q(x|z) \log \frac{q_t(x)}{p(x)}\, dx + KL(q(x|z)\,||\,q_t(x|z)) \right) dz + KL(q(z)\,||\,q_t(z)). \qquad (2)$$

The upper bound now contains a density ratio between the old model distribution and the data. This density ratio can be estimated from samples of $q_t$ and $p$, for example by using logistic regression as shown in Section 2. We can use the logits $\phi(x)$ of such a logistic regressor to estimate the log density ratio $\log(q_t(x)/p(x))$ in Equation 2. This yields an upper bound $U_{q_t,\phi}(q)$ that depends on $\phi(x)$ instead of $p(x)$. Optimizing this bound corresponds to the M-step of our approach. In the E-step, we set $q_t$ to the newly obtained $q$ and retrain the density ratio estimator $\phi(x)$. Both steps formally result in the following bilevel optimization problem:

$$q_{t+1} \in \operatorname{argmin}_{q(x)} U_{q_t,\phi^*}(q) \quad \text{s.t.} \quad \phi^*(x) \in \operatorname{argmin}_{\phi(x)} \mathrm{BCE}(\phi(x), p(x), q_t(x)).$$

Using a discriminator also comes with the advantage that we can feed additional discriminative features $g(x)$ to the discriminator that are not directly available to the generator. For example, if $x$ models trajectories of pedestrians, $g(x)$ could indicate whether the trajectory reaches any implausible positions, such as rooftops or trees. These features simplify the discrimination task and can therefore improve model accuracy, which is not possible with M-projection-based algorithms such as EM.
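To make the loop concrete, here is a minimal sketch of the bilevel iteration for the simplest possible model: a single 1D Gaussian with no latent variable, in which case Eq. 2 reduces to $\mathbb{E}_q[\phi(x)] + KL(q\,||\,q_t)$. Network sizes, step counts, and learning rates are assumptions; due to the zero-forcing property, $q$ should settle near one mode of the bimodal target rather than averaging over both.

```python
import torch
import torch.nn as nn
import torch.distributions as D

torch.manual_seed(0)

# Bimodal 1D target p(x); EIM only ever sees samples from it.
p = D.MixtureSameFamily(D.Categorical(torch.tensor([0.5, 0.5])),
                        D.Normal(torch.tensor([-3.0, 3.0]), torch.tensor([0.5, 0.5])))

mean = torch.zeros(1, requires_grad=True)
log_std = torch.zeros(1, requires_grad=True)

for t in range(20):
    q_t = D.Normal(mean.detach().clone(), log_std.detach().exp())

    # Inner problem: retrain the discriminator so phi(x) ~ log q_t(x)/p(x).
    phi = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
    d_opt = torch.optim.Adam(phi.parameters(), lr=1e-2)
    bce = nn.BCEWithLogitsLoss()
    x_q, x_p = q_t.sample((512,)), p.sample((512, 1))
    for _ in range(200):
        d_loss = bce(phi(x_q), torch.ones(512, 1)) + bce(phi(x_p), torch.zeros(512, 1))
        d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Outer problem (M-step): minimize E_q[phi(x)] + KL(q || q_t).
    q_opt = torch.optim.Adam([mean, log_std], lr=5e-2)
    for _ in range(50):
        q = D.Normal(mean, log_std.exp())
        u = phi(q.rsample((512,))).mean() + D.kl_divergence(q, q_t).sum()
        q_opt.zero_grad(); u.backward(); q_opt.step()

print(mean.item(), log_std.exp().item())  # q should sit near one mode (about +-3)
```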
The paper presents an algorithm to match two distributions with latent variables, named Expected Information Maximization (EIM). Specifically, EIM is based on the I-projection, which is basically equivalent to minimizing the reverse KL divergence (i.e., min KL[p_model || p_data]); to handle latent variables, an upper bound is derived, which is the corresponding reverse KL divergence in the joint space. To minimize that joint reverse KL, a specific procedure is developed, leading to the presented EIM. EIM variants for different applications are discussed. Fancy robot-related experiments are used to evaluate the presented algorithm.
SP:5ca4c62eae1c6a5a870524715c3be44c40383f98
This paper proposes EIM, an analog of EM that performs the I-projection (i.e., the reverse KL) instead of the usual M-projection used in EM. The motivation is that the reverse KL is mode-seeking, in contrast to the forward KL, which is mode-covering. The authors argue that, when the model is misspecified, the I-projection is sometimes desirable, as it avoids putting mass on very unlikely regions of the space under the target p.
The Usual Suspects? Reassessing Blame for VAE Posterior Collapse
1 INTRODUCTION

The variational autoencoder (VAE) (Kingma & Welling, 2014; Rezende et al., 2014) represents a powerful generative model of data points that are assumed to possess some complex yet unknown latent structure. This assumption is instantiated via the marginalized distribution

$$p_\theta(x) = \int p_\theta(x|z)\, p(z)\, dz, \qquad (1)$$

which forms the basis of prevailing VAE models. Here $z \in \mathbb{R}^\kappa$ is a collection of unobservable latent factors of variation that, when drawn from the prior $p(z)$, are colloquially said to generate an observed data point $x \in \mathbb{R}^d$ through the conditional distribution $p_\theta(x|z)$. The latter is controlled by parameters $\theta$ that can, at least conceptually speaking, be optimized by maximum likelihood over $p_\theta(x)$ given available training examples. In particular, assuming $n$ training points $X = [x^{(1)}, \dots, x^{(n)}]$, maximum likelihood estimation is tantamount to minimizing the negative log-likelihood expression $\frac{1}{n}\sum_i -\log[p_\theta(x^{(i)})]$. Proceeding further, because the marginalization over $z$ in (1) is often intractable, the VAE instead minimizes a convenient variational upper bound given by

$$\mathcal{L}(\theta, \phi) \triangleq \frac{1}{n}\sum_{i=1}^{n}\left\{ -\mathbb{E}_{q_\phi(z|x^{(i)})}\left[\log p_\theta(x^{(i)}|z)\right] + KL\left[q_\phi(z|x^{(i)})\,||\,p(z)\right]\right\} \geq \frac{1}{n}\sum_{i=1}^{n} -\log[p_\theta(x^{(i)})], \qquad (2)$$

with equality iff $q_\phi(z|x^{(i)}) = p_\theta(z|x^{(i)})$ for all $i$. The additional parameters $\phi$ govern the shape of the variational distribution $q_\phi(z|x)$, which is designed to approximate the true but often intractable latent posterior $p_\theta(z|x)$.

The VAE energy from (2) is composed of two terms: a data-fitting loss that borrows the basic structure of an autoencoder (AE), and a KL-divergence-based regularization factor. The former incentivizes assigning high probability to latent codes $z$ that facilitate accurate reconstructions of each $x^{(i)}$. In fact, if $q_\phi(z|x)$ is a Dirac delta function, this term is exactly equivalent to a deterministic AE with data reconstruction loss defined by $-\log p_\theta(x|z)$. Overall, it is because of this association that $q_\phi(z|x)$ is generally referred to as the encoder distribution, while $p_\theta(x|z)$ denotes the decoder distribution. Additionally, the KL regularizer $KL[q_\phi(z|x)\,||\,p(z)]$ pushes the encoder distribution towards the prior without violating the variational bound.

For continuous data, which will be our primary focus herein, it is typical to assume that

$$p(z) = \mathcal{N}(z|0, I), \quad p_\theta(x|z) = \mathcal{N}(x|\mu_x, \gamma I), \quad \text{and} \quad q_\phi(z|x) = \mathcal{N}(z|\mu_z, \Sigma_z), \qquad (3)$$

where $\gamma > 0$ is a scalar variance parameter, while the Gaussian moments $\mu_x \equiv \mu_x(z;\theta)$, $\mu_z \equiv \mu_z(x;\phi)$, and $\Sigma_z \equiv \operatorname{diag}[\sigma_z(x;\phi)]^2$ are computed via feedforward neural network layers. The encoder network, parameterized by $\phi$, takes $x$ as input and outputs $\mu_z$ and $\Sigma_z$. Similarly, the decoder network, parameterized by $\theta$, converts a latent code $z$ into $\mu_x$. Given these assumptions, the generic VAE objective from (2) can be refined to

$$\mathcal{L}(\theta, \phi) = \frac{1}{n}\sum_{i=1}^{n}\left\{ \mathbb{E}_{q_\phi(z|x^{(i)})}\left[\tfrac{1}{\gamma}\,\big\|x^{(i)} - \mu_x(z;\theta)\big\|_2^2\right] + d\log\gamma + \big\|\sigma_z(x^{(i)};\phi)\big\|_2^2 - \log\Big|\operatorname{diag}\big[\sigma_z(x^{(i)};\phi)\big]^2\Big| + \big\|\mu_z(x^{(i)};\phi)\big\|_2^2 \right\}, \qquad (4)$$

excluding an inconsequential factor of $1/2$. This expression can be optimized using SGD and a simple reparameterization strategy (Kingma & Welling, 2014; Rezende et al., 2014) to produce parameter estimates $\{\theta^*, \phi^*\}$.
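A single-sample Monte Carlo sketch of objective (4); the linear encoder/decoder and the dimensions are placeholders, and $\gamma$ is treated as a learnable scalar as the text allows:

```python
import torch
import torch.nn as nn

d, kappa = 10, 2   # data and latent dimensions (illustrative)

encoder = nn.Linear(d, 2 * kappa)                 # outputs mu_z and log sigma_z^2
decoder = nn.Linear(kappa, d)                     # outputs mu_x
log_gamma = torch.zeros((), requires_grad=True)   # scalar decoder variance gamma

def vae_loss(x):
    """Batch version of Eq. 4 (the factor 1/2 is dropped, as in the text)."""
    mu_z, log_var_z = encoder(x).chunk(2, dim=-1)
    sigma_z = (0.5 * log_var_z).exp()
    z = mu_z + sigma_z * torch.randn_like(sigma_z)   # reparameterization trick
    mu_x = decoder(z)
    recon = ((x - mu_x) ** 2).sum(-1) / log_gamma.exp()  # ||x - mu_x||^2 / gamma
    kl = (sigma_z ** 2).sum(-1) - log_var_z.sum(-1) + (mu_z ** 2).sum(-1)
    return (recon + d * log_gamma + kl).mean()

x = torch.randn(32, d)
print(vae_loss(x))   # one stochastic evaluation of the objective
```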
Among other things, new samples approximating the training data can then be generated via the ancestral process z_new ∼ N(z|0, I) and x_new ∼ p_θ*(x|z_new). Although it has been argued that global minima of (4) may correspond with the optimal recovery of ground-truth distributions in certain asymptotic settings (Dai & Wipf, 2019), it is well known that in practice VAE models are at risk of converging to degenerate solutions where, for example, it may be that q_φ(z|x) = p(z). This phenomenon, commonly referred to as VAE posterior collapse (He et al., 2019; Razavi et al., 2019), has been acknowledged and analyzed from a variety of different perspectives, as we detail in Section 2. That being said, we would argue that there remains lingering ambiguity regarding the different types and respective causes of posterior collapse. Consequently, Section 3 provides a useful taxonomy that will serve to contextualize our main technical contributions. These include the following:
• Building upon existing analysis of affine VAE decoder models, in Section 4 we prove that even arbitrarily small nonlinear activations can introduce suboptimal local minima exhibiting posterior collapse.
• We demonstrate in Section 5 that if the encoder/decoder networks are incapable of sufficiently reducing the VAE reconstruction errors, even in a deterministic setting with no KL-divergence regularizer, there will exist an implicit lower bound on the optimal value of γ. Moreover, we prove that if this γ is sufficiently large, the VAE will behave like an aggressive thresholding operator, enforcing exact posterior collapse, i.e., q_φ(z|x) = p(z).
• Based on these observations, we present experiments in Section 6 establishing that as network depth/capacity is increased, even for deterministic AE models with no regularization, reconstruction errors become worse. This bounds the effective VAE trade-off parameter γ such that posterior collapse is essentially inevitable.
Collectively then, we provide convincing evidence that posterior collapse is, at least in certain settings, the fault of deep AE local minima, and need not be exclusively a consequence of usual suspects such as the KL-divergence term. We conclude in Section 7 with practical take-home messages, and motivate the search for improved AE architectures and training regimes that might be leveraged by analogous VAE models.

2 RECENT WORK AND THE USUAL SUSPECTS FOR INSTIGATING COLLAPSE . Posterior collapse under various guises is one of the most frequently addressed topics related to VAE performance. Depending on the context, arguably the most common and seemingly transparent suspect for causing collapse is the KL regularization factor, which is obviously minimized by q_φ(z|x) = p(z). This perception has inspired various countermeasures, including heuristic annealing of the KL penalty or KL warm-start (Bowman et al., 2015; Huang et al., 2018; Sønderby et al., 2016), tighter bounds on the log-likelihood (Burda et al., 2015; Rezende & Mohamed, 2015), more complex priors (Bauer & Mnih, 2018; Tomczak & Welling, 2018), modified decoder architectures (Cai et al., 2017; Dieng et al., 2018; Yeung et al., 2017), or efforts to explicitly disallow the prior from ever equaling the variational distribution (Razavi et al., 2019).
Thus far though, most published results do not indicate success generating high-resolution images, and in the majority of cases evaluations are limited to small images and/or relatively shallow networks. This suggests that there may be more nuance involved in pinpointing the causes and potential remedies of posterior collapse. One notable exception is the BIVA model from (Maaløe et al., 2019), which employs a bidirectional hierarchy of latent variables, in part to combat posterior collapse. While improvements in NLL scores have been demonstrated with BIVA using relatively deep encoder/decoders, this model is significantly more complex and difficult to analyze. On the analysis side, there have been various efforts to explicitly characterize posterior collapse in restricted settings. For example, Lucas et al. (2019) demonstrate that if γ is fixed to a sufficiently large value, then a VAE energy function with an affine decoder mean will have minima that overprune latent dimensions. A related linearized approximation to the VAE objective is analyzed in (Rolinek et al., 2019); however, collapsed latent dimensions are excluded and it remains somewhat unclear how the surrogate objective relates to the original. Posterior collapse has also been associated with data-dependent decoder covariance networks Σ_x(z; θ) ≠ γI (Mattei & Frellsen, 2018), which allow for degenerate solutions assigning infinite density to a single data point and a diffuse, collapsed density everywhere else. Finally, from the perspective of training dynamics, He et al. (2019) argue that a lagging inference network can also lead to posterior collapse.

3 TAXONOMY OF POSTERIOR COLLAPSE . Although there is now a vast literature on the various potential causes of posterior collapse, there remains ambiguity as to exactly what this phenomenon refers to. In this regard, we believe that it is critical to differentiate five subtle yet quite distinct scenarios that could reasonably fall under the generic rubric of posterior collapse:
(i) Latent dimensions of z that are not needed for providing good reconstructions of the training data are set to the prior, meaning q_φ(z_j|x) ≈ p(z_j) = N(0, 1) at any superfluous dimension j. Along other dimensions σ_z² will be near zero and μ_z will provide a usable predictive signal leading to accurate reconstructions of the training data. This case can actually be viewed as a desirable form of selective posterior collapse that, as argued in (Dai & Wipf, 2019), is a necessary (albeit not sufficient) condition for generating good samples.
(ii) The decoder variance γ is not learned but fixed to a large value (or, equivalently, a KL scaling parameter such as used by the β-VAE (Higgins et al., 2017) is set too large), such that the KL term from (2) is overly dominant, forcing most or all dimensions of z to follow the prior N(0, 1). In this scenario, the actual global optimum of the VAE energy (conditioned on γ being fixed) will lead to deleterious posterior collapse and the model reconstructions of the training data will be poor. In fact, even the original marginal log-likelihood can potentially default to a trivial/useless solution if γ is fixed too large, assigning a small marginal likelihood to the training data, provably so in the affine case (Lucas et al., 2019).
(iii) As mentioned previously, if the Gaussian decoder covariance is learned as a separate network structure (instead of simply Σ_x(z; θ) = γI), there can exist degenerate solutions that assign infinite density to a single data point and a diffuse, isotropic Gaussian elsewhere (Mattei & Frellsen, 2018). This implies that (4) can be unbounded from below at what amounts to a posterior-collapsed solution with bad reconstructions almost everywhere.
(iv) When powerful non-Gaussian decoders are used, and in particular those that can parameterize complex distributions regardless of the value of z (e.g., PixelCNN-based (Van den Oord et al., 2016)), it is possible for the VAE to assign high probability to the training data even if q_φ(z|x) = p(z) (Alemi et al., 2017; Bowman et al., 2015; Chen et al., 2016). This category of posterior collapse is quite distinct from categories (ii) and (iii) above in that, although the reconstructions are similarly poor, the associated NLL scores can still be good.
(v) The previous four categories of posterior collapse can all be directly associated with emergent properties of the VAE global minimum under various modeling conditions. In contrast, a fifth type of collapse exists that is the explicit progeny of bad VAE local minima. More specifically, as we will argue shortly, when deeper encoder/decoder networks are used, the risk of converging to bad, overregularized solutions increases.
The remainder of this paper will primarily focus on category (v), with brief mention of the other types for comparison purposes where appropriate. Our rationale for this selection bias is that, unlike the others, category (i) collapse is actually advantageous and hence need not be mitigated. In contrast, while category (ii) is undesirable, it can be avoided by learning γ. As for category (iii), this represents an unavoidable consequence of models with flexible decoder covariances capable of detecting outliers (Dai et al., 2019). In fact, even simpler inlier/outlier decomposition models such as robust PCA are inevitably at risk of this phenomenon (Candès et al., 2011). Regardless, when Σ_x(z; θ) = γI this problem goes away. And finally, we do not address category (iv) in depth simply because it is unrelated to the canonical Gaussian VAE models of continuous data that we have chosen to examine herein. Regardless, it is still worthwhile to explicitly differentiate these five types and bear them in mind when considering attempts to both explain and improve VAE models.
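In practice, this taxonomy suggests a simple diagnostic: compare the per-dimension KL of q_φ(z_j|x) against N(0, 1) with the quality of the reconstructions. The sketch below is a hypothetical check, assuming encoder outputs of shape [batch, κ]; the 0.01-nat threshold is an arbitrary illustrative choice, not a value from the paper.

import torch

def collapsed_dims(mu_z, log_var_z, thresh=0.01):
    # Per-dimension KL[q(z_j|x) || N(0,1)] = 0.5 * (sigma^2 + mu^2 - 1 - log sigma^2),
    # averaged over a batch of encoder outputs.
    kl_j = 0.5 * (log_var_z.exp() + mu_z.pow(2) - 1.0 - log_var_z).mean(dim=0)
    return kl_j < thresh   # True at dimensions that have collapsed to the prior

If only some dimensions fall below the threshold while reconstructions remain accurate, the model sits in the benign regime (i); if nearly all do and reconstructions are poor, the collapse is of the deleterious kind analyzed in Sections 4 through 6.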
The paper theoretically investigates the role of local optima of the variational objective in ignoring latent variables (leading to posterior collapse) in variational autoencoders. The paper first discusses various potential causes of posterior collapse before diving deeper into a particular cause: local optima. It considers a class of near-affine decoders and characterises the relationship between the variance (gamma) in the likelihood and local optima. It then extends this discussion to deeper architectures and vanilla autoencoders, illustrating how collapse can arise when the reconstruction cost is high. The paper presents several experiments to illustrate this issue.
This paper is clearly written and well structured. After categorizing different causes of posterior collapse, the authors present a theoretical analysis of one such cause, extending beyond the linear case covered in existing work. The authors then extend the analysis to the deep VAE setting and show that failures of the VAE may be accounted for by issues in the network architecture itself, which would also be present when training a deterministic autoencoder.
Why Does Hierarchy (Sometimes) Work So Well in Reinforcement Learning?
1 INTRODUCTION . Many real-world tasks may be decomposed into natural hierarchical structures . To navigate a large building , one first needs to learn how to walk and turn before combining these behaviors to achieve robust navigation ; to wash dishes , one first needs to learn basic object grasping and handling before composing a sequence of these primitives to successfully clean a collection of plates . Accordingly , hierarchy is an important topic in the context of reinforcement learning ( RL ) , in which an agent learns to solve tasks from trial-and-error experience , and the use of hierarchical reinforcement learning ( HRL ) has long held the promise to elevate the capabilities of RL agents to more complex tasks ( Dayan & Hinton , 1993 ; Schmidhuber , 1993 ; Parr & Russell , 1998 ; Barto & Mahadevan , 2003 ) . Recent work has made much progress towards delivering on this promise ( Levy et al. , 2017 ; Frans et al. , 2018 ; Vezhnevets et al. , 2017 ; Nachum et al. , 2019 ) . For example , Nachum et al . ( 2018a ; b ; 2019 ) use HRL to solve both simulated and real-world quadrupedal manipulation tasks , whereas state-of-the-art non-hierarchical methods are shown to make negligible progress on the same tasks . Levy et al . ( 2017 ) demonstrate similar results on complex navigation tasks , showing that HRL can find good policies with 3-5x fewer environment interactions than non-hierarchical methods . While the empirical success of HRL is clear , the underlying reasons for this success are more difficult to explain . Prior works have motivated the use of HRL with a number of intuitive arguments : high-level actions are proposed at a lower temporal frequency than the atomic actions of the environment , effectively shortening the length of episodes ; high-level actions often correspond to more semantically meaningful behaviors than the atomic actions of the environment , so both exploration and learning in this high-level action space is easier ; and so on . These claims are easy to understand intuitively , and some may even be theoretically motivated ( e.g. , shorter episodes are indeed easier to learn ; see Strehl et al . ( 2009 ) ; Azar et al . ( 2017 ) ) . On the other hand , the gap between any theoretical setting and the empirical settings in which these hierarchical systems excel is wide . Furthermore , in Markovian systems , there is no theoretical representational benefit to imposing temporally extended , hierarchical structures , since non-hierarchical policies that make a decision at every step can be optimal ( Puterman , 2014 ) . Nevertheless , the empirical advantages of hierarchy are self-evident in a number of recent works , which raises the question , why is hierarchy beneficial in these settings ? Which of the claimed benefits of hierarchy contribute to its empirical successes ? In this work , we answer these questions via empirical analysis on a suite of tasks encompassing locomotion , navigation , and manipulation . We devise a series of experiments to isolate and evaluate the claimed benefits of HRL . Surprisingly , we find that most of the empirical benefit of hierarchy in our considered settings can be attributed to improved exploration . Given this observation , we propose a number of exploration methods that are inspired by hierarchy but are much simpler to use and implement . These proposed exploration methods enable non-hierarchical RL agents to achieve performance competitive with state-of-the-art HRL . 
Although our analysis is empirical and thus our conclusions are limited to the tasks we consider , we believe that our findings are important to the field of HRL . Our findings reveal that only a subset of the claimed benefits of hierarchy are achievable by current state-of-the-art methods , even on tasks that were previously believed to be approachable only by HRL methods . Thus , more work must be done to devise hierarchical systems that achieve all of the claimed benefits . We also hope that our findings can provide useful insights for future research on exploration in RL . Our findings show that exploration research can be informed by successful techniques in HRL to realize more temporally extended and semantically meaningful exploration strategies . 2 RELATED WORK . Due to its intuitive and biological appeal ( Badre & Frank , 2011 ; Botvinick , 2012 ) , the field of HRL has been an active research topic in the machine learning community for many years . A number of different architectures for HRL have been proposed in the literature ( Dayan & Hinton , 1993 ; Kaelbling , 1993 ; Parr & Russell , 1998 ; Sutton et al. , 1999 ; Dietterich , 2000 ; Florensa et al. , 2017 ; Heess et al. , 2017 ) . We consider two paradigms specifically – the options framework ( Precup , 2000 ) and goal-conditioned hierarchies ( Nachum et al. , 2018b ) , due to their impressive success in recent work ( Frans et al. , 2018 ; Levy et al. , 2017 ; Nachum et al. , 2018a ; 2019 ) , though an examination of other architectures is an important direction for future research . One traditional approach to better understanding and justifying the use of an algorithm is through theoretical analysis . In tabular environments , there exist bounds on the sample complexity of learning a near-optimal policy dependent on the number of actions and effective episode horizon ( Brunskill & Li , 2014 ) . This bound can be used to motivate HRL when the high-level action space is smaller than the atomic action space ( smaller number of actions ) or the higher-level policy operates at a temporal abstraction greater than one ( shorter effective horizon ) . Previous work has also analyzed HRL ( specifically , the options framework ) in the more general setting of continuous states ( Mann & Mannor , 2014 ) . However , these theoretical statements rely on having access to near-optimal options , which are typically not available in practice . Moreover , while simple synthetic tasks can be constructed to demonstrate these theoretical benefits , it is unclear if any of these benefits actually play a role in empirical successes demonstrated in more complex environments . In contrast , our empirical analysis is specifically devised to isolate and evaluate the observed practical benefits of HRL . Our approach to isolating and evaluating the benefits of hierarchy via empirical analysis is partly inspired by previous empirical analysis on the benefits of options ( Jong et al. , 2008 ) . Following a previous flurry of research , empirical demonstrations , and claimed intuitive benefits of options in the early 2000 ’ s , Jong et al . ( 2008 ) set out to systematically evaluate these techniques . Similar to our findings , exploration was identified as a key benefit , although realizing this benefit relied on the use of specially designed options and excessive prior knowledge of the task . 
Most of the remaining observed empirical benefits were found to be due to the use of experience replay (Lin, 1992), and the same performance could be achieved with experience replay alone on a non-hierarchical agent. Nowadays, experience replay is a ubiquitous component of RL algorithms. Moreover, the hierarchical paradigms of today are largely model-free and achieve more impressive practical results than the gridworld tasks evaluated by Jong et al. (2008). Therefore, we present our work as a recalibration of the field's understanding with regards to current state-of-the-art hierarchical methods.

3 HIERARCHICAL REINFORCEMENT LEARNING . We briefly summarize the HRL methods and environments we evaluate on. We consider the typical two-layer hierarchical design, in which a higher-level policy solves a task by directing one or more lower-level policies. In the simplest case, the higher-level policy chooses a new high-level action every c timesteps (we restrict our analysis to hierarchies using a fixed c, although evaluating variable-length temporal abstractions is an important avenue for future work). In the options framework, the high-level action is a discrete choice, indicating which of m lower-level policies (called options) to activate for the next c steps. In goal-conditioned hierarchies, there is a single goal-conditioned lower-level policy, and the high-level action is a continuous-valued goal state which the lower level is directed to reach. Lower-level policy training operates differently in each of the HRL paradigms. For the options framework, we follow Bacon et al. (2017); Frans et al. (2018), training each lower-level policy to maximize environment reward. We train m separate Q-value functions to minimize the errors

\mathcal{E}(s_t, a_t, R_t, s_{t+1}) = \big(Q_{lo,m}(s_t, a_t) - R_t - \gamma Q_{lo,m}(s_{t+1}, \pi_{lo,m}(s_{t+1}))\big)^2, \qquad (1)

over single-step transitions, and the m option policies are learned to maximize this Q-value, Q_{lo,m}(s_t, \pi_{lo,m}(s_t)). In contrast, for HIRO (Nachum et al., 2018a) and HAC (Levy et al., 2017), the lower-level policy and Q-function are goal-conditioned. That is, a Q-function is learned to minimize the errors

\mathcal{E}(s_t, g_t, a_t, r_t, s_{t+1}, g_{t+1}) = \big(Q_{lo}(s_t, g_t, a_t) - r_t - \gamma Q_{lo}(s_{t+1}, g_{t+1}, \pi_{lo}(s_{t+1}, g_{t+1}))\big)^2, \qquad (2)

over single-step transitions, where g_t is the current goal (high-level action updated every c steps) and r_t is an intrinsic reward measuring negative L2 distance to the goal. The lower-level policy is then trained to maximize the Q-value Q_{lo}(s_t, g_t, \pi_{lo}(s_t, g_t)). For higher-level training we follow Nachum et al. (2018a); Frans et al. (2018) and train based on temporally-extended c-step transitions (s_t, g_t, R_{t:t+c-1}, s_{t+c}), where g_t is a high-level action (discrete identifier for options, goal for goal-conditioned hierarchies) and R_{t:t+c-1} = \sum_{k=0}^{c-1} R_{t+k} is the c-step sum of environment rewards. That is, a Q-value function is learned to minimize the errors

\mathcal{E}(s_t, g_t, R_{t:t+c-1}, s_{t+c}) = \big(Q_{hi}(s_t, g_t) - R_{t:t+c-1} - \gamma Q_{hi}(s_{t+c}, \pi_{hi}(s_{t+c}))\big)^2. \qquad (3)

In the options framework where high-level actions are discrete, the higher-level policy is simply \pi_{hi}(s) := \mathrm{argmax}_g Q_{hi}(s, g). In goal-conditioned HRL where high-level actions are continuous, the higher-level policy is learned to maximize the Q-value Q_{hi}(s, \pi_{hi}(s)). Note that higher-level training in HRL is distinct from the use of multi-step rewards or n-step returns
(Hessel et al., 2018), which propose to train a non-hierarchical agent with respect to transitions (s_t, a_t, R_{t:t+c_{rew}-1}, s_{t+c_{rew}}); i.e., the Q-value of a non-HRL policy is learned to minimize

\mathcal{E}(s_t, a_t, R_{t:t+c_{rew}-1}, s_{t+c_{rew}}) = \big(Q(s_t, a_t) - R_{t:t+c_{rew}-1} - \gamma Q(s_{t+c_{rew}}, \pi(s_{t+c_{rew}}))\big)^2, \qquad (4)

while the policy is learned to choose atomic actions to maximize Q(s, π(s)). In contrast, in HRL both the rewards and the actions g_t used in the Q-value regression loss are temporally extended. However, as we will see in Section 5.2, the use of multi-step rewards alone can achieve almost all of the benefits associated with hierarchical training (controlling for exploration benefits). For our empirical analysis, we consider four difficult tasks involving simulated robot locomotion, navigation, and object manipulation (see Figure 1). To alleviate issues of goal representation learning in goal-conditioned HRL, we fix the goals to be relative x, y coordinates of the agent, which are a naturally good representation for our considered tasks. We note that this is only done to better control our empirical analysis, and that goal-conditioned HRL can achieve good performance on our considered tasks without this prior knowledge (Nachum et al., 2018b). We present the results of two goal-conditioned HRL methods, HIRO (Nachum et al., 2018a) and HIRO with goal relabelling (inspired by HAC; Levy et al. (2017)), and an options implementation based on Frans et al. (2018) in Figure 1. HRL methods can achieve strong performance on these tasks, while non-hierarchical methods struggle to make any progress at all. In this work, we strive to isolate and evaluate the key properties of HRL which lead to this stark difference.
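As a concrete illustration of the temporally extended transitions in (3) and the multi-step targets in (4), consider the following minimal sketch; the buffer layout and names are our own illustrative assumptions, not the authors' implementation.

def c_step_transitions(states, goals, rewards, c):
    # Assemble (s_t, g_t, R_{t:t+c-1}, s_{t+c}) tuples for the high-level critic of Eq. (3);
    # states has length T+1 and rewards has length T for a rollout of T environment steps.
    transitions = []
    for t in range(0, len(rewards) - c + 1, c):
        R = sum(rewards[t + k] for k in range(c))   # undiscounted c-step reward sum
        transitions.append((states[t], goals[t], R, states[t + c]))
    return transitions

def multi_step_td_target(rewards, t, c_rew, gamma, q_next):
    # Non-hierarchical multi-step target of Eq. (4):
    # R_{t:t+c_rew-1} + gamma * Q(s_{t+c_rew}, pi(s_{t+c_rew})), with q_next supplying the
    # bootstrap value Q(s_{t+c_rew}, pi(s_{t+c_rew})).
    R = sum(rewards[t + k] for k in range(c_rew))
    return R + gamma * q_next

Following Equations (3) and (4), the reward sums are undiscounted within the window and a single factor of γ is applied at the bootstrap, mirroring the losses above.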
This is an interesting paper, as it tries to understand the role of hierarchical methods (such as options and higher-level controllers) in RL. The core contribution of the paper is to understand and evaluate the benefits often claimed for hierarchical methods, and it finds that the core benefit in fact comes from exploration. The paper studies hierarchical methods to eventually draw the conclusion that HRL leads to better exploration-driven behaviour in complex tasks.
This paper evaluates the benefits of using hierarchical RL (HRL) methods compared to regular shallow RL methods for fully observed MDPs. The goal of the work is to isolate and evaluate the benefits of using HRL on different control tasks (AntMaze, AntPush, AntBlock, AntBlockMaze). The authors find that the major benefit of HRL comes in the form of better exploration, rather than easier policy learning. They claim that the use of multi-step rewards alone is sufficient to provide the learning benefits associated with HRL. They also provide two exploration methods that are not hierarchical in nature but achieve similar performance: a) Explore and Exploit and b) Switching Ensemble.
Robust Learning with Jacobian Regularization
1 INTRODUCTION . Stability analysis lies at the heart of many scientific and engineering disciplines . In an unstable system , infinitesimal perturbations amplify and have substantial impacts on the performance of the system . It is especially critical to perform a thorough stability analysis on complex engineered systems deployed in practice , or else what may seem like innocuous perturbations can lead to catastrophic consequences such as the Tacoma Narrows Bridge collapse ( Amman et al. , 1941 ) and the Space Shuttle Challenger disaster ( Feynman and Leighton , 2001 ) . As a rule of thumb , well-engineered systems should be robust against any input shifts – expected or unexpected . Most models in machine learning are complex nonlinear systems and thus no exception to this rule . For instance , a reliable model must withstand shifts from training data to unseen test data , bridging the so-called generalization gap . This problem is severe especially when training data are strongly biased with respect to test data , as in domain-adaptation tasks , or when only sparse sampling of a true underlying distribution is available , as in few-shot learning . Any instability in the system can further be exploited by adversaries to render trained models utterly useless ( Szegedy et al. , 2013 ; Goodfellow et al. , 2014 ; Moosavi-Dezfooli et al. , 2016 ; Papernot et al. , 2016a ; Kurakin et al. , 2016 ; Madry et al. , 2017 ; Carlini and Wagner , 2017 ; Gilmer et al. , 2018 ) . It is thus of utmost importance to ensure that models be stable against perturbations in the input space . Various regularization schemes have been proposed to improve the stability of models . For linear classifiers and support vector machines ( Cortes and Vapnik , 1995 ) , this goal is attained via an L2 regularization which maximizes classification margins and reduces overfitting to the training data . This regularization technique has been widely used for neural networks as well and shown to promote generalization ( Hinton , 1987 ; Krogh and Hertz , 1992 ; Zhang et al. , 2018 ) . However , it remains unclear whether or not L2 regularization increases classification margins and stability of a network , especially for deep architectures with intertwining nonlinearity . In this paper , we suggest ensuring robustness of nonlinear models via a Jacobian regularization scheme . We illustrate the intuition behind our regularization approach by visualizing the classification margins of a simple MNIST digit classifier in Figure 1 ( see Appendix A for more ) . Decision cells of a neural network , trained without regularization , are very rugged and can be unpredictably unstable ( Figure 1a ) . On average , L2 regularization smooths out these rugged boundaries but does not necessarily increase the size of decision cells , i.e. , does not increase classification margins ( Figure 1b ) . In contrast , Jacobian regularization pushes decision boundaries farther away from each training data point , enlarging decision cells and reducing instability ( Figure 1c ) . The goal of the paper is to promote Jacobian regularization as a generic scheme for increasing robustness while also being agnostic to the architecture , domain , or task to which it is applied . In support of this , after presenting the Jacobian regularizer , we evaluate its effect both in isolation as well as in combination with multiple existing approaches that are intended to promote robustness and generalization . 
Our intention is to showcase the ease of use and complementary nature of our proposed regularization. Domain experts in each field should be able to quickly incorporate our regularizer into their learning pipeline as a simple way of improving the performance of their state-of-the-art system. The rest of the paper is structured as follows. In Section 2 we motivate the usage of Jacobian regularization and develop a computationally efficient algorithm for its implementation. Next, the effectiveness of this regularizer is empirically studied in Section 3. As regularizers constrain the learning problem, we first verify that the introduction of our regularizer does not adversely affect learning in the case when input data remain unperturbed. Robustness against both random and adversarial perturbations is then evaluated and shown to receive significant improvements from the Jacobian regularizer. We contrast our work with the literature in Section 4 and conclude in Section 5.

2 METHOD . Here we introduce a scheme for minimizing the norm of an input-output Jacobian matrix as a technique for regularizing learning with stochastic gradient descent (SGD). We begin by formally defining the input-output Jacobian and then explain an efficient algorithm for computing the Jacobian regularizer using standard machine learning frameworks.

2.1 STABILITY ANALYSIS AND INPUT-OUTPUT JACOBIAN . Let us consider the set of classification functions, f, which take a vectorized sensory signal, x ∈ R^I, as input and output a score vector, z = f(x) ∈ R^C, where each element, z_c, is associated with the likelihood that the input is from category c. (Throughout the paper, the vector z denotes the logit before applying a softmax layer. The probabilistic output of the softmax, p_c, relates to z_c via p_c \equiv e^{z_c/T} / \sum_{c'} e^{z_{c'}/T} with temperature T, typically set to unity.) In this work, we focus on learning this classification function as a neural network with model parameters θ, though our findings should generalize to any parameterized function. Our goal is to learn the model parameters that minimize the classification objective on the available training data while also being stable against perturbations in the input space so as to increase classification margins. The input-output Jacobian matrix naturally emerges in the stability analysis of the model predictions against input perturbations. Let us consider a small perturbation vector, ε ∈ R^I, of the same dimension as the input. For a perturbed input x̃ = x + ε, the corresponding output values shift to

\tilde{z}_c = f_c(x + \epsilon) = f_c(x) + \sum_{i=1}^{I} \epsilon_i \cdot \frac{\partial f_c}{\partial x_i}(x) + O(\epsilon^2) = z_c + \sum_{i=1}^{I} J_{c;i}(x) \cdot \epsilon_i + O(\epsilon^2), \qquad (1)

where in the second equality the function was Taylor-expanded with respect to the input perturbation, and in the third equality the input-output Jacobian matrix,

J_{c;i}(x) \equiv \frac{\partial f_c}{\partial x_i}(x), \qquad (2)

was introduced. As the function f is typically almost everywhere analytic, for sufficiently small perturbations the higher-order terms can be neglected and the stability of the prediction is governed by the input-output Jacobian.

2.2 ROBUSTNESS THROUGH INPUT-OUTPUT JACOBIAN MINIMIZATION . From Equation (1), it is straightforward to see that the larger the components of the Jacobian are, the more unstable the model prediction is with respect to input perturbations.
A natural way to reduce this instability is to decrease the magnitude of each component of the Jacobian matrix, which can be realized by minimizing the square of the Frobenius norm of the input-output Jacobian,

||J(x)||_F^2 \equiv \sum_{i,c} [J_{c;i}(x)]^2. \qquad (3)

(Minimizing the Frobenius norm will also reduce the L1-norm, since these norms satisfy the inequalities ||J(x)||_F \leq \sum_{i,c} |J_{c;i}(x)| \leq \sqrt{IC}\, ||J(x)||_F. We prefer to minimize the Frobenius norm over the L1-norm because the ability to express the former as a trace leads to an efficient algorithm; see Equations (6) through (8).) For linear models, this reduces exactly to the L2 regularization that increases classification margins of these models. For nonlinear models, however, Jacobian regularization does not equate to L2 regularization, and we expect these schemes to affect models differently. In particular, predictions made by models trained with the Jacobian regularization do not vary much as inputs get perturbed and hence decision cells enlarge on average. This increase in stability granted by the Jacobian regularization is visualized in Figure 1, which depicts a cross section of the decision cells for the MNIST digit classification problem using a nonlinear neural network (LeCun et al., 1998). The Jacobian regularizer in Equation (3) can be combined with any loss objective used for training parameterized models. Concretely, consider a supervised learning problem modeled by a neural network and optimized with SGD. At each iteration, a mini-batch B consists of a set of labeled examples, {x_α, y_α}_{α∈B}, and a supervised loss function, L_super, is optimized, possibly together with some other regularizer R(θ) (such as the L2 regularizer \frac{\lambda_{WD}}{2} \|\theta\|^2), over the function parameter space, by minimizing the following bare loss function

\mathcal{L}_{bare}(\{x_\alpha, y_\alpha\}_{\alpha \in B}; \theta) = \frac{1}{|B|} \sum_{\alpha \in B} \mathcal{L}_{super}[f(x_\alpha); y_\alpha] + \mathcal{R}(\theta). \qquad (4)

To integrate our Jacobian regularizer into training, one instead optimizes the following joint loss

\mathcal{L}^{B}_{joint}(\theta) = \mathcal{L}_{bare}(\{x_\alpha, y_\alpha\}_{\alpha \in B}; \theta) + \frac{\lambda_{JR}}{2} \left[ \frac{1}{|B|} \sum_{\alpha \in B} ||J(x_\alpha)||_F^2 \right], \qquad (5)

where λ_JR is a hyperparameter that determines the relative importance of the Jacobian regularizer. By minimizing this joint loss with sufficient training data and a properly chosen λ_JR, we expect models to learn both correctly and robustly.

2.3 EFFICIENT APPROXIMATE ALGORITHM . In the previous section we have argued for minimizing the Frobenius norm of the input-output Jacobian to improve robustness during learning. The main question that follows is how to efficiently compute and implement this regularizer in such a way that its optimization can seamlessly be incorporated into any existing learning paradigm. Recently, Sokolić et al. (2017) also explored the idea of regularizing the Jacobian matrix during learning, but only provided an inefficient algorithm requiring an increase in computational cost that scales linearly with the number of output classes, C, compared to the bare optimization problem (see explanation below). In practice, such an overhead will be prohibitively expensive for many large-scale learning problems, e.g., ImageNet classification has C = 1000 target classes (Deng et al., 2009). (Our scheme, in contrast, can be used for ImageNet; see Appendix H.)
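For reference, the exact regularizer in (3) can be computed by backpropagating once per output class, which is essentially the expensive scheme discussed above. The following is a minimal PyTorch sketch assuming a model that returns logits of shape [batch, C]; it is illustrative rather than the authors' code.

import torch

def jacobian_frobenius_exact(model, x):
    # Sum over the C orthonormal basis vectors e of ||e . J||^2, cf. Eq. (6);
    # cost scales linearly with the number of classes C.
    x = x.clone().requires_grad_(True)
    z = model(x)                              # logits, shape [batch, C]
    frob2 = 0.0
    for c in range(z.shape[1]):
        grad_c, = torch.autograd.grad(z[:, c].sum(), x,
                                      retain_graph=True, create_graph=True)
        frob2 = frob2 + grad_c.pow(2).sum() / x.shape[0]   # batch mean of squared row norms
    return frob2   # differentiable, so it can be added to the joint loss of Eq. (5)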
2.3 EFFICIENT APPROXIMATE ALGORITHM. In the previous section we argued for minimizing the Frobenius norm of the input-output Jacobian to improve robustness during learning. The main question that follows is how to efficiently compute and implement this regularizer in such a way that its optimization can seamlessly be incorporated into any existing learning paradigm. Recently, Sokolić et al. (2017) also explored the idea of regularizing the Jacobian matrix during learning, but only provided an inefficient algorithm requiring an increase in computational cost that scales linearly with the number of output classes, $C$, compared to the bare optimization problem (see explanation below). In practice, such an overhead will be prohibitively expensive for many large-scale learning problems, e.g., ImageNet classification has $C = 1000$ target classes (Deng et al., 2009). (Our scheme, in contrast, can be used for ImageNet: see Appendix H.)

Here, we offer a different solution that makes use of random projections to efficiently approximate the Frobenius norm of the Jacobian.³ This only introduces a constant time overhead and can be made very small in practice. When considering such an approximate algorithm, one naively must trade off efficiency against accuracy for computing the Jacobian, which ultimately trades computation time for robustness. Prior work by Varga et al. (2017) briefly considered an approach based on random projection, but without providing any analysis of the quality of the Jacobian approximation. Here, we describe our algorithm, analyze its theoretical convergence guarantees, and verify empirically that there is only a negligible difference in model solution quality between training with the exact computation of the Jacobian and training with the approximate algorithm, even when using a single random projection (see Figure 2).

Given that optimization is commonly gradient based, it is essential to efficiently compute gradients of the joint loss in Equation (5) and in particular of the squared Frobenius norm of the Jacobian. First, we note that automatic differentiation systems implement a function that computes the derivative of a vector such as $z$ with respect to any variables on which it depends, if the vector is first contracted with another fixed vector. To take advantage of this functionality, we rewrite the squared Frobenius norm as
$$||J(x)||_F^2 = \mathrm{Tr}\left(J J^T\right) = \sum_{\{e\}} e J J^T e^T = \sum_{\{e\}} \left[ \frac{\partial (e \cdot z)}{\partial x} \right]^2, \quad (6)$$
where a constant orthonormal basis, $\{e\}$, of the $C$-dimensional output space was inserted in the second equality, and the last equality follows from definition (2) and from moving the constant vector inside the derivative. For each basis vector $e$, the quantity in the last parenthesis can then be efficiently computed by differentiating the product, $e \cdot z$, with respect to the input, $x$. Recycling that computational graph, the derivative of the squared Frobenius norm with respect to the model parameters, $\theta$, can be computed through backpropagation with any use of automatic differentiation. Sokolić et al. (2017) essentially considers this exact computation, which requires backpropagating gradients through the model $C$ times to iterate over the $C$ orthonormal basis vectors $\{e\}$. Ultimately, this incurs computational overhead that scales linearly with the output dimension $C$.

Instead, we further rewrite Equation (6) in terms of the expectation of an unbiased estimator
$$||J(x)||_F^2 = C \, \mathbb{E}_{\hat{v} \sim S^{C-1}} \left[ ||\hat{v} \cdot J||^2 \right], \quad (7)$$
where the random vector $\hat{v}$ is drawn from the $(C-1)$-dimensional unit sphere $S^{C-1}$. Using this relationship, we can use samples of $n_{\text{proj}}$ random vectors $\hat{v}^{\mu}$ to estimate the square of the norm as
$$||J(x)||_F^2 \approx \frac{1}{n_{\text{proj}}} \sum_{\mu=1}^{n_{\text{proj}}} \left[ \frac{\partial (\hat{v}^{\mu} \cdot z)}{\partial x} \right]^2, \quad (8)$$
which converges to the true value as $O(n_{\text{proj}}^{-1/2})$. The derivation of Equation (7) and the calculation of its convergence make use of random-matrix techniques and are provided in Appendix B. Finally, we expect that the fluctuations of our estimator can be suppressed by cancellations within a mini-batch: with nearly independent and identically distributed samples in a mini-batch of size $|B| \gg 1$, we expect the error in our estimate to be of order $(n_{\text{proj}} |B|)^{-1/2}$.

³In Appendix C, we give an alternative method for computing gradients of the Jacobian regularizer by using an analytically derived formula.
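The unbiased estimator in Equation (7) and its $O(n_{\text{proj}}^{-1/2})$ convergence are easy to check numerically; the snippet below is our quick verification on a random stand-in Jacobian.

import numpy as np

rng = np.random.default_rng(0)
C, I = 10, 50
J = rng.standard_normal((C, I))  # stand-in for an input-output Jacobian
exact = (J ** 2).sum()           # ||J||_F^2

for n_proj in (1, 10, 100, 1000):
    v = rng.standard_normal((n_proj, C))
    v /= np.linalg.norm(v, axis=1, keepdims=True)  # uniform samples on the unit sphere S^{C-1}
    est = C * ((v @ J) ** 2).sum(axis=1).mean()    # Monte Carlo estimate of Equation (7)
    print(n_proj, abs(est - exact) / exact)        # relative error shrinks roughly like n_proj**-0.5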
Algorithm 1: Efficient computation of the approximate gradient of the Jacobian regularizer.
Inputs: mini-batch of $|B|$ examples $x_\alpha$, model outputs $z_\alpha$, and number of projections $n_{\text{proj}}$.
Outputs: square of the Frobenius norm of the Jacobian $J_F$ and its gradient $\nabla_\theta J_F$.
  $J_F = 0$
  for $i = 1$ to $n_{\text{proj}}$ do
    $\{v_{\alpha c}\} \sim \mathcal{N}(0, I)$  ▷ $(|B|, C)$-dim tensor with each element sampled from a standard normal.
    $\hat{v}_\alpha = v_\alpha / ||v_\alpha||$  ▷ Uniform sampling from the unit sphere for each $\alpha$.
    $z_{\text{flat}} = \text{Flatten}(\{z_\alpha\})$; $v_{\text{flat}} = \text{Flatten}(\{\hat{v}_\alpha\})$  ▷ Flatten for parallelism.
    $J_v = \partial (z_{\text{flat}} \cdot v_{\text{flat}}) / \partial x_\alpha$
    $J_F \mathrel{+}= C\,||J_v||^2 / (n_{\text{proj}} |B|)$
  end for
  $\nabla_\theta J_F = \partial J_F / \partial \theta$
  return $J_F$, $\nabla_\theta J_F$

In fact, as shown in Figure 2, with a mini-batch size of $|B| = 100$, a single projection yields model performance that is nearly identical to the exact method, with the computational cost reduced by orders of magnitude. The complete algorithm is presented in Algorithm 1. With a straightforward implementation in PyTorch (Paszke et al., 2017) and $n_{\text{proj}} = 1$, we observed the computational cost of training with the Jacobian regularization to be only ≈ 1.3 times that of the standard SGD computation cost, while retaining all the practical benefits of the expensive exact method.⁴
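A minimal PyTorch rendering of Algorithm 1 could look as follows. It follows the pseudocode (per-example unit-sphere projections, one flattened vector-Jacobian product per projection), but it is our sketch, not the authors' released code, and the function name is ours.

import torch

def jacobian_reg(x, z, n_proj=1):
    """Estimate (1/|B|) sum_a ||J(x_a)||_F^2 as in Algorithm 1.
    x: inputs with requires_grad=True, shape (B, I); z: logits, shape (B, C)."""
    B, C = z.shape
    JF = 0.0
    for _ in range(n_proj):
        v = torch.randn(B, C, device=z.device)
        v = v / v.norm(dim=1, keepdim=True)  # uniform on the unit sphere, per example
        # One backward pass computes the vector-Jacobian products for the whole batch.
        Jv, = torch.autograd.grad((z * v).sum(), x, create_graph=True)
        JF = JF + C * (Jv ** 2).sum() / (n_proj * B)
    return JF

Here create_graph=True keeps the graph alive so that a subsequent loss.backward() call also produces $\nabla_\theta J_F$, matching the last line of Algorithm 1.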
Stability is one of the important aspects of machine learning. This paper views Jacobian regularization as a scheme for improving stability, and studies its behavior under random input perturbations, adversarial input perturbations, and train/test distribution shift, as well as simply as a regularization tool in the classical setting without any distribution shift or perturbation. Several related works have already proposed Jacobian regularization, but previous works had neither an efficient algorithm nor a theoretical convergence guarantee. This paper offers a solution that efficiently approximates the Frobenius norm of the Jacobian and also establishes the convergence rate of the proposed estimator. Various experiments characterize the behavior of Jacobian regularization and show that it is robust.
SP:385a392e6d055abd65a737f3c5be58105778ac11
Robust Learning with Jacobian Regularization
1 INTRODUCTION. Stability analysis lies at the heart of many scientific and engineering disciplines. In an unstable system, infinitesimal perturbations amplify and have substantial impacts on the performance of the system. It is especially critical to perform a thorough stability analysis on complex engineered systems deployed in practice, or else what may seem like innocuous perturbations can lead to catastrophic consequences such as the Tacoma Narrows Bridge collapse (Amman et al., 1941) and the Space Shuttle Challenger disaster (Feynman and Leighton, 2001). As a rule of thumb, well-engineered systems should be robust against any input shifts – expected or unexpected.

Most models in machine learning are complex nonlinear systems and thus no exception to this rule. For instance, a reliable model must withstand shifts from training data to unseen test data, bridging the so-called generalization gap. This problem is especially severe when training data are strongly biased with respect to test data, as in domain-adaptation tasks, or when only sparse sampling of the true underlying distribution is available, as in few-shot learning. Any instability in the system can further be exploited by adversaries to render trained models utterly useless (Szegedy et al., 2013; Goodfellow et al., 2014; Moosavi-Dezfooli et al., 2016; Papernot et al., 2016a; Kurakin et al., 2016; Madry et al., 2017; Carlini and Wagner, 2017; Gilmer et al., 2018). It is thus of utmost importance to ensure that models are stable against perturbations in the input space.

Various regularization schemes have been proposed to improve the stability of models. For linear classifiers and support vector machines (Cortes and Vapnik, 1995), this goal is attained via an L2 regularization which maximizes classification margins and reduces overfitting to the training data. This regularization technique has been widely used for neural networks as well and has been shown to promote generalization (Hinton, 1987; Krogh and Hertz, 1992; Zhang et al., 2018). However, it remains unclear whether or not L2 regularization increases the classification margins and stability of a network, especially for deep architectures with intertwined nonlinearities.

In this paper, we suggest ensuring the robustness of nonlinear models via a Jacobian regularization scheme. We illustrate the intuition behind our regularization approach by visualizing the classification margins of a simple MNIST digit classifier in Figure 1 (see Appendix A for more). Decision cells of a neural network trained without regularization are very rugged and can be unpredictably unstable (Figure 1a). On average, L2 regularization smooths out these rugged boundaries but does not necessarily increase the size of decision cells, i.e., does not increase classification margins (Figure 1b). In contrast, Jacobian regularization pushes decision boundaries farther away from each training data point, enlarging decision cells and reducing instability (Figure 1c).

The goal of the paper is to promote Jacobian regularization as a generic scheme for increasing robustness while also being agnostic to the architecture, domain, or task to which it is applied. In support of this, after presenting the Jacobian regularizer, we evaluate its effect both in isolation and in combination with multiple existing approaches intended to promote robustness and generalization.
Our intention is to showcase the ease of use and complementary nature of our proposed regularization. Domain experts in each field should be able to quickly incorporate our regularizer into their learning pipeline as a simple way of improving the performance of their state-of-the-art system. The rest of the paper is structured as follows. In Section 2 we motivate the usage of Jacobian regularization and develop a computationally efficient algorithm for its implementation. Next, the effectiveness of this regularizer is empirically studied in Section 3. As regularizers constrain the learning problem, we first verify that the introduction of our regularizer does not adversely affect learning in the case when input data remain unperturbed. Robustness against both random and adversarial perturbations is then evaluated and shown to receive significant improvements from the Jacobian regularizer. We contrast our work with the literature in Section 4 and conclude in Section 5.

2 METHOD. Here we introduce a scheme for minimizing the norm of an input-output Jacobian matrix as a technique for regularizing learning with stochastic gradient descent (SGD). We begin by formally defining the input-output Jacobian and then explain an efficient algorithm for computing the Jacobian regularizer using standard machine learning frameworks.

2.1 STABILITY ANALYSIS AND INPUT-OUTPUT JACOBIAN. Let us consider the set of classification functions, $f$, which take a vectorized sensory signal, $x \in \mathbb{R}^I$, as input and output a score vector, $z = f(x) \in \mathbb{R}^C$, where each element, $z_c$, is associated with the likelihood that the input is from category $c$.¹ In this work, we focus on learning this classification function as a neural network with model parameters $\theta$, though our findings should generalize to any parameterized function. Our goal is to learn the model parameters that minimize the classification objective on the available training data while also being stable against perturbations in the input space so as to increase classification margins.

¹Throughout the paper, the vector $z$ denotes the logit before applying a softmax layer. The probabilistic output of the softmax $p_c$ relates to $z_c$ via $p_c \equiv \frac{e^{z_c/T}}{\sum_{c'} e^{z_{c'}/T}}$ with temperature $T$, typically set to unity.

The input-output Jacobian matrix naturally emerges in the stability analysis of the model predictions against input perturbations. Let us consider a small perturbation vector, $\epsilon \in \mathbb{R}^I$, of the same dimension as the input. For a perturbed input $\tilde{x} = x + \epsilon$, the corresponding output values shift to
$$\tilde{z}_c = f_c(x + \epsilon) = f_c(x) + \sum_{i=1}^{I} \epsilon_i \cdot \frac{\partial f_c}{\partial x_i}(x) + O(\epsilon^2) = z_c + \sum_{i=1}^{I} J_{c;i}(x) \cdot \epsilon_i + O(\epsilon^2) \quad (1)$$
where in the second equality the function was Taylor-expanded with respect to the input perturbation and in the third equality the input-output Jacobian matrix,
$$J_{c;i}(x) \equiv \frac{\partial f_c}{\partial x_i}(x), \quad (2)$$
was introduced. As the function $f$ is typically almost everywhere analytic, for sufficiently small perturbations the higher-order terms can be neglected and the stability of the prediction is governed by the input-output Jacobian.

2.2 ROBUSTNESS THROUGH INPUT-OUTPUT JACOBIAN MINIMIZATION. From Equation (1), it is straightforward to see that the larger the components of the Jacobian are, the more unstable the model prediction is with respect to input perturbations.
A natural way to reduce this instability then is to decrease the magnitude of each component of the Jacobian matrix, which can be realized by minimizing the square of the Frobenius norm of the input-output Jacobian,²
$$||J(x)||_F^2 \equiv \sum_{i,c} \left[ J_{c;i}(x) \right]^2. \quad (3)$$
For linear models, this reduces exactly to L2 regularization, which increases the classification margins of these models. For nonlinear models, however, Jacobian regularization does not equate to L2 regularization, and we expect these schemes to affect models differently. In particular, predictions made by models trained with the Jacobian regularization do not vary much as inputs get perturbed and hence decision cells enlarge on average. This increase in stability granted by the Jacobian regularization is visualized in Figure 1, which depicts a cross section of the decision cells for the MNIST digit classification problem using a nonlinear neural network (LeCun et al., 1998).

The Jacobian regularizer in Equation (3) can be combined with any loss objective used for training parameterized models. Concretely, consider a supervised learning problem modeled by a neural network and optimized with SGD. At each iteration, a mini-batch $B$ consists of a set of labeled examples, $\{x_\alpha, y_\alpha\}_{\alpha \in B}$, and a supervised loss function, $\mathcal{L}_{\text{super}}$, is optimized possibly together with some other regularizer $\mathcal{R}(\theta)$ – such as the L2 regularizer $\frac{\lambda_{\text{WD}}}{2}\,||\theta||^2$ – over the function parameter space, by minimizing the following bare loss function
$$\mathcal{L}_{\text{bare}}\left(\{x_\alpha, y_\alpha\}_{\alpha \in B}; \theta\right) = \frac{1}{|B|} \sum_{\alpha \in B} \mathcal{L}_{\text{super}}\left[f(x_\alpha); y_\alpha\right] + \mathcal{R}(\theta). \quad (4)$$
To integrate our Jacobian regularizer into training, one instead optimizes the following joint loss
$$\mathcal{L}_{\text{joint}}^{B}(\theta) = \mathcal{L}_{\text{bare}}\left(\{x_\alpha, y_\alpha\}_{\alpha \in B}; \theta\right) + \frac{\lambda_{\text{JR}}}{2} \left[ \frac{1}{|B|} \sum_{\alpha \in B} ||J(x_\alpha)||_F^2 \right], \quad (5)$$
where $\lambda_{\text{JR}}$ is a hyperparameter that determines the relative importance of the Jacobian regularizer. By minimizing this joint loss with sufficient training data and a properly chosen $\lambda_{\text{JR}}$, we expect models to learn both correctly and robustly.

²Minimizing the Frobenius norm will also reduce the L1-norm, since these norms satisfy the inequalities $||J(x)||_F \le \sum_{i,c} |J_{c;i}(x)| \le \sqrt{IC}\,||J(x)||_F$. We prefer to minimize the Frobenius norm over the L1-norm because the ability to express the former as a trace leads to an efficient algorithm [see Equations (6) through (8)].
2.3 EFFICIENT APPROXIMATE ALGORITHM. In the previous section we argued for minimizing the Frobenius norm of the input-output Jacobian to improve robustness during learning. The main question that follows is how to efficiently compute and implement this regularizer in such a way that its optimization can seamlessly be incorporated into any existing learning paradigm. Recently, Sokolić et al. (2017) also explored the idea of regularizing the Jacobian matrix during learning, but only provided an inefficient algorithm requiring an increase in computational cost that scales linearly with the number of output classes, $C$, compared to the bare optimization problem (see explanation below). In practice, such an overhead will be prohibitively expensive for many large-scale learning problems, e.g., ImageNet classification has $C = 1000$ target classes (Deng et al., 2009). (Our scheme, in contrast, can be used for ImageNet: see Appendix H.)

Here, we offer a different solution that makes use of random projections to efficiently approximate the Frobenius norm of the Jacobian.³ This only introduces a constant time overhead and can be made very small in practice. When considering such an approximate algorithm, one naively must trade off efficiency against accuracy for computing the Jacobian, which ultimately trades computation time for robustness. Prior work by Varga et al. (2017) briefly considered an approach based on random projection, but without providing any analysis of the quality of the Jacobian approximation. Here, we describe our algorithm, analyze its theoretical convergence guarantees, and verify empirically that there is only a negligible difference in model solution quality between training with the exact computation of the Jacobian and training with the approximate algorithm, even when using a single random projection (see Figure 2).

Given that optimization is commonly gradient based, it is essential to efficiently compute gradients of the joint loss in Equation (5) and in particular of the squared Frobenius norm of the Jacobian. First, we note that automatic differentiation systems implement a function that computes the derivative of a vector such as $z$ with respect to any variables on which it depends, if the vector is first contracted with another fixed vector. To take advantage of this functionality, we rewrite the squared Frobenius norm as
$$||J(x)||_F^2 = \mathrm{Tr}\left(J J^T\right) = \sum_{\{e\}} e J J^T e^T = \sum_{\{e\}} \left[ \frac{\partial (e \cdot z)}{\partial x} \right]^2, \quad (6)$$
where a constant orthonormal basis, $\{e\}$, of the $C$-dimensional output space was inserted in the second equality, and the last equality follows from definition (2) and from moving the constant vector inside the derivative. For each basis vector $e$, the quantity in the last parenthesis can then be efficiently computed by differentiating the product, $e \cdot z$, with respect to the input, $x$. Recycling that computational graph, the derivative of the squared Frobenius norm with respect to the model parameters, $\theta$, can be computed through backpropagation with any use of automatic differentiation. Sokolić et al. (2017) essentially considers this exact computation, which requires backpropagating gradients through the model $C$ times to iterate over the $C$ orthonormal basis vectors $\{e\}$. Ultimately, this incurs computational overhead that scales linearly with the output dimension $C$.

Instead, we further rewrite Equation (6) in terms of the expectation of an unbiased estimator
$$||J(x)||_F^2 = C \, \mathbb{E}_{\hat{v} \sim S^{C-1}} \left[ ||\hat{v} \cdot J||^2 \right], \quad (7)$$
where the random vector $\hat{v}$ is drawn from the $(C-1)$-dimensional unit sphere $S^{C-1}$. Using this relationship, we can use samples of $n_{\text{proj}}$ random vectors $\hat{v}^{\mu}$ to estimate the square of the norm as
$$||J(x)||_F^2 \approx \frac{1}{n_{\text{proj}}} \sum_{\mu=1}^{n_{\text{proj}}} \left[ \frac{\partial (\hat{v}^{\mu} \cdot z)}{\partial x} \right]^2, \quad (8)$$
which converges to the true value as $O(n_{\text{proj}}^{-1/2})$. The derivation of Equation (7) and the calculation of its convergence make use of random-matrix techniques and are provided in Appendix B. Finally, we expect that the fluctuations of our estimator can be suppressed by cancellations within a mini-batch: with nearly independent and identically distributed samples in a mini-batch of size $|B| \gg 1$, we expect the error in our estimate to be of order $(n_{\text{proj}} |B|)^{-1/2}$.

³In Appendix C, we give an alternative method for computing gradients of the Jacobian regularizer by using an analytically derived formula.
Algorithm 1: Efficient computation of the approximate gradient of the Jacobian regularizer.
Inputs: mini-batch of $|B|$ examples $x_\alpha$, model outputs $z_\alpha$, and number of projections $n_{\text{proj}}$.
Outputs: square of the Frobenius norm of the Jacobian $J_F$ and its gradient $\nabla_\theta J_F$.
  $J_F = 0$
  for $i = 1$ to $n_{\text{proj}}$ do
    $\{v_{\alpha c}\} \sim \mathcal{N}(0, I)$  ▷ $(|B|, C)$-dim tensor with each element sampled from a standard normal.
    $\hat{v}_\alpha = v_\alpha / ||v_\alpha||$  ▷ Uniform sampling from the unit sphere for each $\alpha$.
    $z_{\text{flat}} = \text{Flatten}(\{z_\alpha\})$; $v_{\text{flat}} = \text{Flatten}(\{\hat{v}_\alpha\})$  ▷ Flatten for parallelism.
    $J_v = \partial (z_{\text{flat}} \cdot v_{\text{flat}}) / \partial x_\alpha$
    $J_F \mathrel{+}= C\,||J_v||^2 / (n_{\text{proj}} |B|)$
  end for
  $\nabla_\theta J_F = \partial J_F / \partial \theta$
  return $J_F$, $\nabla_\theta J_F$

In fact, as shown in Figure 2, with a mini-batch size of $|B| = 100$, a single projection yields model performance that is nearly identical to the exact method, with the computational cost reduced by orders of magnitude. The complete algorithm is presented in Algorithm 1. With a straightforward implementation in PyTorch (Paszke et al., 2017) and $n_{\text{proj}} = 1$, we observed the computational cost of training with the Jacobian regularization to be only ≈ 1.3 times that of the standard SGD computation cost, while retaining all the practical benefits of the expensive exact method.⁴
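For completeness, a hedged end-to-end sketch of plugging such a regularizer into a standard SGD loop (cf. the joint loss in Equation (5)) is given below; jacobian_reg refers to the Algorithm-1-style estimator sketched earlier, and the architecture and hyperparameters are placeholders.

import torch
import torch.nn.functional as F

model = torch.nn.Sequential(torch.nn.Linear(784, 256), torch.nn.ReLU(), torch.nn.Linear(256, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=5e-4)  # weight decay plays the role of R(theta)
lambda_jr = 0.01  # placeholder

def train_step(x, y):
    x = x.requires_grad_(True)  # required: the regularizer differentiates z w.r.t. x
    z = model(x)
    loss = F.cross_entropy(z, y) + 0.5 * lambda_jr * jacobian_reg(x, z, n_proj=1)
    opt.zero_grad()
    loss.backward()  # one extra backward pass, roughly the ~1.3x overhead quoted above
    opt.step()
    return loss.item()

# e.g., train_step(torch.randn(100, 784), torch.randint(0, 10, (100,)))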
The main contribution of this paper is an estimator of the Jacobian regularization term for neural networks that reduces the computational cost by orders of magnitude, and the estimator is mathematically proved to be unbiased. In detail, the running time of applying the Jacobian regularizer and the unbiasedness of the proposed estimator are established mathematically. The authors then demonstrate experimentally that the proposed regularization term retains all the practical benefits of the exact method at a much lower computational cost. Quantitative experiments illustrate that the proposed Jacobian regularizer does not adversely affect the model, can be used simultaneously with other regularizers, and effectively improves the model's robustness against random and adversarial input perturbations.
SP:385a392e6d055abd65a737f3c5be58105778ac11
Multichannel Generative Language Models
1 INTRODUCTION. A natural way to consider two parallel sentences in different languages is that each language expresses the same underlying meaning from a different viewpoint. Each language can be thought of as a transformation that maps an underlying concept into a view that we collectively agree to call 'English' or 'French'. Similarly, an image of a cat and the word 'cat' express two views of the same underlying concept; in this case, the image corresponds to a high-bandwidth channel and the word 'cat' to a low-bandwidth channel. This way of conceptualizing parallel viewpoints naturally leads to the formulation of a fully generative model over each instance, where the transformation corresponds to a particular generation of the underlying view. We define each of these views as a channel. As a concrete example, given a parallel corpus of English and French sentences, English and French become two channels and the corresponding generative model becomes p(English, French). One key advantage of this formulation is that a single model can be trained to capture the full expressivity of the underlying concept, allowing us to compute conditionals and marginals along with the joint. In the case of parallel sentences, the conditionals correspond to translations from one channel to another while the marginals correspond to standard monolingual language models.

In this work, we present a general framework for modeling the joint distribution $p(x_1, \ldots, x_k)$ over $k$ channels. Our framework marginalizes over all possible factorizations of the joint distribution, which allows it to perform (1) unconditional generation and (2) conditional generation. We harness recent work on insertion-based methods, which use semi-autoregressive models that are permutation-invariant to the joint factorization. Specifically, we show a proof of concept of multichannel modeling by extending KERMIT (Chan et al., 2019) to model the joint distribution over multiple sequence channels, training it on the Multi30K (Elliott et al., 2016) machine translation task, which consists of four languages: English (EN), French (FR), Czech (CS), and German (DE). One advantage of multilingual KERMIT is that during inference we can generate a translation for a single target language, or generate translations for k − 1 languages in parallel, in time logarithmic in the token length per language. We illustrate qualitative examples of parallel greedy decoding across languages and of sampling from the joint distribution of the 4 languages. The key contributions of this work are: 1. We present MGLM, a multichannel generative modeling framework. MGLM models the joint distribution $p(x_1, \ldots, x_k)$ over $k$ channels. 2. We demonstrate both conditional generation (i.e., machine translation) and unconditional sampling from MGLM. 3. In the case of conditional generation over multiple languages, we show that we are not only competitive in BLEU but also gain significant advantages in inference time and model memory. 4. We analyze the quality-diversity tradeoff of sampling from MGLM and prior work. We highlight that while we focus on languages as a specific instantiation of a channel, our framework can generalize to arbitrary specifications, such as other types of languages or other modalities.

2 BACKGROUND. Traditional autoregressive sequence frameworks (Sutskever et al., 2014; Cho et al.
, 2014) model the conditional probability p(y | x) of an output sequence y conditioned on the input sequence x with a left-to-right factorization. The model decomposes p(y | x) into predicting one output token at a time, conditioning on the previously generated output tokens $y_{<t}$ and the input sequence x:
$$p(y \mid x) = \prod_t p(y_t \mid x, y_{<t}) \quad (1)$$
Recent encoder-decoder models with attention, such as the Transformer (Vaswani et al., 2017), have been successfully applied to various domains, including machine translation. If we were to apply this left-to-right autoregressive approach to multichannel modeling, we would be required to choose a particular factorization order, such as p(w, x, y) = p(w) p(x | w) p(y | x, w). Instead of assuming a fixed left-to-right decomposition, recent autoregressive insertion-based conditional modeling frameworks (Stern et al., 2019; Welleck et al., 2019; Gu et al., 2019) consider arbitrary factorizations of the output sequence by using an insertion operation, which predicts both (1) a content token c ∈ C from the vocabulary, and (2) a location l to insert it, relative to the current partial output $\hat{y}_t$:
$$p(c, l \mid x, \hat{y}_t) = \mathrm{InsertionTransformer}(x, \hat{y}_t) \quad (2)$$
Subsequent work, KERMIT (Chan et al., 2019), simplified the Insertion Transformer by removing the encoder and keeping only the decoder; the trick is to concatenate the original input and output sequences into one single sequence and optimize over all possible factorizations. Consequently, KERMIT is able to model the joint p(x, y), the conditionals p(x | y) and p(y | x), as well as the marginals p(x) and p(y). Unlike with the left-to-right autoregressive approach, exact computation of the log-likelihood (Equation 3) is not possible due to the intractable marginalization over the generation order z, where $S_n$ denotes the set of all possible permutations on n elements. However, we can lower bound the log-likelihood using Jensen's inequality:
$$\log p(x) = \log \sum_{z \in S_n} p(z)\, p(x \mid z) \quad (3)$$
$$\ge \sum_{z \in S_n} p(z) \log p(x \mid z) =: \mathcal{L}(x) \quad (4)$$
The loss term can be simplified by exchanging the summations and carefully decomposing the permutation, leading to:
$$\mathcal{L}(x) = \sum_{z \in S_n} p(z) \log \prod_{i=1}^{n} p\left((c^z_i, l^z_i) \mid x^{z}_{1:i-1}\right) = \sum_{i=1}^{n} \sum_{z_{1:i-1}} p(z_{1:i-1}) \sum_{z_i} p(z_i \mid z_{1:i-1}) \log p\left((c^z_i, l^z_i) \mid x^{z}_{1:i-1}\right)$$
Inference can be autoregressive via greedy decoding:
$$(\hat{c}, \hat{l}) = \operatorname*{argmax}_{c,\, l}\; p(c, l \mid \hat{x}_t), \quad (5)$$
or partially autoregressive via parallel decoding:
$$\hat{c}_l = \operatorname*{argmax}_{c}\; p(c \mid l, \hat{x}_t), \quad (6)$$
which is achieved by inserting at all non-finished slots. Stern et al. (2019) have shown that using a binary tree prior for p(z) leads to ≈ log₂ n iterations for generating n tokens.

3 MULTICHANNEL GENERATIVE LANGUAGE MODELS. In multichannel generative language modeling, our goal is to learn a generative model given a dataset consisting of sets of sequences $\{x^{(i)}_1, \ldots, x^{(i)}_k\}_{i=1}^{M}$ from up to k channels, where $x^{(i)}_j = [x^{(i)}_{j,1}, \ldots, x^{(i)}_{j,n}]$ represents the sequence of tokens from the j-th channel of the i-th example. The resulting MGLM models a joint generative distribution over multiple channels. While there are many possible implementations of multichannel generative language models, we chose to extend the work of Chan et al. (2019) and investigate applying the KERMIT objective to tasks with more than 2 sequences, in order to learn the joint distribution $p(x_1, \ldots, x_k)$ over k channel sequences.
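To make the parallel decoding mode of Equation (6) concrete, here is a toy, model-agnostic sketch (ours): each iteration greedily inserts a token into every open slot of the canvas, so a length-n output can be produced in roughly log₂ n iterations. The score callable is a stub standing in for argmax_c p(c | l, x̂_t).

def parallel_greedy_decode(score, canvas, max_iters=20):
    """score(canvas, slot) -> (token, done): stub for argmax_c p(c | l, canvas).
    Inserts into every open slot per iteration; insertion at slot s opens two child slots."""
    open_slots = set(range(len(canvas) + 1))  # slots sit between/around current tokens
    for _ in range(max_iters):
        inserts = []
        for slot in sorted(open_slots):
            token, done = score(canvas, slot)
            if not done:
                inserts.append((slot, token))
        if not inserts:
            break  # every slot predicted end-of-slot: decoding finished
        open_slots = set()
        for k, (slot, _) in enumerate(sorted(inserts)):
            open_slots.update({slot + k, slot + k + 1})  # child slots, shifted by earlier inserts
        for slot, token in sorted(inserts, reverse=True):
            canvas.insert(slot, token)  # apply right-to-left so remaining indices stay valid
    return canvas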
For example, these channel sequences can denote different languages, such as learning p(EN, FR, CS, DE). [Figure 1. Left: training data layouts for the Bilingual (uni-direction), Multi-target (any-to-rest), and Joint variants, showing the EN, FR, CS, and DE channels concatenated and delimited by [SEP] tokens. Right: decoding iterations for inference into a single target channel versus multiple target channels in parallel.]

We illustrate an example data input consisting of 3 channels in Figure 1 (left). We concatenate the sequences from all channels for each example, separated by a [SEP] token. Even with a shared vocabulary, each channel receives a different token embedding, either via the addition of a channel-specific (learnable) embedding or simply via a separately learned token embedding per channel. After passing through the dense self-attention layers of the Transformer architecture, the contextualized representation at each output time step predicts the possible tokens to be inserted to the left of the current input token. At inference (generation) time, we can generate unconditionally by seeding the canvas with the [SEP] token and predicting the first actual token, or provide as much, or as little, partial/complete sequence in each channel. Figure 1 (right) shows two possible decoding inference modes: a single target language channel (top), or multiple target language channels in parallel (bottom).

4 EXPERIMENTS. We experiment on a multilingual dataset to demonstrate that we can learn MGLM. We perform both qualitative and quantitative experiments, highlighting the model's capabilities ranging from conditional generation (i.e., machine translation) to unconditional sampling from the joint distribution over multiple languages. We experiment on Multi30k (Elliott et al., 2016; 2017; Barrault et al., 2018), a multilingual dataset which consists of 29000 parallel training sentences in English (EN), French (FR), Czech (CS), and German (DE). We use Multi30k because multiple high-quality channels (multilingual translations in this case) are readily available to highlight our framework. We implement MGLM as a base Transformer decoder, without any causal masking, with 6 hidden layers and a 1024-dimensional hidden representation. We concatenate the raw text training examples of all 4 languages and use SentencePiece (Kudo & Richardson, 2018) to learn a universal subword unigram (Kudo, 2018) tokenizer with a shared 32K vocabulary. We follow a training setup similar to BERT (Devlin et al., 2019), using the Adam (Kingma & Ba, 2015) optimizer with a learning rate of 1e-4 and warmup over the first 10% of the total training iterations, which vary between 10k and 50k. We can train 3 different variants of MGLM by altering the sampling ratio of training data seen by the model: 1. Bilingual (e.g., EN → FR). We give the model a fully observed source (e.g., EN) and ask the model to infill the target (e.g., FR). 2. Multi-target (e.g., any 1 → rest). We give the model a fully observed source (e.g., EN) and ask the model to infill the rest of the targets (e.g.
, DE, FR, CS). 3. Joint. We ask the model to infill all of the targets, so that we learn a joint distribution over all the languages p(en, fr, de, cs). Minimal sketches of the data layout and of this variant selection are given below.
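As promised above, here is a hedged sketch of how the [SEP]-delimited multichannel input with channel-specific embeddings could be assembled; the vocabulary size, dimensions, and [SEP] id are made-up constants for illustration.

import torch

V, D, SEP_ID = 32000, 1024, 1  # assumed vocab size, hidden size, [SEP] token id
tok_emb = torch.nn.Embedding(V, D)
chan_emb = torch.nn.Embedding(4, D)  # one learnable embedding per channel (EN/FR/CS/DE)

def build_input(channels):
    """channels: list of 1-D LongTensors of token ids, one per language channel."""
    ids, chan_ids = [], []
    for c, toks in enumerate(channels):
        seq = torch.cat([torch.tensor([SEP_ID]), toks])  # [SEP] delimits each channel
        ids.append(seq)
        chan_ids.append(torch.full_like(seq, c))
    return tok_emb(torch.cat(ids)) + chan_emb(torch.cat(chan_ids))

x = build_input([torch.tensor([5, 6, 7]), torch.tensor([8, 9]), torch.tensor([10, 11, 12])])
print(x.shape)  # torch.Size([11, 1024]): (3+1) + (2+1) + (3+1) positions

And a guess at the minimal logic behind the three training variants, i.e., which channels are observed and which are infilled; the sampling policy is our assumption rather than the authors' exact recipe.

import random

CHANNELS = ["EN", "FR", "CS", "DE"]

def sample_variant(variant):
    """Return (observed_channels, target_channels) for one training example."""
    if variant == "bilingual":      # fixed source -> fixed target, e.g., EN -> FR
        return ["EN"], ["FR"]
    if variant == "multi_target":   # any 1 -> rest
        src = random.choice(CHANNELS)
        return [src], [c for c in CHANNELS if c != src]
    if variant == "joint":          # infill everything: learn p(EN, FR, CS, DE)
        return [], list(CHANNELS)
    raise ValueError(variant)

print(sample_variant("multi_target"))  # e.g., (['CS'], ['EN', 'FR', 'DE'])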
This paper proposes a multichannel generative language model (MGLM), which models the joint distribution p(channel_1, ..., channel_k) over k channels. MGLM can be used for both conditional generation (e.g., machine translation) and unconditional sampling. In the experiments, MGLM uses the Multi30k dataset where multiple high quality channels are available, in the form of multilingual translations.
SP:da1e92e9459d9f305f206e309faa8e9bbf8e6afa
Multichannel Generative Language Models
1 INTRODUCTION. A natural way to consider two parallel sentences in different languages is that each language expresses the same underlying meaning from a different viewpoint. Each language can be thought of as a transformation that maps an underlying concept into a view that we collectively agree to call 'English' or 'French'. Similarly, an image of a cat and the word 'cat' express two views of the same underlying concept; in this case, the image corresponds to a high-bandwidth channel and the word 'cat' to a low-bandwidth channel. This way of conceptualizing parallel viewpoints naturally leads to the formulation of a fully generative model over each instance, where the transformation corresponds to a particular generation of the underlying view. We define each of these views as a channel. As a concrete example, given a parallel corpus of English and French sentences, English and French become two channels and the corresponding generative model becomes p(English, French). One key advantage of this formulation is that a single model can be trained to capture the full expressivity of the underlying concept, allowing us to compute conditionals and marginals along with the joint. In the case of parallel sentences, the conditionals correspond to translations from one channel to another while the marginals correspond to standard monolingual language models.

In this work, we present a general framework for modeling the joint distribution $p(x_1, \ldots, x_k)$ over $k$ channels. Our framework marginalizes over all possible factorizations of the joint distribution, which allows it to perform (1) unconditional generation and (2) conditional generation. We harness recent work on insertion-based methods, which use semi-autoregressive models that are permutation-invariant to the joint factorization. Specifically, we show a proof of concept of multichannel modeling by extending KERMIT (Chan et al., 2019) to model the joint distribution over multiple sequence channels, training it on the Multi30K (Elliott et al., 2016) machine translation task, which consists of four languages: English (EN), French (FR), Czech (CS), and German (DE). One advantage of multilingual KERMIT is that during inference we can generate a translation for a single target language, or generate translations for k − 1 languages in parallel, in time logarithmic in the token length per language. We illustrate qualitative examples of parallel greedy decoding across languages and of sampling from the joint distribution of the 4 languages. The key contributions of this work are: 1. We present MGLM, a multichannel generative modeling framework. MGLM models the joint distribution $p(x_1, \ldots, x_k)$ over $k$ channels. 2. We demonstrate both conditional generation (i.e., machine translation) and unconditional sampling from MGLM. 3. In the case of conditional generation over multiple languages, we show that we are not only competitive in BLEU but also gain significant advantages in inference time and model memory. 4. We analyze the quality-diversity tradeoff of sampling from MGLM and prior work. We highlight that while we focus on languages as a specific instantiation of a channel, our framework can generalize to arbitrary specifications, such as other types of languages or other modalities.

2 BACKGROUND. Traditional autoregressive sequence frameworks (Sutskever et al., 2014; Cho et al.
, 2014) model the conditional probability p(y | x) of an output sequence y conditioned on the input sequence x with a left-to-right factorization. The model decomposes p(y | x) into predicting one output token at a time, conditioning on the previously generated output tokens $y_{<t}$ and the input sequence x:
$$p(y \mid x) = \prod_t p(y_t \mid x, y_{<t}) \quad (1)$$
Recent encoder-decoder models with attention, such as the Transformer (Vaswani et al., 2017), have been successfully applied to various domains, including machine translation. If we were to apply this left-to-right autoregressive approach to multichannel modeling, we would be required to choose a particular factorization order, such as p(w, x, y) = p(w) p(x | w) p(y | x, w). Instead of assuming a fixed left-to-right decomposition, recent autoregressive insertion-based conditional modeling frameworks (Stern et al., 2019; Welleck et al., 2019; Gu et al., 2019) consider arbitrary factorizations of the output sequence by using an insertion operation, which predicts both (1) a content token c ∈ C from the vocabulary, and (2) a location l to insert it, relative to the current partial output $\hat{y}_t$:
$$p(c, l \mid x, \hat{y}_t) = \mathrm{InsertionTransformer}(x, \hat{y}_t) \quad (2)$$
Subsequent work, KERMIT (Chan et al., 2019), simplified the Insertion Transformer by removing the encoder and keeping only the decoder; the trick is to concatenate the original input and output sequences into one single sequence and optimize over all possible factorizations. Consequently, KERMIT is able to model the joint p(x, y), the conditionals p(x | y) and p(y | x), as well as the marginals p(x) and p(y). Unlike with the left-to-right autoregressive approach, exact computation of the log-likelihood (Equation 3) is not possible due to the intractable marginalization over the generation order z, where $S_n$ denotes the set of all possible permutations on n elements. However, we can lower bound the log-likelihood using Jensen's inequality:
$$\log p(x) = \log \sum_{z \in S_n} p(z)\, p(x \mid z) \quad (3)$$
$$\ge \sum_{z \in S_n} p(z) \log p(x \mid z) =: \mathcal{L}(x) \quad (4)$$
The loss term can be simplified by exchanging the summations and carefully decomposing the permutation, leading to:
$$\mathcal{L}(x) = \sum_{z \in S_n} p(z) \log \prod_{i=1}^{n} p\left((c^z_i, l^z_i) \mid x^{z}_{1:i-1}\right) = \sum_{i=1}^{n} \sum_{z_{1:i-1}} p(z_{1:i-1}) \sum_{z_i} p(z_i \mid z_{1:i-1}) \log p\left((c^z_i, l^z_i) \mid x^{z}_{1:i-1}\right)$$
Inference can be autoregressive via greedy decoding:
$$(\hat{c}, \hat{l}) = \operatorname*{argmax}_{c,\, l}\; p(c, l \mid \hat{x}_t), \quad (5)$$
or partially autoregressive via parallel decoding:
$$\hat{c}_l = \operatorname*{argmax}_{c}\; p(c \mid l, \hat{x}_t), \quad (6)$$
which is achieved by inserting at all non-finished slots. Stern et al. (2019) have shown that using a binary tree prior for p(z) leads to ≈ log₂ n iterations for generating n tokens.

3 MULTICHANNEL GENERATIVE LANGUAGE MODELS. In multichannel generative language modeling, our goal is to learn a generative model given a dataset consisting of sets of sequences $\{x^{(i)}_1, \ldots, x^{(i)}_k\}_{i=1}^{M}$ from up to k channels, where $x^{(i)}_j = [x^{(i)}_{j,1}, \ldots, x^{(i)}_{j,n}]$ represents the sequence of tokens from the j-th channel of the i-th example. The resulting MGLM models a joint generative distribution over multiple channels. While there are many possible implementations of multichannel generative language models, we chose to extend the work of Chan et al. (2019) and investigate applying the KERMIT objective to tasks with more than 2 sequences, in order to learn the joint distribution $p(x_1, \ldots, x_k)$ over k channel sequences.
For example, these channel sequences can denote different languages, such as learning p(EN, FR, CS, DE). [Figure 1. Left: training data layouts for the Bilingual (uni-direction), Multi-target (any-to-rest), and Joint variants, showing the EN, FR, CS, and DE channels concatenated and delimited by [SEP] tokens. Right: decoding iterations for inference into a single target channel versus multiple target channels in parallel.]

We illustrate an example data input consisting of 3 channels in Figure 1 (left). We concatenate the sequences from all channels for each example, separated by a [SEP] token. Even with a shared vocabulary, each channel receives a different token embedding, either via the addition of a channel-specific (learnable) embedding or simply via a separately learned token embedding per channel. After passing through the dense self-attention layers of the Transformer architecture, the contextualized representation at each output time step predicts the possible tokens to be inserted to the left of the current input token. At inference (generation) time, we can generate unconditionally by seeding the canvas with the [SEP] token and predicting the first actual token, or provide as much, or as little, partial/complete sequence in each channel. Figure 1 (right) shows two possible decoding inference modes: a single target language channel (top), or multiple target language channels in parallel (bottom).

4 EXPERIMENTS. We experiment on a multilingual dataset to demonstrate that we can learn MGLM. We perform both qualitative and quantitative experiments, highlighting the model's capabilities ranging from conditional generation (i.e., machine translation) to unconditional sampling from the joint distribution over multiple languages. We experiment on Multi30k (Elliott et al., 2016; 2017; Barrault et al., 2018), a multilingual dataset which consists of 29000 parallel training sentences in English (EN), French (FR), Czech (CS), and German (DE). We use Multi30k because multiple high-quality channels (multilingual translations in this case) are readily available to highlight our framework. We implement MGLM as a base Transformer decoder, without any causal masking, with 6 hidden layers and a 1024-dimensional hidden representation. We concatenate the raw text training examples of all 4 languages and use SentencePiece (Kudo & Richardson, 2018) to learn a universal subword unigram (Kudo, 2018) tokenizer with a shared 32K vocabulary. We follow a training setup similar to BERT (Devlin et al., 2019), using the Adam (Kingma & Ba, 2015) optimizer with a learning rate of 1e-4 and warmup over the first 10% of the total training iterations, which vary between 10k and 50k. We can train 3 different variants of MGLM by altering the sampling ratio of training data seen by the model: 1. Bilingual (e.g., EN → FR). We give the model a fully observed source (e.g., EN) and ask the model to infill the target (e.g., FR). 2. Multi-target (e.g., any 1 → rest). We give the model a fully observed source (e.g., EN) and ask the model to infill the rest of the targets (e.g.
, DE, FR, CS). 3. Joint. We ask the model to infill all of the targets, so that we learn a joint distribution over all the languages p(en, fr, de, cs). A small sketch of how the decoding canvas can be seeded for these settings follows below.
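To connect these variants to the two inference modes in Figure 1 (right), below is a small, speculative sketch of canvas seeding: a bare [SEP] canvas for unconditional sampling versus a canvas pre-filled with the observed source channel(s) for conditional generation. The helper and its conventions are ours, not the authors'.

SEP = "[SEP]"

def seed_canvas(observed_channels=None):
    """[SEP]-only canvas for unconditional sampling; otherwise pre-fill observed channels."""
    canvas = [SEP]
    for toks in observed_channels or []:
        canvas.extend(toks)
        canvas.append(SEP)  # each observed channel ends with its own separator
    return canvas

print(seed_canvas())                                      # ['[SEP]']
print(seed_canvas([["a", "man", "rides", "a", "bike"]]))  # seeded EN source, targets left open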
This work is an extension of KERMIT (Chan et al., 2019) to multiple languages, and the proposed model is called a "multichannel generative language model". KERMIT is itself an extension of the Insertion Transformer (Stern et al., 2019), a non-autoregressive model that jointly determines which word to insert and where to insert it. KERMIT drops the separate encoder of the Insertion Transformer and uses a single decoder, concatenating the source and target sentences to train a generative model (with several loss-function variants). In this work, parallel sentences from more than two languages are concatenated together and fed into KERMIT. Each language is associated with a language embedding. This work demonstrates that a joint distribution p(x1, . . . , xk) over k channels/languages can be properly modeled through a single model. The authors carry out experiments on the Multi30k dataset.
SP:da1e92e9459d9f305f206e309faa8e9bbf8e6afa
Convergence of Gradient Methods on Bilinear Zero-Sum Games
1 INTRODUCTION. Min-max optimization has received significant attention recently due to the popularity of generative adversarial networks (GANs) (Goodfellow et al., 2014), adversarial training (Madry et al., 2018), and reinforcement learning (Du et al., 2017; Dai et al., 2018), just to name some examples. Formally, given a bivariate function f(x, y), we aim to find a saddle point (x*, y*) such that
$$f(x^*, y) \le f(x^*, y^*) \le f(x, y^*), \quad \forall x \in \mathbb{R}^n, \; \forall y \in \mathbb{R}^n. \quad (1.1)$$
Since the beginning of game theory, various algorithms have been proposed for finding saddle points (Arrow et al., 1958; Dem'yanov & Pevnyi, 1972; Gol'shtein, 1972; Korpelevich, 1976; Rockafellar, 1976; Bruck, 1977; Lions, 1978; Nemirovski & Yudin, 1983; Freund & Schapire, 1999). Due to its recent resurgence in ML, new algorithms specifically designed for training GANs have been proposed (Daskalakis et al., 2018; Kingma & Ba, 2015; Gidel et al., 2019b; Mescheder et al., 2017). However, due to the inherent non-convexity in deep learning formulations, our current understanding of the convergence behaviour of new and classic gradient algorithms is still quite limited, and existing analyses mostly focus on bilinear games or strongly-convex-strongly-concave games (Tseng, 1995; Daskalakis et al., 2018; Gidel et al., 2019b; Liang & Stokes, 2019; Mokhtari et al., 2019b). Nonzero-sum bilinear games, on the other hand, are known to be PPAD-complete (Chen et al., 2009) (for finding approximate Nash equilibria, see e.g. Deligkas et al. (2017)). In this work, we study bilinear zero-sum games as a first step towards understanding general min-max optimization, although our results also apply to some simple GAN settings (Gidel et al., 2019a).

It is well known that certain gradient algorithms converge linearly on bilinear zero-sum games (Liang & Stokes, 2019; Mokhtari et al., 2019b; Rockafellar, 1976; Korpelevich, 1976). These iterative algorithms usually come in two versions: Jacobi style updates or Gauss–Seidel (GS) style. In Jacobi style, we update the two sets of parameters (i.e., x and y) simultaneously, whereas in GS style we update them alternatingly (i.e., one after the other). Thus, Jacobi style updates are naturally amenable to parallelization while GS style updates have to be sequential, although the latter is usually found to converge faster (and to be more stable). In numerical linear algebra, the celebrated Stein–Rosenberg theorem (Stein & Rosenberg, 1948) formally proves that in solving certain linear systems, GS updates converge strictly faster than their Jacobi counterparts, and often with a larger set of convergent instances. However, this result does not readily apply to bilinear zero-sum games. Our main goal here is to answer the following question about solving bilinear zero-sum games: when exactly does a gradient-type algorithm converge?

Contributions. We summarize our main results from §3 and §4 in Tables 1 and 2 respectively, with supporting experiments given in §5. We use σ₁ and σₙ to denote the largest and smallest singular values of the matrix E (see Equation 2.1), and κ := σ₁/σₙ denotes the condition number. The algorithms are introduced in §2. Note that we generalize gradient-type algorithms but retain the same names.
Table 1 shows that in most of the cases we study, whenever Jacobi updates converge, the corresponding GS updates converge as well (usually with a faster rate), but the converse is not true (§3). This extends the well-known Stein–Rosenberg theorem to bilinear games. Furthermore, Table 2 tells us that by generalizing existing gradient algorithms, we can obtain faster convergence rates.

2 PRELIMINARIES. In the study of GAN training, bilinear games are often regarded as an important simple example for theoretically analyzing and understanding new algorithms and techniques (e.g., Daskalakis et al., 2018; Gidel et al., 2019a;b; Liang & Stokes, 2019). They capture the difficulty of GAN training and can represent some simple GAN formulations (Arjovsky et al., 2017; Daskalakis et al., 2018; Gidel et al., 2019a; Mescheder et al., 2018). Mathematically, bilinear zero-sum games can be formulated as the following min-max problem:
$$\min_{x \in \mathbb{R}^n} \max_{y \in \mathbb{R}^n} \; x^\top E y + b^\top x + c^\top y. \quad (2.1)$$
The set of all saddle points (see the definition in Equation (1.1)) is:
$$\{(x, y) \mid E y + b = 0, \; E^\top x + c = 0\}. \quad (2.2)$$
Throughout, for simplicity we assume E to be invertible; the seemingly more general case with non-invertible E is treated in Appendix G. The linear terms are not essential in our analysis and we take b = c = 0 throughout the paper¹, in which case the only saddle point is (0, 0). For bilinear games, it is well known that simultaneous gradient descent ascent does not converge (Nemirovski & Yudin, 1983), and other gradient-based algorithms tailored for min-max optimization have been proposed (Korpelevich, 1976; Daskalakis et al., 2018; Gidel et al., 2019a; Mescheder et al., 2017). These iterative algorithms all belong to the class of general linear dynamical systems (LDS, a.k.a. matrix iterative processes). Using the state augmentation $z^{(t)} := (x^{(t)}, y^{(t)})$ we define a general k-step LDS as follows:
$$z^{(t)} = \sum_{i=1}^{k} A_i z^{(t-i)} + d, \quad (2.3)$$
where the matrices $A_i$ and the vector d depend on the gradient algorithm (examples can be found in Appendix C.1). Define the characteristic polynomial, with $A_0 = -I$:
$$p(\lambda) := \det\left(\sum_{i=0}^{k} A_i \lambda^{k-i}\right). \quad (2.4)$$
The following well-known result decides when such a k-step LDS converges for any initialization:

Theorem 2.1 (e.g. Gohberg et al. (1982)). The LDS in Equation (2.3) converges for any initialization $(z^{(0)}, \ldots, z^{(k-1)})$ iff the spectral radius $r := \max\{|\lambda| : p(\lambda) = 0\} < 1$, in which case $\{z^{(t)}\}$ converges linearly with (asymptotic) exponent r.

Therefore, understanding the bilinear game dynamics reduces to spectral analysis. The (sufficient and necessary) convergence condition reduces to all roots of p(λ) lying in the (open) unit disk, which can be conveniently analyzed through the celebrated Schur theorem (Schur, 1917):

Theorem 2.2 (Schur (1917)). The roots of a real polynomial $p(\lambda) = a_0 \lambda^n + a_1 \lambda^{n-1} + \cdots + a_n$ are within the (open) unit disk of the complex plane iff $\forall k \in \{1, 2, \ldots, n\}$, $\det\left(P_k P_k^\top - Q_k^\top Q_k\right) > 0$, where $P_k$, $Q_k$ are $k \times k$ matrices defined as $[P_k]_{i,j} = a_{i-j} \mathbb{1}_{i \ge j}$ and $[Q_k]_{i,j} = a_{n-j+i} \mathbb{1}_{i \le j}$.

In the theorem above, we denote by $\mathbb{1}_S$ the indicator function of the event S, i.e., $\mathbb{1}_S = 1$ if S holds and $\mathbb{1}_S = 0$ otherwise. For a nice summary of related stability tests, see Mansour (2011).

¹If b and c are not zero, one can translate x and y to cancel the linear terms; see e.g. Gidel et al. (2019b).
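Theorem 2.2 is easy to sanity-check numerically. The snippet below is our illustration: it implements the determinant test and compares it against a direct root computation with numpy.roots on random monic cubics.

import numpy as np

def schur_det_test(a):
    """Theorem 2.2 for p(lambda) = a[0]*l^n + ... + a[n]; True iff all roots lie in the unit disk."""
    n = len(a) - 1
    for k in range(1, n + 1):
        P = np.zeros((k, k))
        Q = np.zeros((k, k))
        for i in range(k):
            for j in range(k):
                if i >= j:
                    P[i, j] = a[i - j]      # [P_k]_{ij} = a_{i-j}
                if i <= j:
                    Q[i, j] = a[n - j + i]  # [Q_k]_{ij} = a_{n-j+i}
        if np.linalg.det(P @ P.T - Q.T @ Q) <= 0:
            return False
    return True

rng = np.random.default_rng(0)
for _ in range(1000):
    a = [1.0, *rng.uniform(-2, 2, size=3)]  # random monic cubics
    assert schur_det_test(a) == bool(np.all(np.abs(np.roots(a)) < 1))
print("determinant test agrees with the root test")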
We therefore define Schur stable polynomials to be those polynomials whose roots all lie within the (open) unit disk of the complex plane. Schur's theorem has the following corollary (proof included in Appendix B.2 for the sake of completeness):

Corollary 2.1 (e.g. Mansour (2011)). A real quadratic polynomial $\lambda^2 + a\lambda + b$ is Schur stable iff $b < 1$ and $|a| < 1 + b$. A real cubic polynomial $\lambda^3 + a\lambda^2 + b\lambda + c$ is Schur stable iff $|c| < 1$, $|a + c| < 1 + b$, and $b - ac < 1 - c^2$. A real quartic polynomial $\lambda^4 + a\lambda^3 + b\lambda^2 + c\lambda + d$ is Schur stable iff $|c - ad| < 1 - d^2$, $|a + c| < b + d + 1$, and $b < (1 + d) + (c - ad)(a - c)/(d - 1)^2$.

Let us formally define Jacobi and GS updates. Jacobi updates take the form
$$x^{(t)} = T_1(x^{(t-1)}, y^{(t-1)}, \ldots, x^{(t-k)}, y^{(t-k)}), \quad y^{(t)} = T_2(x^{(t-1)}, y^{(t-1)}, \ldots, x^{(t-k)}, y^{(t-k)}),$$
while Gauss–Seidel updates replace $x^{(t-i)}$ with the more recent $x^{(t-i+1)}$ in the operator $T_2$, where $T_1, T_2 : \mathbb{R}^{nk} \times \mathbb{R}^{nk} \to \mathbb{R}^n$ can be any update functions. For LDS updates as in Equation (2.3), we find a nice relation between the characteristic polynomials of Jacobi and GS updates in Theorem 2.3 (proof in Appendix B.1), which turns out to greatly simplify our subsequent analyses:

Theorem 2.3 (Jacobi vs. Gauss–Seidel). Let $p(\lambda, \gamma) = \det\left(\sum_{i=0}^{k} (\gamma L_i + U_i) \lambda^{k-i}\right)$, where $A_i = L_i + U_i$ and $L_i$ is strictly lower block triangular. Then, the characteristic polynomial of Jacobi updates is $p(\lambda, 1)$ while that of Gauss–Seidel updates is $p(\lambda, \lambda)$.

Compared to the Jacobi update, in some sense the Gauss–Seidel update amounts to shifting the strictly lower block triangular matrices $L_i$ one step to the left, as $p(\lambda, \lambda)$ can be rewritten as $\det\left(\sum_{i=0}^{k} (L_{i+1} + U_i) \lambda^{k-i}\right)$, with $L_{k+1} := 0$. This observation will significantly simplify our comparison between Jacobi and Gauss–Seidel updates. Next, we define some popular gradient algorithms for finding saddle points in the min-max problem
$$\min_x \max_y f(x, y). \quad (2.5)$$
We present the algorithms for a general (bivariate) function f, although our main results specialize f to the bilinear case in Equation (2.1). Note that we introduce more "step sizes" for our refined analysis, as we find that the enlarged parameter space often contains choices with faster linear convergence (see §4). We only define the Jacobi updates; the GS counterparts can be easily inferred. We always use α₁ and α₂ to denote step sizes (or learning rates), which are positive.

Gradient descent (GD). The generalized GD update has the following form:
$$x^{(t+1)} = x^{(t)} - \alpha_1 \nabla_x f(x^{(t)}, y^{(t)}), \quad y^{(t+1)} = y^{(t)} + \alpha_2 \nabla_y f(x^{(t)}, y^{(t)}). \quad (2.6)$$
When α₁ = α₂, the convergence of averaged iterates (a.k.a. Cesàro convergence) for convex-concave games is analyzed in Bruck (1977); Nemirovski & Yudin (1978); Nedić & Ozdaglar (2009). Recent progress on interpreting GD with dynamical systems can be seen in, e.g., Mertikopoulos et al. (2018); Bailey et al. (2019); Bailey & Piliouras (2018).

Extra-gradient (EG). We study a generalized version of EG, defined as follows:
$$x^{(t+1/2)} = x^{(t)} - \gamma_1 \nabla_x f(x^{(t)}, y^{(t)}), \quad y^{(t+1/2)} = y^{(t)} + \gamma_2 \nabla_y f(x^{(t)}, y^{(t)}); \quad (2.7)$$
$$x^{(t+1)} = x^{(t)} - \alpha_1 \nabla_x f(x^{(t+1/2)}, y^{(t+1/2)}), \quad y^{(t+1)} = y^{(t)} + \alpha_2 \nabla_y f(x^{(t+1/2)}, y^{(t+1/2)}). \quad (2.8)$$
EG was first proposed in Korpelevich (1976) with the restriction α₁ = α₂ = γ₁ = γ₂, under which linear convergence was proved for bilinear games.
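As a quick numerical illustration of the Jacobi versus Gauss–Seidel contrast, our sketch below builds the 1-step GD update matrices for min_x max_y xᵀEy with α₁ = α₂ = α and compares spectral radii: the Jacobi radius exceeds 1 (divergence) while the Gauss–Seidel radius equals 1 (marginal stability). The diagonal E is our choice to make the singular values explicit.

import numpy as np

n, alpha = 4, 0.2
E = np.diag([1.0, 0.8, 0.6, 0.4])  # known singular values for a clean comparison
I = np.eye(n)

# Jacobi (simultaneous) GD on min_x max_y x^T E y: z_{t+1} = A_jac z_t.
A_jac = np.block([[I, -alpha * E], [alpha * E.T, I]])
# Gauss-Seidel GD: the y-update already uses x_{t+1} = x_t - alpha * E y_t.
A_gs = np.block([[I, -alpha * E], [alpha * E.T, I - alpha**2 * E.T @ E]])

for name, A in (("Jacobi", A_jac), ("Gauss-Seidel", A_gs)):
    print(name, max(abs(np.linalg.eigvals(A))))
# Jacobi: ~1.0198 (> 1, diverges); Gauss-Seidel: ~1.0 (marginally stable) -- alternating
# updates behave no worse than simultaneous ones here, echoing the Stein-Rosenberg theme.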
Convergence of EG on convex-concave games was analyzed in Nemirovski (2004); Monteiro & Svaiter (2010), and Mertikopoulos et al. (2019) provide convergence guarantees for specific non-convex-non-concave problems. For bilinear games, a slightly more general version was proposed in Liang & Stokes (2019) where α₁ = α₂ and γ₁ = γ₂, with linear convergence proved. For later convenience we define β₁ = α₂γ₁ and β₂ = α₁γ₂.

Optimistic gradient descent (OGD). We study a generalized version of OGD, defined as follows:
$$x^{(t+1)} = x^{(t)} - \alpha_1 \nabla_x f(x^{(t)}, y^{(t)}) + \beta_1 \nabla_x f(x^{(t-1)}, y^{(t-1)}), \quad (2.9)$$
$$y^{(t+1)} = y^{(t)} + \alpha_2 \nabla_y f(x^{(t)}, y^{(t)}) - \beta_2 \nabla_y f(x^{(t-1)}, y^{(t-1)}). \quad (2.10)$$
The original version of OGD was given in Popov (1980) with α₁ = α₂ = 2β₁ = 2β₂ and rediscovered in the GAN literature (Daskalakis et al., 2018). Its linear convergence for bilinear games was proved in Liang & Stokes (2019). A slightly more general version with α₁ = α₂ and β₁ = β₂ was analyzed in Peng et al. (2019); Mokhtari et al. (2019b), again with linear convergence proved. The stochastic case was analyzed in Hsieh et al. (2019).

Momentum method. The generalized heavy ball method was analyzed in Gidel et al. (2019b):
$$x^{(t+1)} = x^{(t)} - \alpha_1 \nabla_x f(x^{(t)}, y^{(t)}) + \beta_1 \left(x^{(t)} - x^{(t-1)}\right), \quad (2.11)$$
$$y^{(t+1)} = y^{(t)} + \alpha_2 \nabla_y f(x^{(t)}, y^{(t)}) + \beta_2 \left(y^{(t)} - y^{(t-1)}\right). \quad (2.12)$$
This is a modification of Polyak's heavy ball (HB) method (Polyak, 1964), which also motivated Nesterov's accelerated gradient algorithm (NAG) (Nesterov, 1983). Note that for both the x-update and the y-update, we add a scalar multiple of the successive difference (a proxy of the momentum). For this algorithm our result below improves those obtained in Gidel et al. (2019b), as will be discussed in §3.

EG and OGD as approximations of the proximal point algorithm. It has been observed recently in Mokhtari et al. (2019b) that for convex-concave games, EG (α₁ = α₂ = γ₁ = γ₂ = η) and OGD (α₁/2 = α₂/2 = β₁ = β₂ = η) can be treated as approximations of the proximal point algorithm (Martinet, 1970; Rockafellar, 1976) when η is small. With this result, one can show that EG and OGD converge to saddle points sublinearly for smooth convex-concave games (Mokhtari et al., 2019a). We give a brief introduction to the proximal point algorithm in Appendix A (including a linear convergence result for a slightly generalized version). The above algorithms, when specialized to a bilinear function f (see Equation (2.1)), can be rewritten as 1-step or 2-step LDSs (see Equation (2.3)); see Appendix C.1 for details.
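To see how these schemes differ in practice, the sketch below (ours) runs simultaneous GD and EG with α₁ = α₂ = γ₁ = γ₂ = η (Korpelevich's setting) on a small bilinear game: the GD iterates drift away from the saddle point (0, 0), while the EG iterates contract toward it geometrically. The diagonal E and step size are illustrative choices with η σ₁ < 1.

import numpy as np

n, eta, T = 4, 0.5, 100
E = np.diag([1.0, 0.9, 0.8, 0.7])  # eta * sigma_1 = 0.5 < 1, so EG converges

def gd_step(x, y):                  # simultaneous gradient descent ascent, Equation (2.6)
    return x - eta * E @ y, y + eta * E.T @ x

def eg_step(x, y):                  # extra-gradient, Equations (2.7)-(2.8)
    xh, yh = x - eta * E @ y, y + eta * E.T @ x
    return x - eta * E @ yh, y + eta * E.T @ xh

for step, name in ((gd_step, "GD"), (eg_step, "EG")):
    x, y = np.ones(n), np.ones(n)
    for _ in range(T):
        x, y = step(x, y)
    print(name, np.linalg.norm(np.concatenate([x, y])))
# GD's distance to the saddle grows every iteration; EG's shrinks geometrically.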
This paper studies the convergence of multiple methods (gradient, extra-gradient, optimistic, and momentum) on bilinear min-max games. More precisely, the paper uses spectral conditions to study the difference between simultaneous (Jacobi) and alternating (Gauss–Seidel) updates. The analysis is based on Schur's theorem and gives necessary and sufficient conditions for convergence.
SP:69704bad659d8cc6e35dc5b7f372bf2e39805f4f
Convergence of Gradient Methods on Bilinear Zero-Sum Games
1 INTRODUCTION.

Min-max optimization has received significant attention recently due to the popularity of generative adversarial networks (GANs) (Goodfellow et al., 2014), adversarial training (Madry et al., 2018), and reinforcement learning (Du et al., 2017; Dai et al., 2018), just to name some examples. Formally, given a bivariate function $f(x, y)$, we aim to find a saddle point $(x^*, y^*)$ such that $f(x^*, y) \le f(x^*, y^*) \le f(x, y^*)$, $\forall x \in \mathbb{R}^n$, $\forall y \in \mathbb{R}^n$. (1.1) Since the beginning of game theory, various algorithms have been proposed for finding saddle points (Arrow et al., 1958; Dem'yanov & Pevnyi, 1972; Gol'shtein, 1972; Korpelevich, 1976; Rockafellar, 1976; Bruck, 1977; Lions, 1978; Nemirovski & Yudin, 1983; Freund & Schapire, 1999). Due to its recent resurgence in ML, new algorithms specifically designed for training GANs were proposed (Daskalakis et al., 2018; Kingma & Ba, 2015; Gidel et al., 2019b; Mescheder et al., 2017). However, due to the inherent non-convexity in deep learning formulations, our current understanding of the convergence behaviour of new and classic gradient algorithms is still quite limited, and existing analyses mostly focus on bilinear games or strongly-convex-strongly-concave games (Tseng, 1995; Daskalakis et al., 2018; Gidel et al., 2019b; Liang & Stokes, 2019; Mokhtari et al., 2019b). Nonzero-sum bilinear games, on the other hand, are known to be PPAD-complete (Chen et al., 2009) (for finding approximate Nash equilibria, see e.g. Deligkas et al. (2017)). In this work, we study bilinear zero-sum games as a first step towards understanding general min-max optimization, although our results apply to some simple GAN settings (Gidel et al., 2019a). It is well known that certain gradient algorithms converge linearly on bilinear zero-sum games (Liang & Stokes, 2019; Mokhtari et al., 2019b; Rockafellar, 1976; Korpelevich, 1976). These iterative algorithms usually come in two versions: Jacobi style updates or Gauss–Seidel (GS) style. In Jacobi style, we update the two sets of parameters (i.e., $x$ and $y$) simultaneously, whereas in GS style we update them alternatingly (i.e., one after the other). Thus, Jacobi style updates are naturally amenable to parallelization, while GS style updates have to be sequential, although the latter is usually found to converge faster (and to be more stable). In numerical linear algebra, the celebrated Stein–Rosenberg theorem (Stein & Rosenberg, 1948) formally proves that in solving certain linear systems, GS updates converge strictly faster than their Jacobi counterparts, and often with a larger set of convergent instances. However, this result does not readily apply to bilinear zero-sum games. Our main goal here is to answer the following questions about solving bilinear zero-sum games:

• When exactly does a gradient-type algorithm converge?

Contributions. We summarize our main results from §3 and §4 in Tables 1 and 2 respectively, with supporting experiments given in §5. We use $\sigma_1$ and $\sigma_n$ to denote the largest and smallest singular values of the matrix $E$ (see eq. (2.1)), and $\kappa := \sigma_1/\sigma_n$ denotes the condition number. The algorithms will be introduced in §2. Note that we generalize gradient-type algorithms but retain the same names.
Table 1 shows that in most of the cases we study, whenever Jacobi updates converge, the corresponding GS updates converge as well (usually with a faster rate), but the converse is not true (§3). This extends the well-known Stein–Rosenberg theorem to bilinear games. Furthermore, Table 2 tells us that by generalizing existing gradient algorithms, we can obtain faster convergence rates.

2 PRELIMINARIES.

In the study of GAN training, bilinear games are often regarded as an important simple example for theoretically analyzing and understanding new algorithms and techniques (e.g. Daskalakis et al., 2018; Gidel et al., 2019a;b; Liang & Stokes, 2019). They capture the difficulty in GAN training and can represent some simple GAN formulations (Arjovsky et al., 2017; Daskalakis et al., 2018; Gidel et al., 2019a; Mescheder et al., 2018). Mathematically, bilinear zero-sum games can be formulated as the following min-max problem: $\min_{x \in \mathbb{R}^n} \max_{y \in \mathbb{R}^n} \; x^\top E y + b^\top x + c^\top y$. (2.1) The set of all saddle points (see the definition in eq. (1.1)) is $\{(x, y) \mid Ey + b = 0,\; E^\top x + c = 0\}$. (2.2) Throughout, for simplicity we assume $E$ to be invertible, whereas the seemingly more general case with non-invertible $E$ is treated in Appendix G. The linear terms are not essential in our analysis and we take $b = c = 0$ throughout the paper (if they are not zero, one can translate $x$ and $y$ to cancel the linear terms; see e.g. Gidel et al. (2019b)). In this case, the only saddle point is $(0, 0)$. For bilinear games, it is well known that simultaneous gradient descent ascent does not converge (Nemirovski & Yudin, 1983), and other gradient-based algorithms tailored for min-max optimization have been proposed (Korpelevich, 1976; Daskalakis et al., 2018; Gidel et al., 2019a; Mescheder et al., 2017). These iterative algorithms all belong to the class of general linear dynamical systems (LDS, a.k.a. matrix iterative processes). Using the state augmentation $z^{(t)} := (x^{(t)}, y^{(t)})$ we define a general $k$-step LDS as follows: $z^{(t)} = \sum_{i=1}^{k} A_i z^{(t-i)} + d$, (2.3) where the matrices $A_i$ and the vector $d$ depend on the gradient algorithm (examples can be found in Appendix C.1). Define the characteristic polynomial, with $A_0 = -I$: $p(\lambda) := \det\big(\sum_{i=0}^{k} A_i \lambda^{k-i}\big)$. (2.4) The following well-known result decides when such a $k$-step LDS converges for any initialization:

Theorem 2.1 (e.g. Gohberg et al. (1982)). The LDS in eq. (2.3) converges for any initialization $(z^{(0)}, \dots, z^{(k-1)})$ iff the spectral radius $r := \max\{|\lambda| : p(\lambda) = 0\} < 1$, in which case $\{z^{(t)}\}$ converges linearly with an (asymptotic) exponent $r$.

Therefore, understanding the bilinear game dynamics reduces to spectral analysis. The (sufficient and necessary) convergence condition reduces to all roots of $p(\lambda)$ lying in the (open) unit disk, which can be conveniently analyzed through the celebrated Schur's theorem (Schur, 1917):

Theorem 2.2 (Schur (1917)). The roots of a real polynomial $p(\lambda) = a_0\lambda^n + a_1\lambda^{n-1} + \cdots + a_n$ are within the (open) unit disk of the complex plane iff $\forall k \in \{1, 2, \dots, n\}$, $\det(P_k P_k^\top - Q_k^\top Q_k) > 0$, where $P_k, Q_k$ are $k \times k$ matrices defined as $[P_k]_{i,j} = a_{i-j}\mathbf{1}_{i \ge j}$, $[Q_k]_{i,j} = a_{n-i+j}\mathbf{1}_{i \le j}$.

In the theorem above, $\mathbf{1}_S$ denotes the indicator function of the event $S$, i.e., $\mathbf{1}_S = 1$ if $S$ holds and $\mathbf{1}_S = 0$ otherwise. For a nice summary of related stability tests, see Mansour (2011).
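As a quick numerical companion to Theorems 2.1 and 2.2 (our own sketch, with illustrative coefficients): a polynomial is Schur stable iff all of its roots, computed here with numpy, have modulus strictly below one, and for low degrees this can be cross-checked against the closed-form conditions of Corollary 2.1.

```python
import numpy as np

def schur_stable(coeffs):
    """coeffs = [a0, a1, ..., an] of p(l) = a0*l^n + ... + an.
    Returns (stable?, spectral radius of the root set)."""
    r = np.max(np.abs(np.roots(coeffs)))
    return r < 1.0, r

# Quadratic l^2 + a*l + b: Corollary 2.1 says Schur stable iff b < 1 and |a| < 1 + b.
a, b = 0.5, 0.3
num_stable, radius = schur_stable([1.0, a, b])
closed_form = (b < 1) and (abs(a) < 1 + b)
print(num_stable, closed_form, radius)   # True True ~0.548 -- the two tests agree
```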
The paper presents exact conditions for the convergence of several gradient-based methods for solving bilinear games. In particular, the methods under study are Gradient Descent (GD), Extra-gradient (EG), Optimistic Gradient Descent (OGD), and momentum methods. For these methods, the authors provide convergence rates (with optimal parameter setup) for both alternating (Gauss–Seidel) and simultaneous (Jacobi) updates.
SP:69704bad659d8cc6e35dc5b7f372bf2e39805f4f
Style-based Encoder Pre-training for Multi-modal Image Synthesis
1 INTRODUCTION.

Image-to-Image (I2I) translation is the task of transforming images from one domain to another (e.g., semantic maps → scenes, sketches → photo-realistic images, etc.). Many problems in computer vision and graphics can be cast as I2I translation, such as photo-realistic image synthesis (Chen & Koltun (2017); Isola et al. (2017); Wang et al. (2018a)), super-resolution (Ledig et al. (2017)), colorization (Zhang et al. (2016; 2017a)), and inpainting (Pathak et al. (2016)). Therefore, I2I translation has recently received significant attention in the literature. One main challenge in I2I translation is the multi-modal nature of many such tasks: the relation between an input domain $A$ and an output domain $B$ is oftentimes one-to-many, where a single input image $I^A_i \in A$ can be mapped to different output images from domain $B$. For example, a sketch of a shoe or a handbag can be mapped to corresponding objects with different colors or styles, and a semantic map of a scene can be mapped to many scenes with different appearance, lighting, and/or weather conditions. Since I2I translation networks typically learn one-to-one mappings due to their deterministic nature, an extra input is required to specify the output mode to which an input image will be translated. Simply injecting extra random noise as input proved to be ineffective, as shown in (Isola et al. (2017); Zhu et al. (2017b)), where the generator network just learns to ignore the extra noise and collapses to a single or few modes (which is one form of the mode collapse problem). To overcome this problem, Zhu et al. (2017b) proposed BicycleGAN, which learns to encode the distribution of different possible outputs into a latent vector $z$, and then learns a deterministic mapping $G : (A, z) \to B$. So, depending on the latent vector $z$, a single input $I^A_i \in A$ can be mapped to multiple outputs in $B$. While BicycleGAN requires paired training data, several works (Lee et al. (2018); Huang et al. (2018)) extended it to the unsupervised case, where images in domains $A$ and $B$ are not in correspondence ('unpaired'). One main component of unpaired I2I is a cross-cycle consistency constraint, where the network generates intermediate outputs by swapping the styles of a pair of images, then swaps the styles of the intermediate outputs back to reconstruct the original images. This enforces that the latent vector $z$ preserves the encoded style information when translated from an image $i$ to another image $j$ and back to image $i$ again. This constraint can also be applied to paired training data, where it encourages style/attribute transfer between images. However, training BicycleGAN (Zhu et al. (2017b)) or its unsupervised counterparts (Huang et al. (2018); Lee et al. (2018)) is not trivial. For example, BicycleGAN combines the objectives of both conditional Variational Auto-Encoders (cVAEs) (Sohn et al. (2015)) and a conditional version of Latent Regressor GANs (cLR-GANs) (Donahue et al. (2016); Dumoulin et al. (2016)) to train the network. The training objective of (Huang et al. (2018); Lee et al. (2018)) is even more involved in order to handle the unsupervised setup. In this work, we aim to simplify the training of general-purpose multi-modal I2I translation networks, while also improving the diversity and expressiveness of different styles in the output domain.
Our approach is inspired by the work of Meshry et al. (2019), which utilizes a staged training strategy to re-render scenes under different lighting, time of day, and weather conditions. We propose a pretraining approach for style encoders in multi-modal I2I translation networks, which makes the training simpler and faster by requiring fewer losses/constraints. Our approach is also inspired by the standard training paradigm in visual recognition of first pretraining on a proxy task, either on large supervised datasets (e.g., ImageNet) (Krizhevsky et al. (2012); Sun et al. (2017); Mahajan et al. (2018)) or on unsupervised tasks (e.g., Doersch et al. (2015); Noroozi & Favaro (2016)), and then fine-tuning (transfer learning) on the desired task. Similarly, we propose to pretrain the encoder using a proxy task that encourages capturing style in a latent space. Our goal is to highlight the importance of pretraining for I2I networks and demonstrate that a simple approach can be very effective for multi-modal image synthesis. In particular, we make the following contributions:

• We explore style pretraining and its generalization for the task of multi-modal I2I translation, which simplifies and speeds up training compared to competing approaches.
• We provide a study of the importance of different losses and regularization terms for multi-modal I2I translation networks.
• We show that the pretrained latent embedding is not dependent on the target domain and generalizes well to other domains (transfer learning).
• We achieve state-of-the-art results on several benchmarks in terms of style capture and transfer, and diversity of results.

2 RELATED WORK.

Deep generative models. There has been incredible progress in the field of image synthesis using deep neural networks. In the unconditional setting, a decoder network learns to map random values drawn from a prior distribution (typically Gaussian) to output images. Variational Auto-Encoders (VAEs) (Kingma & Welling (2014)) assume a bijective mapping between output images and some latent distribution and learn to map the latent distribution to a unit Gaussian using the reparameterization trick. Alternatively, Generative Adversarial Networks (GANs) (Goodfellow et al. (2014)) directly map random values sampled from a unit Gaussian to images, while using a discriminator network to enforce that the distribution of generated images resembles that of real images. Recent works proposed improvements to stabilize the training (Gulrajani et al. (2017); Karnewar & Iyengar (2019); Mao et al. (2017); Radford et al. (2016)) and improve the quality and diversity of the output (Karras et al. (2018; 2019)). Other works combine both VAEs and GANs into a hybrid VAE-GAN model (Larsen et al. (2016); Rosca et al. (2017)).

Conditional image synthesis. Instead of generating images from input noise, the generator can be augmented with side information in the form of extra conditional inputs. For example, Sohn et al. (2015) extended VAEs to the conditional setup (cVAEs). Also, GANs can be conditioned on different information, like class labels (Mirza & Osindero (2014); Odena et al. (2017); Van den Oord et al. (2016)), language descriptions (Mansimov et al. (2016); Reed et al. (2016)), or an image from another domain (Chen & Koltun (2017); Isola et al. (2017)). The latter is called Image-to-Image translation.
Image-to-Image (I2I) translation. I2I translation is the task of transforming an image from one domain, such as a sketch, into another domain, such as photo-realistic images. While there are regression-based approaches to this problem (Chen & Koltun (2017); Hoshen & Wolf (2018)), significant successes in this field are based on GANs and the influential work of pix2pix (Isola et al. (2017)). Following the success of pix2pix (Isola et al. (2017)), I2I translation has since been utilized in a large number of tasks, like inpainting (Pathak et al. (2016)), colorization (Zhang et al. (2016; 2017a)), super-resolution (Ledig et al. (2017)), rendering (Martin-Brualla et al. (2018); Meshry et al. (2019); Thies et al. (2019)), and many more (Dong et al. (2017); Wang & Gupta (2016); Zhang et al. (2017b)). There have also been works extending this task to the unsupervised setting (Hoshen & Wolf (2018); Kim et al. (2017); Liu et al. (2017); Ma et al. (2019); Royer et al. (2017); Zhu et al. (2017a)), to multiple domains (Choi et al. (2018)), and to videos (Chan et al. (2018); Wang et al. (2018b)).

Multi-modal I2I translation. Image translation networks are typically deterministic function approximators that learn a one-to-one mapping between inputs and outputs. To extend I2I translation to the case of diverse multi-modal outputs, Zhu et al. (2017b) proposed the BicycleGAN framework, which learns a latent distribution that encodes the variability in the output domain and conditions the generator on this extra latent vector for multi-modal image synthesis. Wang et al. (2018a;b) learn instance-wise latent features for different objects in a target image, which are clustered after training to find a fixed number of modes for different semantic classes. At test time, they sample one of the feature clusters for each object to achieve multi-modal synthesis. Other works extended the multi-modal I2I framework to the unpaired setting, where images from the input and output domains are not in correspondence (Almahairi et al. (2018); Huang et al. (2018); Lee et al. (2018)), by augmenting BicycleGAN with different forms of a cross-cycle consistency constraint between two unpaired images. In our work, we focus on the supervised setting of multi-modal I2I translation. We propose a simple, yet effective, pretraining strategy to learn a latent distribution that encodes variability in the output domain. The learned distribution can be easily adapted to new unseen datasets with simple fine-tuning, instead of training from random initialization.

3 APPROACH.

Current multi-modal image translation networks require an extra input $z$ that allows modelling the one-to-many relation between an input domain $A$ and an output domain $B$ as a one-to-one relation from a pair of inputs $(A, z) \to B$. In previous approaches, there has been a trade-off between simplicity and effectiveness in providing the input $z$. On one hand, providing random noise as the extra input $z$ maintains a simple training objective (same as in pix2pix (Isola et al. (2017))). However, Isola et al. (2017); Zhu et al. (2017b) showed that the generator has little incentive to utilize the input vector $z$ since it only encodes random information, and therefore the generator ends up ignoring $z$ and collapsing to one or a few modes.
On the other hand, BicycleGAN (Zhu et al. (2017b)) combines the objectives of both conditional Variational Auto-Encoder GANs (cVAE-GAN) and conditional Latent Regressor GANs (cLR-GAN) to learn a latent embedding $z$ simultaneously with the generator $G$. Their training enforces two cycle consistencies: $B \to z \to \hat{B}$ and $z \to \tilde{B} \to \hat{z}$. This proved to be very effective, but the training objective is more involved, which makes the training slower. Also, since the latent embedding is trained simultaneously with the generator, hyper-parameter tuning becomes more critical and sensitive. We aim to combine the best of both worlds: an effective training of a latent embedding that models the distribution of possible outputs, while retaining a simple training objective. This would allow for faster and more efficient training, as well as less sensitivity to hyper-parameters. We observe that the variability in many target domains can be represented by the style diversity of images in the target domain $B$, where style is defined in terms of the Gram matrices used in the Neural Style Transfer literature (Gatys et al. (2016)). Then, we learn an embedding by separately training an encoder network $E$ on an auxiliary task that optimizes $z = E(I^B)$ to capture the style of an image $I^B$. Finally, since we have now learned a deterministic mapping between $z$ and the style of the target output image $I^B$, training the generator $G$ becomes simpler, as $G$ is just required to discover the correlation between output images and their corresponding style embedding $z$. To incorporate this into BicycleGAN (Zhu et al. (2017b)), we replace the simultaneous training of the encoder $E$ and the generator $G$ with a staged training as follows:

Stage 1: Pretrain $E$ on a proxy task that optimizes an embedding of images in the output domain $B$ into a low-dimensional style latent space, such that images with similar styles lie close together in that space (i.e., clustered).
Stage 2: Train the generator network $G$ while fixing the encoder $E$, so that $G$ learns to associate the style of output images with their deterministic style embedding $z = E(I^B)$.
Stage 3: Fine-tune both the $E$ and $G$ networks together, allowing the style embedding to be further adapted to best suit the image synthesis task for the target domain.

The intuition for why such staged training would be effective for multi-modal I2I translation is that the encoder is pretrained to model different modes of the output distribution as clusters of images with similar styles (refer to the supplementary material, figures 6, 7, for a visualization of pretrained latent embeddings). During stage 2, the latent space is kept fixed, and the input latent to the generator can be used to clearly distinguish the style cluster to which the output belongs, which makes the multi-modal synthesis task easier for the generator. Finally, stage 3 fine-tunes the learned embedding to better serve the synthesis task at hand. Next, we explain how to pre-train the style encoder network $E$ in Section 3.1, and how to train the generator $G$ using the pre-learned embeddings (Section 3.2). Finally, we demonstrate the generalization of pre-training the style encoder $E$ in Section 3.3.
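To make the style definition above concrete, here is a hedged PyTorch sketch of the Gram-matrix style descriptor from the Neural Style Transfer literature (Gatys et al. (2016)); the feature layer and the normalization are our assumptions, not the paper's exact specification.

```python
import torch

def gram_matrix(feat):
    """feat: (B, C, H, W) activations from some deep (e.g., VGG) layer."""
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    g = torch.bmm(f, f.transpose(1, 2))   # (B, C, C) channel co-activation statistics
    return g / (c * h * w)                # normalize by feature size (one common choice)

feat = torch.randn(2, 64, 32, 32)         # stand-in for real VGG conv features
print(gram_matrix(feat).shape)            # torch.Size([2, 64, 64])
```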
In this paper, the authors tackle the problem of multi-modal image-to-image translation by pre-training a style-based encoder. The style-based encoder is trained with a triplet loss that encourages similarity between images with similar styles and dissimilarity between images with different styles. The output of the encoder is a style embedding that helps differentiate different modes of image synthesis. When training the generator for image synthesis, the input combines an image from the source domain with a style embedding, and the loss is essentially the sum of an image-conditional GAN loss and a perceptual loss. Additionally, the authors propose a mapping function to sample styles from a unit Gaussian distribution.
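The review above mentions a triplet loss for the style encoder; the following is a minimal sketch of one such pretraining step, where the margin, the tiny stand-in encoder, and the way positives/negatives are sampled are all illustrative assumptions on our part.

```python
import torch
import torch.nn.functional as F

def triplet_style_loss(E, anchor, positive, negative, margin=0.2):
    """E maps images to style embeddings z; 'positive' shares the anchor's style."""
    za, zp, zn = E(anchor), E(positive), E(negative)
    d_pos = (za - zp).pow(2).sum(dim=1)   # pull same-style pairs together
    d_neg = (za - zn).pow(2).sum(dim=1)   # push different-style pairs apart
    return F.relu(d_pos - d_neg + margin).mean()

# toy encoder and batch, just to show the call; a real E would be convolutional
E = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 8))
imgs = [torch.randn(4, 3, 64, 64) for _ in range(3)]  # anchor, positive, negative
print(triplet_style_loss(E, *imgs).item())
```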
SP:0a523e5c8790b62fef099d7c5bec61bb18a2703c
Style-based Encoder Pre-training for Multi-modal Image Synthesis
The authors propose a non-end-to-end approach to the problem of multi-modal I2I. First, a metric learning problem is solved to embed images into a latent space, taking into account the pairwise style discrepancy (style is defined, e.g., based on VGG Gram matrices). As the notion of style is universal across similar datasets, this step is further shown to be generalizable. Second, the generator is trained on a supervised image translation task: the original image and the style extracted from the target image are fed to the generator, and the output is a translated image. Third, the style encoder and generator are fine-tuned simultaneously.
SP:0a523e5c8790b62fef099d7c5bec61bb18a2703c
Prediction, Consistency, Curvature: Representation Learning for Locally-Linear Control
1 INTRODUCTION.

Decomposing the problem of decision-making in an unknown environment into estimating dynamics followed by planning provides a powerful framework for building intelligent agents. This decomposition confers several notable benefits. First, it enables the handling of sparse-reward environments by leveraging the dense signal of dynamics prediction. Second, once a dynamics model is learned, it can be shared across multiple tasks within the same environment. While the merits of this decomposition have been demonstrated in low-dimensional environments (Deisenroth & Rasmussen, 2011; Gal et al., 2016), scaling these methods to high-dimensional environments remains an open challenge. Recent advancements in generative models have enabled successful dynamics estimation of high-dimensional decision processes (Watter et al., 2015; Ha & Schmidhuber, 2018; Kurutach et al., 2018). The learned dynamics can then be used in conjunction with a plethora of decision-making techniques, ranging from optimal control to reinforcement learning (RL) (Watter et al., 2015; Banijamali et al., 2018; Finn et al., 2016; Chua et al., 2018; Ha & Schmidhuber, 2018; Kaiser et al., 2019; Hafner et al., 2018; Zhang et al., 2019). One particularly promising line of work in this area focuses on learning the dynamics and conducting control in a low-dimensional latent embedding of the observation space, where the embedding itself is learned through this process (Watter et al., 2015; Banijamali et al., 2018; Hafner et al., 2018; Zhang et al., 2019). We refer to this approach as learning controllable embedding (LCE). There have been two main approaches to this problem: 1) start by defining a cost function in the high-dimensional observation space and learn the embedding space, its dynamics, and the reward function by interacting with the environment in an RL fashion (Hafner et al., 2018; Zhang et al., 2019), and 2) first learn the embedding space and its dynamics, and then define a cost function in this low-dimensional space and conduct the control (Watter et al., 2015; Banijamali et al., 2018). The latter can be later combined with RL for extra fine-tuning of the model and control. In this paper, we take the second approach and focus in particular on the important question: what desirable traits should the latent embedding exhibit for it to be amenable to a specific class of control/learning algorithms, namely the widely used class of locally-linear control (LLC) algorithms? We argue from an optimal control standpoint that our latent space should exhibit three properties. The first is prediction: given the ability to encode to and decode from the latent space, we expect the process of encoding, transitioning via the latent dynamics, and then decoding to adhere to the true observation dynamics. The second is consistency: given the ability to encode an observation trajectory sampled from the true environment, we expect the latent dynamics to be consistent with the encoded trajectory. Finally, curvature: in order to learn a latent space that is specifically amenable to LLC algorithms, we expect the (learned) latent dynamics to exhibit low curvature in order to minimize the approximation error of the first-order Taylor expansion employed by LLC algorithms.
Our contributions are thus as follows: (1) We propose the Prediction, Consistency, and Curvature (PCC) framework for learning a latent space that is amenable to LLC algorithms, and show that the elements of PCC arise systematically from bounding the suboptimality of the solution of the LLC algorithm in the latent space. (2) We design a latent variable model that adheres to the PCC framework and derive a tractable variational bound for training the model. (3) To the best of our knowledge, our proposed curvature loss for the transition dynamics (in the latent space) is novel. We also propose a direct amortization of the Jacobian calculation in the curvature loss to help train with the curvature loss more efficiently. (4) Through extensive experimental comparison, we show that the PCC model consistently outperforms E2C (Watter et al., 2015) and RCE (Banijamali et al., 2018) on a number of control-from-images tasks, and verify via ablation the importance of regularizing the model to have consistency and low curvature.

2 PROBLEM FORMULATION.

We are interested in controlling non-linear dynamical systems of the form $s_{t+1} = f_S(s_t, u_t) + w$ over the horizon $T$. In this definition, $s_t \in \mathcal{S} \subseteq \mathbb{R}^{n_s}$ and $u_t \in \mathcal{U} \subseteq \mathbb{R}^{n_u}$ are the state and action of the system at time step $t \in \{0, \dots, T-1\}$, $w$ is the Gaussian system noise, and $f_S$ is a smooth non-linear system dynamics. We are particularly interested in the scenario in which we only have access to the high-dimensional observation $x_t \in \mathcal{X} \subseteq \mathbb{R}^{n_x}$ of each state $s_t$ ($n_x \gg n_s$). This scenario has applications in many real-world problems, such as visual servoing (Espiau et al., 1992), in which we only observe high-dimensional images of the environment and not its underlying state. We further assume that the high-dimensional observations $x$ have been selected such that for any arbitrary control sequence $U = \{u_t\}_{t=0}^{T-1}$, the observation sequence $\{x_t\}_{t=0}^{T}$ is generated by a stationary Markov process, i.e., $x_{t+1} \sim P(\cdot \mid x_t, u_t)$, $\forall t \in \{0, \dots, T-1\}$ (one way to ensure this Markovian assumption is to buffer observations for a number of time steps (Mnih et al., 2013)). A common approach to controlling the above dynamical system is to solve the following stochastic optimal control (SOC) problem (Shapiro et al., 2009) that minimizes the expected cumulative cost: $\min_U L(U, P, c, x_0) := \mathbb{E}\big[c_T(x_T) + \sum_{t=0}^{T-1} c_t(x_t, u_t) \mid P, x_0\big]$, (SOC1) where $c_t : \mathcal{X} \times \mathcal{U} \to \mathbb{R}_{\ge 0}$ is the immediate cost function at time $t$, $c_T \in \mathbb{R}_{\ge 0}$ is the terminal cost, and $x_0$ is the observation at the initial state $s_0$ (see Appendix B.3 for the extension to the closed-loop MDP problem). Note that all immediate costs are defined in the observation space $\mathcal{X}$, and are bounded by $c_{\max} > 0$ and Lipschitz with constant $c_{\mathrm{lip}} > 0$. For example, in visual servoing, (SOC1) can be formulated as a goal-tracking problem (Ebert et al., 2018), where we control the robot to reach the goal observation $x_{\mathrm{goal}}$, and the objective is to compute a sequence of optimal open-loop actions $U$ that minimizes the cumulative tracking error $\mathbb{E}[\sum_t \|x_t - x_{\mathrm{goal}}\|^2 \mid P, x_0]$. Since the observations $x$ are high-dimensional and the dynamics in the observation space $P(\cdot \mid x_t, u_t)$ is unknown, solving (SOC1) is often intractable. To address this issue, a class of algorithms has recently been developed that is based on learning a low-dimensional latent (embedding) space $\mathcal{Z} \subseteq \mathbb{R}^{n_z}$ ($n_z \ll n_x$) and latent state dynamics, and performing optimal control there. This class, which we refer to as learning controllable embedding (LCE) throughout the paper, includes recently developed algorithms such as E2C (Watter et al., 2015), RCE (Banijamali et al., 2018), and SOLAR (Zhang et al., 2019).
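To ground the (SOC1) objective, here is a hedged numpy sketch that estimates $L(U, P, c, x_0)$ by Monte Carlo rollouts on a toy system; the stand-in dynamics, noise level, and quadratic tracking cost are our assumptions, and for simplicity we identify observations with states.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(s, u):
    # a stand-in smooth dynamics f_S (not from the paper)
    return 0.9 * s + 0.1 * np.tanh(u)

def soc_cost(U, s0, x_goal, n_rollouts=64, noise=0.01):
    """Monte Carlo estimate of E[sum_t c_t(x_t, u_t) | P, x_0] for open-loop actions U."""
    total = 0.0
    for _ in range(n_rollouts):
        s, cost = s0.copy(), 0.0
        for u in U:                              # immediate cost c_t = tracking error
            s = f(s, u) + noise * rng.standard_normal(s.shape)
            cost += np.sum((s - x_goal) ** 2)
        total += cost
    return total / n_rollouts

U = [np.zeros(2) for _ in range(10)]             # a candidate action sequence
print(soc_cost(U, s0=np.ones(2), x_goal=np.zeros(2)))
```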
The main idea behind the LCE approach is to learn a triplet: (i) an encoder $E : \mathcal{X} \to \mathbb{P}(\mathcal{Z})$; (ii) a dynamics in the latent space $F : \mathcal{Z} \times \mathcal{U} \to \mathbb{P}(\mathcal{Z})$; and (iii) a decoder $D : \mathcal{Z} \to \mathbb{P}(\mathcal{X})$. These in turn can be thought of as defining a (stochastic) mapping $\hat{P} : \mathcal{X} \times \mathcal{U} \to \mathbb{P}(\mathcal{X})$ of the form $\hat{P} = D \circ F \circ E$. We then wish to solve the SOC in the latent space $\mathcal{Z}$: $\min_{U, \hat{P}} \; \mathbb{E}\big[L(U, F, \bar{c}, z_0) \mid E, x_0\big] + \lambda_2 \sqrt{R_2(\hat{P})}$, (SOC2) such that the solution of (SOC2), $U_2^*$, has similar performance to that of (SOC1), $U_1^*$, i.e., $L(U_1^*, P, c, x_0) \approx L(U_2^*, P, c, x_0)$. In (SOC2), $z_0$ is the initial latent state sampled from the encoder $E(\cdot \mid x_0)$; $\bar{c} : \mathcal{Z} \times \mathcal{U} \to \mathbb{R}_{\ge 0}$ is the latent cost function defined as $\bar{c}_t(z_t, u_t) = \int c_t(x_t, u_t)\, dD(x_t \mid z_t)$; $R_2(\hat{P})$ is a regularizer over the mapping $\hat{P}$; and $\lambda_2$ is the corresponding regularization parameter. We will define $R_2$ and $\lambda_2$ more precisely in Section 3. Note that the expectation in (SOC2) is over the randomness generated by the (stochastic) encoder $E$. (Figure 1 depicts the evolution paths of the system: (a) in (SOC1) under dynamics $P$, (b) in (SOC2) under dynamics $F$, and (c) in (SOC3) under dynamics $\hat{P}$.)

3 PCC MODEL: A CONTROL PERSPECTIVE.

As described in Section 2, we are primarily interested in solving (SOC1), whose states evolve under dynamics $P$, as shown at the bottom row of Figure 1(a) in blue. However, because of the difficulties in solving (SOC1), mainly due to the high dimension of the observations $x$, LCE proposes to learn a mapping $\hat{P}$ by solving (SOC2), which consists of a loss function, whose states evolve under dynamics $F$ (after an initial transition by the encoder $E$), as depicted in Figure 1(b), and a regularization term. The role of the regularizer $R_2$ is to account for the performance gap between (SOC1) and the loss function of (SOC2), due to the discrepancy between their evolution paths, shown in Figures 1(a) (blue) and 1(b) (green). The goal of LCE is to learn $\hat{P}$ of the particular form $\hat{P} = D \circ F \circ E$, described in Section 2, such that the solution of (SOC2) has similar performance to that of (SOC1). In this section, we propose a principled way to select the regularizer $R_2$ to achieve this goal. Since the exact form of (SOC2) has a direct effect on learning $\hat{P}$, designing this regularization term, in turn, provides us with a recipe (loss function) to learn the latent (embedded) space $\mathcal{Z}$. In the following subsections, we show that this loss function consists of three terms that correspond to prediction, consistency, and curvature, the three ingredients of our PCC model. Note that these two SOCs evolve in two different spaces, one in the observation space $\mathcal{X}$ under dynamics $P$, and the other one in the latent space $\mathcal{Z}$ (after an initial transition from $\mathcal{X}$ to $\mathcal{Z}$) under dynamics $F$. Unlike $P$ and $F$ that only operate in a single space, $\mathcal{X}$ and $\mathcal{Z}$, respectively, $\hat{P}$ can govern the evolution of the system in both $\mathcal{X}$ and $\mathcal{Z}$ (see Figure 1(c)). Therefore, any recipe to learn $\hat{P}$, and as a result the latent space $\mathcal{Z}$, should have at least two terms, to guarantee that the evolution paths resulting from $\hat{P}$ in $\mathcal{X}$ and $\mathcal{Z}$ are consistent with those generated by $P$ and $F$.
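A minimal PyTorch sketch of the LCE triplet and the composed mapping $\hat{P} = D \circ F \circ E$; all architectures, dimensions, and the diagonal-Gaussian parameterization are illustrative assumptions, not the paper's model.

```python
import torch
import torch.nn as nn

nx, nz, nu = 64, 3, 2   # observation, latent, and action dimensions (toy choices)

class Gaussian(nn.Module):
    """A conditional diagonal Gaussian: returns one reparameterized sample."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.net = nn.Linear(d_in, 2 * d_out)   # outputs mean and log-variance
    def forward(self, v):
        mu, logvar = self.net(v).chunk(2, dim=-1)
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

E = Gaussian(nx, nz)        # encoder  E : X -> P(Z)
F = Gaussian(nz + nu, nz)   # dynamics F : Z x U -> P(Z)
D = Gaussian(nz, nx)        # decoder  D : Z -> P(X)

x, u = torch.randn(5, nx), torch.randn(5, nu)
z = E(x)
x_next = D(F(torch.cat([z, u], dim=-1)))   # one sample from P_hat(. | x, u)
print(x_next.shape)                        # torch.Size([5, 64])
```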
We derive these two terms, which are the prediction and consistency terms in the loss function used by our PCC model, in Sections 3.1 and 3.2, respectively. While these two terms are the result of learning $\hat{P}$ in general SOC problems, in Section 3.3 we concentrate on the particular class of LLC algorithms (e.g., iLQR (Li & Todorov, 2004)) used to solve the SOC, and add the third term, curvature, to our recipe for learning $\hat{P}$.
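Finally, a hedged sketch of what a curvature penalty can look like: it measures the error of the first-order Taylor expansion of the latent dynamics around sampled $(z, u)$ pairs. The perturbation scale and the stand-in dynamics are assumptions, and we omit the paper's amortized Jacobian estimator.

```python
import torch

def curvature_loss(F, z, u, eps_scale=0.1):
    """Penalize deviation of F from its first-order Taylor expansion at (z, u)."""
    dz, du = eps_scale * torch.randn_like(z), eps_scale * torch.randn_like(u)
    f0 = F(z, u)
    # Jacobian-vector product of F at (z, u) along the perturbation (dz, du)
    _, jvp = torch.autograd.functional.jvp(F, (z, u), (dz, du))
    taylor = f0 + jvp                      # first-order prediction at (z + dz, u + du)
    return ((F(z + dz, u + du) - taylor) ** 2).sum(dim=-1).mean()

F = lambda z, u: torch.tanh(z + 0.5 * u)   # stand-in latent dynamics
z, u = torch.randn(8, 3), torch.randn(8, 3)
print(curvature_loss(F, z, u).item())      # near 0 where F is locally close to linear
```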
This paper considers learning low-dimensional representations from high-dimensional observations for control purposes. The authors extend the E2C framework by introducing the new PCC-Loss function. This loss aims to capture prediction accuracy in the observation space, consistency between the latent and observation dynamics, and low curvature in the latent dynamics. The low-curvature term biases the latent dynamics towards models that can be better approximated as locally linear models. The authors provide theory (error bounds) to justify the proposed PCC-Loss function. A variational PCC is then developed to make the algorithm tractable. The proposed method is evaluated on 5 different simulated tasks and compared with the original E2C method and the RCE method.
SP:8ec794421e38087b73f7d7fb4fbf373728ea39c7
Prediction, Consistency, Curvature: Representation Learning for Locally-Linear Control
1 INTRODUCTION . Decomposing the problem of decision-making in an unknown environment into estimating dynamics followed by planning provides a powerful framework for building intelligent agents . This decomposition confers several notable benefits . First , it enables the handling of sparse-reward environments by leveraging the dense signal of dynamics prediction . Second , once a dynamics model is learned , it can be shared across multiple tasks within the same environment . While the merits of this decomposition have been demonstrated in low-dimensional environments ( Deisenroth & Rasmussen , 2011 ; Gal et al. , 2016 ) , scaling these methods to high-dimensional environments remains an open challenge . The recent advancements in generative models have enabled the successful dynamics estimation of high-dimensional decision processes ( Watter et al. , 2015 ; Ha & Schmidhuber , 2018 ; Kurutach et al. , 2018 ) . This procedure of learning dynamics can then be used in conjunction with a plethora of decision-making techniques , ranging from optimal control to reinforcement learning ( RL ) ( Watter et al. , 2015 ; Banijamali et al. , 2018 ; Finn et al. , 2016 ; Chua et al. , 2018 ; Ha & Schmidhuber , 2018 ; Kaiser et al. , 2019 ; Hafner et al. , 2018 ; Zhang et al. , 2019 ) . One particularly promising line of work in this area focuses on learning the dynamics and conducting control in a low-dimensional latent embedding of the observation space , where the embedding itself is learned through this process ( Watter et al. , 2015 ; Banijamali et al. , 2018 ; Hafner et al. , 2018 ; Zhang et al. , 2019 ) . We refer to this approach as learning controllable embedding ( LCE ) . There have been two main approaches to this problem : 1 ) to start by defining a cost function in the high-dimensional observation space and learn the embedding space , its dynamics , and reward function , by interacting with the environment in a RL fashion ( Hafner et al. , 2018 ; Zhang et al. , 2019 ) , and 2 ) to first learn the embedding space and its dynamics , and then define a cost function in this low-dimensional space and conduct the control ( Watter et al. , 2015 ; Banijamali et al. , 2018 ) . This can be later combined with RL for extra fine-tuning of the model and control . In this paper , we take the second approach and particularly focus on the important question of what desirable traits should the latent embedding exhibit for it to be amenable to a specific class of control/learning algorithms , namely the widely used class of locally-linear control ( LLC ) algorithms ? We argue from an optimal control standpoint that our latent space should exhibit three properties . The first is prediction : given the ability to encode to and decode from the latent space , we expect ∗Equal contribution . Correspondence to nirlevine @ google.com the process of encoding , transitioning via the latent dynamics , and then decoding , to adhere to the true observation dynamics . The second is consistency : given the ability to encode a observation trajectory sampled from the true environment , we expect the latent dynamics to be consistent with the encoded trajectory . Finally , curvature : in order to learn a latent space that is specifically amenable to LLC algorithms , we expect the ( learned ) latent dynamics to exhibit low curvature in order to minimize the approximation error of its first-order Taylor expansion employed by LLC algorithms . 
Our contributions are thus as follows: (1) We propose the Prediction, Consistency, and Curvature (PCC) framework for learning a latent space that is amenable to LLC algorithms, and show that the elements of PCC arise systematically from bounding the suboptimality of the solution of the LLC algorithm in the latent space. (2) We design a latent variable model that adheres to the PCC framework and derive a tractable variational bound for training the model. (3) To the best of our knowledge, our proposed curvature loss for the transition dynamics (in the latent space) is novel. We also propose a direct amortization of the Jacobian calculation in the curvature loss to help train with the curvature loss more efficiently. (4) Through extensive experimental comparison, we show that the PCC model consistently outperforms E2C (Watter et al., 2015) and RCE (Banijamali et al., 2018) on a number of control-from-images tasks, and verify via ablation the importance of regularizing the model to have consistency and low curvature.

2 PROBLEM FORMULATION. We are interested in controlling non-linear dynamical systems of the form $s_{t+1} = f_{\mathcal{S}}(s_t, u_t) + w$ over the horizon $T$. In this definition, $s_t \in \mathcal{S} \subseteq \mathbb{R}^{n_s}$ and $u_t \in \mathcal{U} \subseteq \mathbb{R}^{n_u}$ are the state and action of the system at time step $t \in \{0, \dots, T-1\}$, $w$ is the Gaussian system noise, and $f_{\mathcal{S}}$ is a smooth non-linear system dynamics. We are particularly interested in the scenario in which we only have access to the high-dimensional observation $x_t \in \mathcal{X} \subseteq \mathbb{R}^{n_x}$ of each state $s_t$ ($n_x \gg n_s$). This scenario has application in many real-world problems, such as visual servoing (Espiau et al., 1992), in which we only observe high-dimensional images of the environment and not its underlying state. We further assume that the high-dimensional observations $x$ have been selected such that, for any arbitrary control sequence $U = \{u_t\}_{t=0}^{T-1}$, the observation sequence $\{x_t\}_{t=0}^{T}$ is generated by a stationary Markov process, i.e., $x_{t+1} \sim P(\cdot \mid x_t, u_t)$ for all $t \in \{0, \dots, T-1\}$ (a method to ensure this Markovian assumption is to buffer observations (Mnih et al., 2013) for a number of time steps). A common approach to controlling the above dynamical system is to solve the following stochastic optimal control (SOC) problem (Shapiro et al., 2009), which minimizes the expected cumulative cost:

$$\min_U \; L(U, P, c, x_0) := \mathbb{E}\Big[ c_T(x_T) + \sum_{t=0}^{T-1} c_t(x_t, u_t) \;\Big|\; P, x_0 \Big], \qquad \text{(SOC1)}$$

where $c_t : \mathcal{X} \times \mathcal{U} \to \mathbb{R}_{\geq 0}$ is the immediate cost function at time $t$, $c_T \in \mathbb{R}_{\geq 0}$ is the terminal cost, and $x_0$ is the observation at the initial state $s_0$ (see Appendix B.3 for the extension to the closed-loop MDP problem). Note that all immediate costs are defined in the observation space $\mathcal{X}$, and are bounded by $c_{\max} > 0$ and Lipschitz with constant $c_{\mathrm{lip}} > 0$. For example, in visual servoing, (SOC1) can be formulated as a goal-tracking problem (Ebert et al., 2018), where we control the robot to reach the goal observation $x_{\text{goal}}$, and the objective is to compute a sequence of optimal open-loop actions $U$ that minimizes the cumulative tracking error $\mathbb{E}\big[\sum_t \|x_t - x_{\text{goal}}\|^2 \mid P, x_0\big]$. Since the observations $x$ are high-dimensional and the dynamics in the observation space $P(\cdot \mid x_t, u_t)$ is unknown, solving (SOC1) is often intractable.
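As a concrete illustration of the (SOC1) objective, the following sketch estimates the expected cumulative cost of an open-loop action sequence for the goal-tracking example by Monte-Carlo rollouts. The sampler `step` and all other names here are hypothetical placeholders, not part of the paper.

```python
import numpy as np

def goal_tracking_cost(x, x_goal):
    # Immediate cost c_t(x_t, u_t) for the visual-servoing example in (SOC1):
    # squared tracking error to the goal observation.
    return np.sum((x - x_goal) ** 2)

def rollout_cost(step, x0, U, x_goal, n_samples=32, rng=None):
    """Monte-Carlo estimate of L(U, P, c, x0) in (SOC1) for an open-loop
    action sequence U, where step(x, u, rng) samples x_{t+1} ~ P(.|x_t, u_t)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    total = 0.0
    for _ in range(n_samples):
        x, traj_cost = x0, 0.0
        for u in U:                                    # t = 0, ..., T-1
            traj_cost += goal_tracking_cost(x, x_goal)
            x = step(x, u, rng)                        # one Markov transition
        traj_cost += goal_tracking_cost(x, x_goal)     # terminal cost c_T(x_T)
        total += traj_cost
    return total / n_samples
```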
To address this issue, a class of algorithms has recently been developed that is based on learning a low-dimensional latent (embedding) space $\mathcal{Z} \subseteq \mathbb{R}^{n_z}$ ($n_z \ll n_x$) and latent state dynamics, and performing optimal control there. This class, which we refer to as learning controllable embedding (LCE) throughout the paper, includes recently developed algorithms such as E2C (Watter et al., 2015), RCE (Banijamali et al., 2018), and SOLAR (Zhang et al., 2019). The main idea behind the LCE approach is to learn a triplet: (i) an encoder $E : \mathcal{X} \to \mathbb{P}(\mathcal{Z})$; (ii) a dynamics in the latent space $F : \mathcal{Z} \times \mathcal{U} \to \mathbb{P}(\mathcal{Z})$; and (iii) a decoder $D : \mathcal{Z} \to \mathbb{P}(\mathcal{X})$. These in turn can be thought of as defining a (stochastic) mapping $\hat{P} : \mathcal{X} \times \mathcal{U} \to \mathbb{P}(\mathcal{X})$ of the form $\hat{P} = D \circ F \circ E$. We then wish to solve the SOC in the latent space $\mathcal{Z}$:

$$\min_{U, \hat{P}} \; \mathbb{E}\big[ L(U, F, \bar{c}, z_0) \mid E, x_0 \big] + \lambda_2 \sqrt{R_2(\hat{P})}, \qquad \text{(SOC2)}$$

such that the solution of (SOC2), $U_2^*$, has performance similar to that of (SOC1), $U_1^*$, i.e., $L(U_1^*, P, c, x_0) \approx L(U_2^*, P, c, x_0)$. In (SOC2), $z_0$ is the initial latent state sampled from the encoder $E(\cdot \mid x_0)$; $\bar{c} : \mathcal{Z} \times \mathcal{U} \to \mathbb{R}_{\geq 0}$ is the latent cost function, defined as $\bar{c}_t(z_t, u_t) = \int c_t(x_t, u_t) \, dD(x_t \mid z_t)$; $R_2(\hat{P})$ is a regularizer over the mapping $\hat{P}$; and $\lambda_2$ is the corresponding regularization parameter. We will define $R_2$ and $\lambda_2$ more precisely in Section 3. Note that the expectation in (SOC2) is over the randomness generated by the (stochastic) encoder $E$.

[Figure 1: evolution paths of the system (a) (blue) in (SOC1) under dynamics $P$, (b) (green) in (SOC2) under dynamics $F$, and (c) (red) in (SOC3) under dynamics $\hat{P}$.]

3 PCC MODEL: A CONTROL PERSPECTIVE. As described in Section 2, we are primarily interested in solving (SOC1), whose states evolve under dynamics $P$, as shown at the bottom row of Figure 1(a) in blue. However, because of the difficulties in solving (SOC1), mainly due to the high dimension of the observations $x$, LCE proposes to learn a mapping $\hat{P}$ by solving (SOC2), which consists of a loss function, whose states evolve under dynamics $F$ (after an initial transition by the encoder $E$), as depicted in Figure 1(b), and a regularization term. The role of the regularizer $R_2$ is to account for the performance gap between (SOC1) and the loss function of (SOC2), due to the discrepancy between their evolution paths, shown in Figures 1(a) (blue) and 1(b) (green). The goal of LCE is to learn $\hat{P}$ of the particular form $\hat{P} = D \circ F \circ E$, described in Section 2, such that the solution of (SOC2) has performance similar to that of (SOC1). In this section, we propose a principled way to select the regularizer $R_2$ to achieve this goal. Since the exact form of (SOC2) has a direct effect on learning $\hat{P}$, designing this regularization term in turn provides us with a recipe (loss function) to learn the latent (embedded) space $\mathcal{Z}$. In the following subsections, we show that this loss function consists of three terms that correspond to prediction, consistency, and curvature, the three ingredients of our PCC model. Note that these two SOCs evolve in two different spaces, one in the observation space $\mathcal{X}$ under dynamics $P$, and the other in the latent space $\mathcal{Z}$ (after an initial transition from $\mathcal{X}$ to $\mathcal{Z}$) under dynamics $F$. Unlike $P$ and $F$, which each operate in a single space, $\mathcal{X}$ and $\mathcal{Z}$ respectively, $\hat{P}$ can govern the evolution of the system in both $\mathcal{X}$ and $\mathcal{Z}$ (see Figure 1(c)). Therefore, any recipe to learn $\hat{P}$, and as a result the latent space $\mathcal{Z}$, should have at least two terms, to guarantee that the evolution paths resulting from $\hat{P}$ in $\mathcal{X}$ and $\mathcal{Z}$ are consistent with those generated by $P$ and $F$.
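The LCE triplet and the composed mapping $\hat{P} = D \circ F \circ E$ can be sketched as follows. The architectures and the diagonal-Gaussian parameterization are illustrative assumptions, not the ones used by E2C, RCE, or PCC.

```python
import torch
import torch.nn as nn

class LCETriplet(nn.Module):
    """Minimal sketch of the LCE triplet: encoder E, latent dynamics F,
    decoder D. Networks output (mean, log-variance) pairs of a diagonal
    Gaussian; this parameterization is an assumption for illustration."""
    def __init__(self, nx, nz, nu, hidden=256):
        super().__init__()
        self.E = nn.Sequential(nn.Linear(nx, hidden), nn.ReLU(),
                               nn.Linear(hidden, 2 * nz))
        self.F = nn.Sequential(nn.Linear(nz + nu, hidden), nn.ReLU(),
                               nn.Linear(hidden, 2 * nz))
        self.D = nn.Sequential(nn.Linear(nz, hidden), nn.ReLU(),
                               nn.Linear(hidden, nx))

    @staticmethod
    def _sample(stats):
        mu, log_var = stats.chunk(2, dim=-1)
        return mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)

    def p_hat(self, x, u):
        # The (stochastic) mapping P_hat = D o F o E from Section 2:
        z = self._sample(self.E(x))                            # z ~ E(.|x)
        z_next = self._sample(self.F(torch.cat([z, u], -1)))   # z' ~ F(.|z, u)
        return self.D(z_next)                                  # mean of D(.|z')
```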
We derive these two terms, which are the prediction and consistency terms in the loss function used by our PCC model, in Sections 3.1 and 3.2, respectively. While these two terms result from learning $\hat{P}$ in general SOC problems, in Section 3.3 we concentrate on the particular class of LLC algorithms (e.g., iLQR (Li & Todorov, 2004)) for solving the SOC, and add the third term, curvature, to our recipe for learning $\hat{P}$.
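To make the three PCC ingredients concrete, here is a simplified, deterministic sketch of the prediction, consistency, and curvature terms. The paper itself trains a latent variable model through a variational bound and amortizes the Jacobian computation in the curvature loss; treating E, F, D as deterministic maps here is an assumption for illustration only.

```python
import torch
import torch.nn.functional as F_nn

def pcc_losses(model, x, u, x_next, eps=0.1):
    """Sketch of the three PCC terms with deterministic E, F, D maps."""
    z = model.E(x)
    f = lambda zz, uu: model.F(torch.cat([zz, uu], -1))
    z_hat = f(z, u)

    # Prediction: encode -> latent transition -> decode should match x_next.
    pred = F_nn.mse_loss(model.D(z_hat), x_next)

    # Consistency: the latent transition should match the encoded next state.
    cons = F_nn.mse_loss(z_hat, model.E(x_next).detach())

    # Curvature: penalize the deviation of F from its first-order Taylor
    # expansion under small random perturbations (dz, du), so that the
    # locally-linear approximation used by LLC/iLQR stays accurate.
    dz, du = eps * torch.randn_like(z), eps * torch.randn_like(u)
    out = f(z + dz, u + du)
    # Directional derivative of F along (dz, du) via a JVP:
    _, jvp = torch.autograd.functional.jvp(f, (z, u), (dz, du),
                                           create_graph=True)
    curv = F_nn.mse_loss(out, z_hat + jvp)
    return pred, cons, curv
```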
This work proposes a regularization strategy for learning an optimal policy for a dynamic control problem in a low-dimensional latent domain. The work is based on the LCE approach, but with an in-depth analysis of how to choose/design the regularization for the $\hat{P}$ operator, which consists of an encoder, a decoder, and dynamics in the latent space. In particular, the authors argue that three principles (prediction, consistency, and curvature) should be taken into consideration when designing the regularizer of the learning cost function, so that the learned latent domain can serve better for the purpose of optimizing the long-term cost in the ambient domain.
SP:8ec794421e38087b73f7d7fb4fbf373728ea39c7
IsoNN: Isomorphic Neural Network for Graph Representation Learning and Classification
1 INTRODUCTION. The graph structure is attracting increasing interest because of its great representation power on various types of data. Researchers have done many analyses based on different types of graphs, such as social networks, brain networks and biological networks. In this paper, we focus on the binary graph classification problem, which has extensive applications in the real world. For example, one may wish to identify the social community categories according to the users' social interactions (Gao et al., 2017), distinguish the brain states of patients via their brain networks (Wang et al., 2017), and classify the functions of proteins in a biological interaction network (Hamilton et al., 2017). To address the graph classification task, many approaches have been proposed. One way to estimate the usefulness of subgraph features is feature evaluation criteria based on both labeled and unlabeled graphs (Kong & Yu, 2010). Other works have proposed to design a pattern exploration approach based on pattern co-occurrence and build the classification model (Jin et al., 2009), or to develop a boosting algorithm (Wu et al., 2014). However, such works based on BFS or DFS cannot avoid computing a large quantity of possible subgraphs, which causes high computational complexity even though the explicit subgraphs are maintained. Recently, deep learning models have also been widely used to solve graph-oriented problems. Although some deep models like MPNN (Gilmer et al., 2017) and GCN (Kipf & Welling, 2016) learn implicit structural features, the explicit structural information cannot be maintained for further research. Besides, most existing works on graph classification use the aggregation of the node features in graphs as the graph representation (Xu et al., 2018; Hamilton et al., 2017), but simply doing aggregation on the whole graph cannot capture the substructure precisely. While there are other models that can capture subgraphs, they often need more complex computation and mechanisms (Wang et al., 2017; Narayanan et al., 2017) or need additional node labels to find the subgraph structure (Gaüzere et al., 2012; Shervashidze et al., 2011). However, we should notice that when we deal with graph-structured data, different node orders will result in very different adjacency matrix representations for most existing deep models which take the adjacency matrices as input, if there is no other information on the graph. Therefore, compared with the original graph, the matrix naturally imposes a redundant constraint on the graph node order. Such a node order is usually unnecessary and manually defined. The different graph matrix representations brought about by node-order differences may render the learning performance of the existing models extremely erratic and not robust. Formally, we summarize the challenges encountered in the graph classification problem as follows:
• Explicit useful subgraph extraction. Existing works have proposed many discriminative models to discover useful subgraphs for graph classification, and most of them require manual effort. Nevertheless, how to select the contributing subgraphs automatically, without any additional manual involvement, is a challenging problem.
• Graph representation learning. Representing graphs in the vector space is an important task since it facilitates the storage, parallelism and the usage of machine learning models for graph data.
Extensive works have been done on node representations (Grover & Leskovec, 2016; Lin et al., 2015; Lai et al., 2017; Hamilton et al., 2017), whereas learning the representation of the whole graph with clear interpretability is still an open problem requiring more exploration.
• Node-order elimination for subgraphs. Nodes in graphs are orderless, whereas the matrix representations of graphs cast an unnecessary order on the nodes, which also renders the features extracted with existing learning models, e.g., CNN, useless for graphs. For subgraphs, this problem also exists. Thus, how to break such a node-order constraint for subgraphs is challenging.
• Efficient matching for large subgraphs. To break the node order, we try all possible node permutations to find the best permutation for a subgraph. Clearly, trying all possible permutations is a combinatorial explosion problem, which is extremely time-consuming for large subgraph templates. This shows that how to accelerate the proposed model for large subgraphs also needs to be solved.
In this paper, we propose a novel model, namely the Isomorphic Neural Network (ISONN), and its variants, to address the aforementioned challenges in the graph representation learning and classification problem. ISONN is composed of two components: the graph isomorphic feature extraction component and the classification component, aiming at learning isomorphic features and classifying graph instances, respectively. In the graph isomorphic feature extraction component, ISONN automatically learns a group of subgraph templates of useful patterns from the input graph. ISONN makes use of a set of permutation matrices, which act as the node isomorphism mappings between the templates and the input graph. With the potential isomorphic features learned by all the permutation matrices and the templates, ISONN adopts one min-pooling layer to find the best node permutation for each template and one softmax layer to normalize and fuse all subgraph features learned by different kernels, respectively; a sketch of this feature extraction step is given below. The features learned by the different kernels are fused together and fed as the input to the classification component. ISONN further adopts three fully-connected layers as the classification component to project the graph instances to their labels. Moreover, to accelerate the proposed model when dealing with large subgraphs, we also propose two variants of ISONN to guarantee efficiency.
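A minimal sketch of the graph isomorphic feature extraction just described: one learnable k x k kernel (subgraph template) is compared against a k x k submatrix of the adjacency matrix under every node permutation, and min-pooling over permutations removes the node order. The squared Frobenius distance used here is an assumption about the exact comparison, and the brute-force k! enumeration corresponds to the base model before its efficient variants.

```python
import itertools
import numpy as np

def isomorphic_feature(A_sub, K):
    """One ISONN-style graph-isomorphic feature: min over all node
    permutations of the distance between the permuted template K and a
    k x k submatrix A_sub of the adjacency matrix (assumed distance:
    squared Frobenius norm)."""
    k = K.shape[0]
    best = np.inf
    for perm in itertools.permutations(range(k)):
        P = np.eye(k)[list(perm)]            # permutation matrix
        diff = P @ K @ P.T - A_sub           # template under this node mapping
        best = min(best, np.sum(diff ** 2))  # min-pooling over permutations
    return best
```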
2 RELATED WORK. Our work relates to subgraph mining, graph neural networks, network embedding, and graph classification. We discuss them briefly in the following. Subgraph Mining and Graph Kernel Methods: Mining subgraph features from graph data has been studied for many years. The aim is to extract useful subgraph features from a set of graphs by adopting some specific criteria. One classic unsupervised method (i.e., without label information) is gSpan (Yan & Han, 2002), which builds a lexicographic order among graphs and maps each graph to a unique minimum DFS code as its canonical label. GRAMI (Elseidy et al., 2014) only stores templates of frequent subgraphs and treats the frequency evaluation as a constraint satisfaction problem to find the minimal set. For the supervised setting (i.e., with label information), CORK utilizes labels to guide the feature selection, where the features are generated by gSpan (Thoma et al., 2009). Due to the mature development of the subgraph mining field, subgraph mining methods have also been adopted in the life sciences (Mrzic et al., 2018). Moreover, several parallel-computing-based methods (Qiao et al., 2018; Hill et al., 2012; Lin et al., 2014) have been proposed to reduce the time cost. On the other hand, graph kernel methods are also applied to discover subgraph structures (Kashima et al., 2003; Vishwanathan et al., 2010; Gaüzere et al., 2012; Shervashidze et al., 2011). Among them, most existing works focus on graphs with node labels, and the kernel methods only compute the similarity between pairwise graphs. Yet, in this paper, we handle graphs without node labels. Moreover, we can not only compute the similarity between pairwise graphs but also learn subgraph templates, which can be further analyzed. Graph Neural Network and Network Embedding: Graph neural networks (Monti et al., 2017; Atwood & Towsley, 2016; Masci et al., 2015; Kipf & Welling, 2016; Battaglia et al., 2018) have been studied in recent years because of the prosperity of deep learning. Traditional deep models cannot be directly applied to graphs due to their special data structure. The general graph neural model MoNet (Monti et al., 2017) employs CNN architectures on non-Euclidean domains such as graphs and manifolds. The GCN proposed in (Kipf & Welling, 2016) utilizes the normalized adjacency matrix to learn node features for node classification; (Bai et al., 2018) proposes a multiscale convolutional model for pairwise graph similarity with a set-matching-based graph similarity computation. However, these existing works based on graph neural networks all fail to investigate the node-orderless property of graph data and to maintain the explicit structural information. Another important topic related to this paper is network embedding (Bordes et al., 2013; Lin et al., 2015; Lai et al., 2017; Abu-El-Haija et al., 2018; Hamilton et al., 2017), which aims at learning the feature representation of each individual node in a network based on either the network structure or attribute information. Distinct from these network embedding works, the graph representation learning problem studied in this paper treats each graph as an individual instance and focuses on learning the representation of the whole graph instead. Graph Classification: Graph classification is an important problem with many practical applications. Data like social networks, chemical compounds and brain networks can be represented as graphs naturally, with applications such as community detection (Zhang et al., 2018), anti-cancer activity identification (Kong et al., 2013; Kong & Yu, 2010) and Alzheimer's patient diagnosis (Tong et al., 2017; 2015), respectively. Traditionally, researchers mine the subgraphs by DFS or BFS (Saigo et al., 2009; Kong et al., 2013) and use them as features. With the rapid development of deep learning (DL), many works are based on DL methods. GAM builds the model by RNN with a self-attention mechanism (Lee et al., 2018). DCNN extends CNN to general graph-structured data by introducing a 'diffusion-convolution' operation (Atwood & Towsley, 2016).

3 TERMINOLOGY AND PROBLEM DEFINITION. In this section, we define the notations and terminologies used in this paper and give the formulation of the graph classification problem. 3.1 NOTATIONS.
In the following sections, we use lower-case letters like $x$ to denote scalars, lower-case bold letters (e.g., $\mathbf{x}$) to represent vectors, and bold-face capital letters (e.g., $\mathbf{X}$) for matrices. Capital calligraphic letters are used to denote tensors or sets. We use $x_i$ to represent the $i$-th element in $\mathbf{x}$. Given a matrix $\mathbf{X}$, we use $\mathbf{X}(i, j)$ to denote the element in the $i$-th row and $j$-th column, and $\mathbf{X}(i, :)$ and $\mathbf{X}(:, j)$ to denote the $i$-th row vector and $j$-th column vector, respectively. Moreover, $\mathbf{x}^\top$ and $\mathbf{X}^\top$ denote the transposes of vector $\mathbf{x}$ and matrix $\mathbf{X}$, respectively. Besides, the F-norm of matrix $\mathbf{X}$ can be represented as $\|\mathbf{X}\|_F = \big(\sum_{i,j} |X_{i,j}|^2\big)^{1/2}$.
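A quick numerical check of this notation (mapping the 1-indexed math to 0-indexed numpy arrays):

```python
import numpy as np

X = np.array([[1., 2.], [3., 4.]])
row = X[0, :]        # X(1, :), the first row vector
col = X[:, 1]        # X(:, 2), the second column vector
Xt = X.T             # the transpose of X
fro = np.sqrt(np.sum(np.abs(X) ** 2))        # ||X||_F from the definition
assert np.isclose(fro, np.linalg.norm(X, 'fro'))
```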
This paper proposes a neural network architecture to classify graph structures. A graph is specified using its adjacency matrix, and the authors propose to extract features by identifying templates, implemented as small kernels applied to submatrices of the adjacency matrix. The main problem is how to handle isomorphism: there is no node order in a graph. The authors propose to test against all permutations of the kernel and choose the permutation with minimal activation. Thus, the network can learn isomorphic features of the graph. This idea is used for binary graph classification on a number of tasks.
SP:2656017dbf3c1e8b659857d3a44fdbb91e186237
IsoNN: Isomorphic Neural Network for Graph Representation Learning and Classification
This paper proposes a new neural network architecture for dealing with graphs that addresses the lack of order of the nodes. The first step, called the graph isomorphic layer, computes features invariant to the order of nodes by extracting subgraphs and considering all possible permutations of these subgraphs. There is no training involved here, as no parameter is learned. Indeed, the only learning part is in the so-called classification component, which is a (standard) fully connected layer. In my opinion, any classification algorithm could be used on the features extracted from the graphs.
SP:2656017dbf3c1e8b659857d3a44fdbb91e186237
Toward Understanding The Effect of Loss Function on The Performance of Knowledge Graph Embedding
1 INTRODUCTION. Knowledge is considered to be commonsense facts and other information accumulated from different sources. A Knowledge Graph (KG) is a collection of facts and is usually represented as a set of triples (h, r, t), where h, t are entities and r is a relation, e.g., (iphone, hyponym, smartphone). Entities and relations are nodes and edges in the graph, respectively. As KGs are inherently incomplete, predicting missing links is a fundamental task in knowledge graph analysis. Among the different approaches used for KG completion, KG Embedding (KGE) has recently received growing attention. KGE embeds entities and relations as low-dimensional vectors known as embeddings. To measure the degree of plausibility of a triple, a scoring function is defined over the embeddings. TransE, the Translation-based Embedding model (Bordes et al., 2013), is one of the most widely used KGE models. The original assumption of TransE is that h + r = t holds for every positive triple (h, r, t), where h, r, t ∈ R^d are the embedding vectors of the head (h), relation (r) and tail (t), respectively. TransE and its many variants, like TransH (Wang et al., 2014) and TransR (Lin et al., 2015b), underperform greatly compared to the current state-of-the-art models. That is reported to be due to the limitations of their scoring functions. For instance, (Wang et al., 2018) reports that TransE cannot encode a relation pattern which is neither reflexive nor irreflexive. In most of these works, the effect of the loss function is ignored, and the provided proofs are based on assumptions that are not fulfilled by the associated loss functions. For instance, (Sun et al., 2019) proves that TransE is incapable of encoding symmetric relations. For this to hold, the loss function must enforce the distance ‖h + r − t‖ to zero, but this is never fulfilled (or even approximated) by the employed loss function. Similarly, (Wang et al., 2018) reports that TransE cannot encode a relation pattern which is neither reflexive nor irreflexive, and (Wang et al., 2014) adds that TransE cannot properly encode reflexive, one-to-many, many-to-one and many-to-many relations. However, as mentioned earlier, such reported limitations are not accurate, and the problem is not fully investigated, because the effect of the loss function is ignored. In this regard, although TransH, TransR and TransD (Wang et al., 2014; Lin et al., 2015b; Ji et al., 2015) addressed the reported problems of TransE with one-to-many, many-to-one, many-to-many and reflexive relations, they were misled by the assumption (enforcing ‖h + r − t‖ to be zero) that was not fulfilled by the employed loss function. Considering the same assumption, (Kazemi & Poole, 2018) investigated three additional limitations of the TransE, FTransE (Feng et al., 2016), STransE (Nguyen et al., 2016), TransH and TransR models: (i) if the models encode a reflexive relation r, they automatically encode symmetric; (ii) if the models encode a reflexive relation r, they automatically encode transitive; and (iii) if entity e1 has relation r with every entity in ∆ ⊆ E and entity e2 has relation r with one of the entities in ∆, then e2 must have the relation r with every entity in ∆. Assuming that the loss function enforces the norm to be zero, the aforementioned works have investigated these limitations by focusing on the capability of the scoring functions in encoding relation patterns.
However, we prove that the selection of the loss function affects the boundary of the score function; consequently, the selection of the loss function significantly affects the limitations. Therefore, the above-mentioned theories about the limitations of translation-based embedding models in encoding relation patterns are inaccurate. We pose new theories about the limitations of TransX (X = H, D, R, etc.) models that take the loss functions into account. To the best of our knowledge, this is the first time that the effect of the loss function has been investigated to prove theories about the limitations of translation-based models. In a nutshell, the key contributions of this paper are as follows. (i) We show that different loss functions enforce different upper bounds and lower bounds on the scores of positive and negative samples, respectively. This implies that the existing theories on the limitations of TransX models are inaccurate because the effect of the loss function is ignored. We introduce new theories accordingly and prove that the proper selection of loss functions mitigates the main limitations. (ii) We reformulate the existing loss functions and their optimization problems as a standard constrained optimization problem. This makes perfectly clear how each of the loss functions affects the boundary of the triple scores and, consequently, the ability to encode relation patterns. (iii) Using symmetric relation patterns, we obtain a proper upper bound on the scores of positive triples to enable encoding of symmetric patterns. (iv) We additionally investigate the theoretical capability of the translation-based embedding model when translation is applied in the complex space (TransComplEx). We show that TransComplEx is a more powerful embedding model with fewer theoretical limitations in encoding different relation patterns, such as symmetric, while being efficient in memory and time.
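The contrast in contribution (i) can be made concrete: a margin ranking loss only separates positive from negative scores, so it never forces ‖h + r − t‖ towards zero, while a limit-based loss imposes explicit score bounds. The sketch below is illustrative only; the exact loss formulations and bound values in the paper may differ.

```python
import torch
import torch.nn.functional as F

def transE_score(h, r, t):
    return torch.norm(h + r - t, p=2, dim=-1)   # f_r(h, t) = ||h + r - t||

def margin_ranking_loss(pos, neg, margin=1.0):
    # Only the *gap* between positive and negative scores matters here:
    # positive scores are never pushed to zero, so the assumption behind
    # the earlier limitation proofs is not actually enforced.
    return F.relu(pos - neg + margin).mean()

def bounded_loss(pos, neg, gamma1=0.5, gamma2=3.0):
    # A sketch of a limit-based loss: explicitly upper-bound positive
    # scores by gamma1 and lower-bound negative scores by gamma2.
    return (F.relu(pos - gamma1) + F.relu(gamma2 - neg)).mean()
```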
2 RELATED WORKS. Most previous works have investigated the capability of the translation-based class of embedding models considering solely the formulation of the score function. Accordingly, in this section, we review the score functions of TransE and some of its variants together with their capabilities. Then, in the next section, the existing limitations of translation-based embedding models emphasized in recent works are reviewed. These limitations will be reinvestigated in light of the score and loss functions in Section 4. The score of TransE (Bordes et al., 2013) is defined as $f_r(h, t) = \|h + r - t\|$. TransH (Wang et al., 2014) projects each entity $e$ to the relation space via $e_\perp = e - w_r^\top e \, w_r$. The score function is defined as $f_r(h, t) = \|h_\perp + r - t_\perp\|$. TransH can encode reflexive, one-to-many, many-to-one and many-to-many relations. However, recent theories (Kazemi & Poole, 2018) prove that encoding reflexive results in encoding both symmetric and transitive, which is undesired. TransR (Lin et al., 2015b) projects each entity $e$ to the relation space using a matrix provided for each relation ($e_\perp = e M_r$, $M_r \in \mathbb{R}^{d_e \times d_r}$). TransR uses the same scoring function as TransH. TransD (Ji et al., 2015) provides two vectors for each individual entity and relation ($h, h_p, r, r_p, t, t_p$). Head and tail entities are projected using the matrices $M_{rh} = r_p h_p^\top + I^{m \times n}$ and $M_{rt} = r_p t_p^\top + I^{m \times n}$. The score function of TransD is similar to that of TransH. RotatE (Sun et al., 2019) rotates the head to the tail entity using the relation. RotatE embeds entities and relations in the complex space. With additional constraints on the norm of the entity vectors, the model degenerates to TransE. The scoring function of RotatE is $f_r(h, t) = \|h \circ r - t\|$, where $h, r, t \in \mathbb{C}^d$ and $\circ$ is the element-wise product. RotatE obtains state-of-the-art results using a very large embedding dimension (1000) and many negative samples (1000). TorusE (Ebisu & Ichise, 2018) fixes the regularization problem of TransE by applying translation on a compact Lie group. The model has several variants, including a mapping from the torus to the complex space. In this case, the model can be regarded as a very special case of RotatE (Sun et al., 2019), which applies rotation instead of translation in the target complex space. According to Sun et al. (2019), TorusE is not defined on the entire complex space and therefore has less representation capacity. TorusE also needs a very large embedding dimension (10000, as reported in Ebisu & Ichise (2018)), which is a limitation.

3 THE MAIN LIMITATIONS OF TRANSLATION-BASED EMBEDDING MODELS. We review six limitations of translation-based embedding models in encoding relation patterns (e.g., reflexive, symmetric) mentioned in the literature (Wang et al., 2014; Kazemi & Poole, 2018; Wang et al., 2018; Sun et al., 2019).
Limitation L1. TransE cannot encode reflexive relations when the relation vector is non-zero (Wang et al., 2014).
Limitation L2. TransE cannot encode a relation which is neither reflexive nor irreflexive. To see this, if TransE encodes a relation r which is neither reflexive nor irreflexive, we have $h_1 + r = h_1$ and $h_2 + r \neq h_2$, resulting in $r = 0$ and $r \neq 0$, which is a contradiction (Wang et al., 2018).
Limitation L3. TransE cannot properly encode symmetric relations when $r \neq 0$. To see this (Sun et al., 2019), if r is symmetric, then we have $h + r = t$ and $t + r = h$. Therefore $r = 0$, and so all entities appearing in the head or tail parts of the training triples will have the same embedding vectors.
The following limitations hold for TransE, FTransE, STransE, TransH and TransR (Feng et al., 2016; Nguyen et al., 2016; Kazemi & Poole, 2018):
Limitation L4. If r is reflexive on $\Delta \subseteq \mathcal{E}$, where $\mathcal{E}$ is the set of all entities in the KG, then r must also be symmetric.
Limitation L5. If r is reflexive on $\Delta \subseteq \mathcal{E}$, r must also be transitive.
Limitation L6. If entity $e_1$ has relation r with every entity in $\Delta \subseteq \mathcal{E}$ and entity $e_2$ has relation r with one of the entities in $\Delta$, then $e_2$ must have the relation r with every entity in $\Delta$.

4 OUR MODEL. TransE and its variants underperform compared to other embedding models due to the limitations we iterated in Section 3. In this section, we reinvestigate these limitations. We show that the corresponding theoretical proofs are inaccurate because the effect of the loss function is ignored. We then propose new theories and prove that each of the limitations of TransE is resolved by revising either the scoring function or the loss. In this regard, we consider several loss functions and their effects on the boundary of the TransE scoring function. For each of the loss functions, we pose theories corresponding to the limitations. We additionally investigate the limitations of TransE under each of the loss functions when translation is performed in the complex space, and show that with this new approach the aforementioned limitations are lifted.
Our new model, TransComplEx, with a proper selection of the loss function, addresses the above problems.
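For reference, a sketch of the complex-space scores: the RotatE score exactly as given above, and a plain translation in complex space in the spirit of TransComplEx. The paper's exact TransComplEx score is not reproduced in this section and may differ, e.g., by conjugating the tail embedding; the translation variant below is an illustrative assumption.

```python
import torch

def rotate_score(h, r, t):
    """RotatE: f_r(h, t) = ||h o r - t|| with h, r, t in C^d and o the
    element-wise product; RotatE additionally constrains |r_i| = 1 so
    that r acts as a rotation."""
    return torch.abs(h * r - t).norm(p=2, dim=-1)

def complex_translation_score(h, r, t):
    # Translation applied directly in complex space (TransComplEx-style
    # sketch; the paper's exact formulation may differ).
    return torch.abs(h + r - t).norm(p=2, dim=-1)

# Usage with complex tensors, e.g.:
# h, r, t = (torch.randn(8, 50, dtype=torch.cfloat) for _ in range(3))
```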
This paper lists several limitations of translation-based Knowledge Graph embedding methods such as TransE, which have been identified by prior works, and shows theoretically and empirically that all of these limitations can be addressed by altering the loss function and shifting to the complex domain. The authors propose four variants of the loss function which address the limitations, and propose a method, RPTransComplEx, which utilizes their observations to outperform several existing Knowledge Graph embedding methods. Overall, the proposed method is well motivated, and the experimental results are consistent with the theoretical analysis.
SP:86076eabb48ef1fe9d51b54945bf81ed44bcacd7
Toward Understanding The Effect of Loss Function on The Performance of Knowledge Graph Embedding
In this paper, the authors investigate the main limitations of TransE in the light of loss function. The authors claim that their contributions consist of two parts: 1) proving that the proper selection of loss functions is vital in KGE; 2) proposing a model called TransComplEx. The results show that the proper selection of the loss function can mitigate the limitations of TransX (X=H, D, R, etc) models.
SP:86076eabb48ef1fe9d51b54945bf81ed44bcacd7
Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack
1 Introduction. The discovery that neural network-based classifiers are vulnerable to adversarial examples, that is, small perturbations of the input able to modify the decision of the models, started a fast development of a variety of attack algorithms. The high effectiveness of adversarial attacks reveals the fragility of these networks, which questions their safe and reliable use in the real world, especially in safety-critical applications. Many defenses have been proposed to fix this issue (Gu & Rigazio, 2015; Zheng et al., 2016; Papernot et al., 2016; Huang et al., 2016; Bastani et al., 2016; Madry et al., 2018), but with limited success, as new, more powerful attacks have shown (Carlini & Wagner, 2017b; Athalye et al., 2018; Mosbach et al., 2018). In order to trust the decision of a model, it is necessary to evaluate its exact adversarial robustness. Although this is possible for ReLU networks (Katz et al., 2017; Tjeng et al., 2019), these techniques do not scale to commonly used large networks. Thus, robustness is evaluated by approximating the solution of the minimal adversarial perturbation problem through adversarial attacks. One can distinguish attacks along several axes: black-box attacks (Narodytska & Kasiviswanathan, 2016; Brendel et al., 2018; Su et al., 2019), where one is only allowed to query the classifier, versus white-box attacks, where one has full control over the network; the attack model used to create adversarial examples (typically some lp-norm, but others have become popular as well, e.g., Brown et al. (2017); Engstrom et al. (2017); Wong et al.); whether they aim at the minimal adversarial perturbation (Carlini & Wagner, 2017a; Chen et al., 2018; Croce et al., 2019) or rather at any perturbation below a threshold (Kurakin et al., 2017; Madry et al., 2018; Zheng et al., 2019); and whether they have lower (Moosavi-Dezfooli et al., 2016; Modas et al., 2019) or higher (Carlini & Wagner, 2017a; Croce et al., 2019) computational cost. Moreover, it is clear that, due to the non-convexity of the problem, there exists no universally best attack (apart from the exact methods), since this depends on runtime constraints, network architecture, dataset, etc. However, our goal is to have an attack which performs well under a broad spectrum of conditions with a minimal amount of hyperparameter tuning. In this paper we propose a new white-box attacking scheme which performs comparably to or better than established attacks and has the following features. First, it tries to produce adversarial samples with minimal distortion compared to the original point, measured wrt the lp-norms with p ∈ {1, 2, ∞}. Compared to the quite popular PGD attack of Madry et al. (2018), this has the clear advantage that our method does not need to be restarted for every threshold $\epsilon$ if one wants to evaluate the success rate of the attack with perturbations constrained to be in $\{\delta \in \mathbb{R}^d \mid \|\delta\|_p \leq \epsilon\}$. Thus it is particularly suitable to get a complete picture of the robustness of a classifier at low computational cost (see the sketch below). Second, it quickly achieves good quality in terms of average distortion or robust accuracy. At the same time, we show that increasing the number of restarts keeps improving the results and makes the attack competitive with the strongest available attacks. Third, although it comes with a few parameters, these mostly generalize across the datasets, architectures and norms considered, so that we have an almost off-the-shelf method. Most importantly, unlike PGD and other methods, there is no step-size parameter which potentially has to be carefully adapted to every new network.
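The first feature is what makes a minimal-distortion attack cheap to evaluate across thresholds: one run yields $\|\delta_{\min,p}\|_p$ per test point, from which the robust accuracy at every $\epsilon$ follows. A minimal sketch:

```python
import numpy as np

def robust_accuracy_curve(min_pert_norms, epsilons):
    """Given ||delta_min,p|| for each test point from a single run of a
    minimal-distortion attack (np.inf where no adversarial example was
    found), the robust accuracy at every threshold eps follows for free,
    unlike fixed-threshold attacks such as PGD, which must be rerun per
    threshold."""
    norms = np.asarray(min_pert_norms)
    return np.array([(norms > eps).mean() for eps in epsilons])

# e.g. robust_accuracy_curve(norms, np.linspace(0.0, 0.3, 31))
```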
Third, although it comes with a few parameters, these mostly generalize across the datasets, architectures, and norms considered, so that we have an almost off-the-shelf method. Most importantly, unlike PGD and other methods, there is no step size parameter which potentially has to be carefully adapted to every new network.

2 FAB: a Fast Adaptive Boundary Attack. We first introduce minimal adversarial perturbations, then we recall the definition and properties of the projection wrt the lp-norms of a point onto the intersection of a hyperplane and box constraints, as they are an essential part of our attack. Finally, we present our FAB-attack algorithm to generate minimally distorted adversarial examples.

2.1 Minimal adversarial examples. Let f : R^d → R^K be a classifier which assigns every input x ∈ R^d (with d the dimension of the input space) to one of K classes according to arg max_{r=1,...,K} f_r(x). In many scenarios the input of f has to satisfy a specific set of constraints C, e.g. images are represented as elements of [0, 1]^d. Then, given a point x ∈ R^d with true class c, we define the minimal adversarial perturbation for x wrt the lp-norm as

δ_{min,p} = arg min_{δ ∈ R^d} ‖δ‖_p, s.t. max_{l ≠ c} f_l(x + δ) ≥ f_c(x + δ), x + δ ∈ C. (1)

The optimization problem (1) is non-convex and NP-hard for non-trivial classifiers (Katz et al. (2017)) and, although for some classes of networks it can be formulated as a mixed-integer program (see Tjeng et al. (2019)), the computational cost of solving it is prohibitive for large, normally trained networks. Thus, δ_{min,p} is usually approximated by an attack algorithm, which can be seen as a heuristic to solve (1). We will see in the experiments that current attacks sometimes drastically overestimate ‖δ_{min,p}‖_p and thus the robustness of the networks.

2.2 Projection onto a hyperplane with box constraints. Let w ∈ R^d and b ∈ R be the normal vector and the offset defining the hyperplane π : ⟨w, x⟩ + b = 0. For x ∈ R^d, we denote by the box-constrained projection wrt the lp-norm of x onto π (the projection onto the intersection of the box C = {z ∈ R^d : l_i ≤ z_i ≤ u_i} and the hyperplane π) the following minimization problem:

z* = arg min_{z ∈ R^d} ‖z − x‖_p s.t. ⟨w, z⟩ + b = 0, l_i ≤ z_i ≤ u_i, i = 1, ..., d, (2)

where l_i, u_i ∈ R are lower and upper bounds on each component of z. For p ≥ 1 the optimization problem (2) is convex. Hein & Andriushchenko (2017) proved that for p ∈ {1, 2, ∞} the solution can be obtained in O(d log d) time, that is, the complexity of sorting a vector of d elements; the same holds for determining that the problem has no solution. Since this projection is part of our iterative scheme, we need to handle specifically the case of (2) being infeasible. In this case, defining ρ = sign(⟨w, x⟩ + b), we instead compute

z′ = arg min_{z ∈ R^d} ρ(⟨w, z⟩ + b) s.t. l_i ≤ z_i ≤ u_i, i = 1, ..., d, (3)

whose solution is given componentwise, for every i = 1, ..., d, by z′_i = l_i if ρw_i > 0, z′_i = u_i if ρw_i < 0, and z′_i = x_i if w_i = 0. Assuming that the point x satisfies the box constraints (as it will in our algorithm), this is equivalent to identifying the corner of the d-dimensional box defined by the componentwise constraints on z closest to the hyperplane π. Notice that if (2) is infeasible then the objective function of (3) stays positive and the points x and z′ are strictly contained in the same one of the two halfspaces divided by π.
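To make the infeasible case concrete, the componentwise solution of (3) amounts to picking the box corner determined by the signs of ρw_i. A minimal numpy sketch (the function name and array-based signature are ours, not from the paper):

import numpy as np

def closest_box_corner(x, w, b, lo, hi):
    # Fallback of Eq. (3): the hyperplane <w, z> + b = 0 does not
    # intersect the box, so return the box corner closest to it.
    # x, w, lo, hi are arrays of shape (d,); x is assumed to already
    # satisfy the box constraints, as in the algorithm.
    rho = np.sign(np.dot(w, x) + b)
    z = x.copy()
    z[rho * w > 0] = lo[rho * w > 0]
    z[rho * w < 0] = hi[rho * w < 0]
    return z  # components with w_i == 0 keep x_i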
Finally, we define the operator

proj_p : (x, π, C) ↦ z* if problem (2) is feasible, z′ otherwise, (4)

yielding the point which gets as close as possible to π without violating the box constraints.

2.3 FAB Attack. We now introduce our algorithm to produce minimally distorted adversarial examples, wrt any lp-norm for p ∈ {1, 2, ∞}, for a given point x_orig initially correctly classified by f as class c. The high-level idea is that we use the linearization of the classifier at the current iterate x^(i), compute the box-constrained projections of x^(i) and x_orig onto the approximated decision hyperplane, and take a convex combination of these projections depending on the distances of x^(i) and x_orig to the decision hyperplane, followed by an extrapolation step. We explain below the geometric motivation behind these steps. The attack closest in spirit is DeepFool (Moosavi-Dezfooli et al. (2016)), which is known to be very fast but suffers from low quality. DeepFool just tries to find the decision boundary quickly but has no incentive to provide a solution close to x_orig. Our scheme resolves this main problem and, together with the exact projection we use, leads to a principled way to track the decision boundary (the surface where the decision of f changes) close to x_orig. If f were a linear classifier, the closest point to x^(i) on the decision hyperplane could be found in closed form. Although neural networks are highly non-linear, ReLU networks (neural networks which use ReLU as activation function) are piecewise affine functions, and thus locally a linearization of the network is an exact description of the classifier. Let l ≠ c; then the decision boundary between classes l and c can be locally approximated, using a first-order Taylor expansion at x^(i), by the hyperplane

π_l(z) : f_l(x^(i)) − f_c(x^(i)) + ⟨∇f_l(x^(i)) − ∇f_c(x^(i)), z − x^(i)⟩ = 0. (5)

Moreover, the lp-distance d_p(π_l, x^(i)) of x^(i) to π_l is given by

d_p(π_l, x^(i)) = |f_l(x^(i)) − f_c(x^(i))| / ‖∇f_l(x^(i)) − ∇f_c(x^(i))‖_q, with 1/p + 1/q = 1. (6)

Note that if d_p(π_l, x^(i)) = 0 then x^(i) belongs to the true decision boundary. Moreover, if the local linear approximation of the network is correct, then the class s with the decision hyperplane closest to the point x^(i) can be computed as

s = arg min_{l ≠ c} |f_l(x^(i)) − f_c(x^(i))| / ‖∇f_l(x^(i)) − ∇f_c(x^(i))‖_q. (7)

Thus, given that the approximation holds in some large enough neighborhood, the projection proj_p(x^(i), π_s, C) of x^(i) onto π_s lies on the decision boundary (unless (2) is infeasible).

Biased gradient step: The iterative algorithm x^(i+1) = proj_p(x^(i), π_s, C) would be similar to DeepFool, except that our projection operator is exact whereas they project onto the hyperplane and then clip to [0, 1]^d. This scheme is not biased towards the original target point x_orig, and thus it typically goes further than necessary to find a point on the decision boundary, as the algorithm basically does not aim at the minimal adversarial perturbation. Thus we consider additionally proj_p(x_orig, π_s, C) and use instead the iterative step, with x^(0) = x_orig, defined as

x^(i+1) = (1 − α) · proj_p(x^(i), π_s, C) + α · proj_p(x_orig, π_s, C), (8)

which biases the step towards x_orig (see Figure 1).
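As an illustration of (6)-(7), the target hyperplane can be selected from the logits and per-class gradients at the current iterate; a sketch assuming the gradients are stacked as rows of a matrix (names are ours):

import numpy as np

def closest_hyperplane(logits, grads, c, q):
    # Eq. (7): pick the class s whose linearized decision hyperplane
    # is closest to the current iterate in the dual q-norm.
    # logits: (K,) values f_l(x); grads: (K, d) rows grad f_l(x);
    # c: true class; q: dual exponent of p, i.e. 1/p + 1/q = 1.
    diffs = np.abs(logits - logits[c])                          # |f_l - f_c|
    w = grads - grads[c]                                        # grad f_l - grad f_c
    dists = diffs / (np.linalg.norm(w, ord=q, axis=1) + 1e-12)  # Eq. (6)
    dists[c] = np.inf                                           # exclude l = c
    return int(np.argmin(dists))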
Note that this is a convex combination of two points on π_s and in C, and thus x^(i+1) also lies on π_s and is contained in C. As we wish for a scheme with a minimal number of parameters, we want an automatic selection of α based on the available geometric quantities. Let δ^(i) = proj_p(x^(i), π_s, C) − x^(i) and δ^(i)_orig = proj_p(x_orig, π_s, C) − x_orig. Note that ‖δ^(i)‖_p and ‖δ^(i)_orig‖_p are the distances of x^(i) and x_orig to π_s (inside C). We propose to use for the parameter α the relative magnitude of these two distances, that is

α = min{ ‖δ^(i)‖_p / ( ‖δ^(i)‖_p + ‖δ^(i)_orig‖_p ), α_max } ∈ [0, 1]. (9)

The motivation for doing so is that if x^(i) is close to the decision boundary, then we should stay close to this point (note that π_s is the approximation of f computed at x^(i) and thus it is valid in a small neighborhood of x^(i), whereas x_orig is farther away). On the other hand, we want to keep the bias towards x_orig in order not to move too far away from x_orig. This is why α depends on the distances of x^(i) and x_orig to π_s, but we limit it from above by α_max. Finally, we use a small extrapolation step since we noted empirically, similarly to Moosavi-Dezfooli et al. (2016), that this helps to cross the decision boundary faster and get an adversarial sample. This leads to the final scheme:

x^(i+1) = proj_C( (1 − α)(x^(i) + ηδ^(i)) + α(x_orig + ηδ^(i)_orig) ), (10)

where α is chosen as in (9), η ≥ 1, and proj_C is just the projection onto the box, which can be done by clipping. In Figure 1 we visualize the scheme: in black one can see the hyperplane π_s and the vectors δ^(i)_orig and δ^(i); in blue the step we would make going to the decision boundary with the DeepFool variant, while in red the actual step of our method. The green vector represents instead the bias towards the original point that we introduce. On the left of Figure 1 we use η = 1, while on the right we use overshooting η > 1.

Interpretation of proj_p(x_orig, π_s, C): The projection of the target point onto the intersection of π_s and C is defined as

arg min_{z ∈ R^d} ‖z − x_orig‖_p s.t. ⟨w, z⟩ + b = 0, l_i ≤ z_i ≤ u_i.

Note that replacing z by x^(i) + δ we can rewrite this as

arg min_{δ ∈ R^d} ‖x^(i) + δ − x_orig‖_p s.t. ⟨w, x^(i) + δ⟩ + b = 0, l_i ≤ (x^(i) + δ)_i ≤ u_i.

This can be interpreted as the minimization of the distance of the next iterate x^(i) + δ to the target point x_orig, such that x^(i) + δ lies on the intersection of the (approximate) decision hyperplane and the box C. This point of view on the projection proj_p(x_orig, π_s, C) again justifies using a convex combination of the two projections in our iterative scheme (10).

Backward step: The described scheme finds adversarial perturbations in a few iterations. However, we are interested in minimizing their norms. Thus, once we have a new point x^(i+1), we check whether it is assigned by f to a class different from c. In this case, we apply

x^(i+1) = (1 − β) x_orig + β x^(i+1), β ∈ (0, 1), (11)

that is, we go back towards x_orig on the segment [x^(i+1), x_orig], effectively restarting the algorithm at a point which is quite close to the decision boundary. In this way, due to the bias of the method towards x_orig, we successively find adversarial perturbations of smaller norm, meaning that the algorithm tracks the decision boundary while getting closer to x_orig.
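A minimal sketch of one iteration combining (9)-(11), assuming a projection routine implementing (2)/(4) is available (the paper takes it from Hein & Andriushchenko (2017)); the function name, the predict placeholder, and the choice p = 2 are ours:

import numpy as np

def fab_step(x_i, x_orig, proj_x, proj_orig, eta, alpha_max, beta,
             lo, hi, predict, c):
    # One FAB iteration: biased step (9)-(10) plus backward step (11).
    # proj_x = proj_p(x_i, pi_s, C), proj_orig = proj_p(x_orig, pi_s, C);
    # predict(x) returns the predicted class, c is the true class.
    d_i = proj_x - x_i                                   # delta^(i)
    d_orig = proj_orig - x_orig                          # delta^(i)_orig
    n_i, n_o = np.linalg.norm(d_i), np.linalg.norm(d_orig)   # here p = 2
    alpha = min(n_i / (n_i + n_o + 1e-12), alpha_max)        # Eq. (9)
    x_next = (1 - alpha) * (x_i + eta * d_i) + alpha * (x_orig + eta * d_orig)
    x_next = np.clip(x_next, lo, hi)                         # proj_C, Eq. (10)
    if predict(x_next) != c:                             # boundary crossed:
        x_next = (1 - beta) * x_orig + beta * x_next     # backward step (11)
    return x_next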
Final search: Our scheme finds points close to the decision boundary, but often they are slightly off, since the linear approximation is not exact and we apply the extrapolation step with η > 1. Thus, after finishing Niter iterations of our algorithmic scheme, we perform a last, fast step to further improve the quality of the adversarial examples. Let x_out be the closest point to x_orig classified differently from c, say as s ≠ c, found with the iterative scheme. It holds that f_s(x_out) − f_c(x_out) > 0 and f_s(x_orig) − f_c(x_orig) < 0. This means that, assuming f continuous, there exists a point x* on the segment [x_out, x_orig] such that f_s(x*) − f_c(x*) = 0 and ‖x* − x_orig‖_p < ‖x_out − x_orig‖_p. If f is linear,

x* = x_out − [ (f_s(x_out) − f_c(x_out)) / ( (f_s(x_out) − f_c(x_out)) − (f_s(x_orig) − f_c(x_orig)) ) ] (x_out − x_orig). (12)

Since f is typically non-linear, but close to linear, we compute iteratively for a few steps

x_temp = x_out − [ (f_s(x_out) − f_c(x_out)) / ( (f_s(x_out) − f_c(x_out)) − (f_s(x_orig) − f_c(x_orig)) ) ] (x_out − x_orig), (13)

each time replacing in (13) x_out with x_temp if f_s(x_temp) − f_c(x_temp) > 0, or x_orig with x_temp if instead f_s(x_temp) − f_c(x_temp) < 0. With this kind of modified binary search one can find a better adversarial sample at the cost of a few forward passes of the network.

Random restarts: So far all the steps are deterministic. To improve the results, we introduce the option of random restarts, that is, x^(0) is randomly sampled in the proximity of x_orig instead of being x_orig itself. Most attacks benefit from random restarts, e.g. Madry et al. (2018); Zheng et al. (2019), especially when dealing with gradient-masking defenses (Mosbach et al. (2018)), as they allow a wider exploration of the input space. We choose to sample from the lp-sphere centered at the original point with radius half the lp-norm of the current best adversarial perturbation (or a given threshold if no adversarial example has been found yet).

Computational cost: Our attack, summarized in Algorithm 1, consists of two main operations: the computation of f and its gradients, and solving the projection (2). We perform, for each iteration, a forward and a backward pass of the network in the gradient step and a forward pass in the backward step. The projection can be efficiently implemented to run in batches on the GPU, and its complexity depends only on the input dimension. Thus, except for shallow models, its cost is much smaller than the passes through the network. We can approximate the computational cost of our algorithm by the total number of calls of the classifier:

Niter × Nrestarts × (2 × forward passes + 1 × backward pass). (14)

One has to add the forward passes for the final search, fixed to three, which happens just once.
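The modified binary search of (13) is easy to state in code; a sketch assuming a margin(x) helper (a placeholder of ours) that returns f_s(x) − f_c(x) with one forward pass:

def final_search(x_out, x_orig, margin, n_steps=3):
    # Modified binary search of Eq. (13); keeps the invariant
    # margin(x_out) > 0 > margin(x_orig), so x_out stays adversarial.
    g_out, g_orig = margin(x_out), margin(x_orig)
    for _ in range(n_steps):
        x_tmp = x_out - g_out / (g_out - g_orig) * (x_out - x_orig)
        g_tmp = margin(x_tmp)              # one forward pass per step
        if g_tmp > 0:
            x_out, g_out = x_tmp, g_tmp    # still adversarial: move x_out
        else:
            x_orig, g_orig = x_tmp, g_tmp  # not adversarial: move x_orig
    return x_out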
The authors propose a new gradient-based method (FAB) for constructing adversarial perturbations for deep neural networks. At a high level, the method repeatedly estimates the decision boundary based on the linearization of the classifier at a given point and projects to the closest "misclassified" example based on that estimation (similar to DeepFool). The authors build on this idea, propose several improvements, and evaluate their attack empirically against a variety of models.
SP:3d3842a5e0816084c5a2406f1b0143d0215b9559
The authors extend DeepFool by adding extra steps and constraints to find points closer to the source image as the adversarial image. Both methods project onto (a linear approximation of) the decision boundary; DeepFool does an ad hoc clipping to keep the pixel values in (0, 1), but the newly proposed method respects the box constraints during the steps. Also, during the steps, they combine the projections of the last iterate and of the original image to keep the iterate closer to the original image. Moreover, at the end of the optimization they perform extra search steps to get closer to the original image. They also add random restarts: rather than starting from the original image, they randomly choose a point at distance half the norm of the current best perturbation.
SP:3d3842a5e0816084c5a2406f1b0143d0215b9559
INFERENCE, PREDICTION, AND ENTROPY RATE OF CONTINUOUS-TIME, DISCRETE-EVENT PROCESSES
The inference of models, prediction of future symbols, and entropy rate estimation of discrete-time, discrete-event processes is well-worn ground. However, many time series are better conceptualized as continuous-time, discrete-event processes. Here, we provide new methods for inferring models, predicting future symbols, and estimating the entropy rate of continuous-time, discrete-event processes. The methods rely on an extension of Bayesian structural inference that takes advantage of neural networks' universal approximation power. Based on experiments with simple synthetic data, these new methods seem to be competitive with state-of-the-art methods for prediction and entropy rate estimation as long as the correct model is inferred.

1 INTRODUCTION. Much scientific data is dynamic, meaning that we see not a static image of a system but its time evolution. The additional richness of dynamic data should allow us to better understand the system, but we may not know how to process the richer data in a way that will yield new insight into the system in question. For example, we have records of when earthquakes have occurred, but still lack the ability to predict earthquakes well or estimate their intrinsic randomness (Geller, 1997); we know which neurons have spiked when, but lack an understanding of the neural code (Rieke et al., 1999); and finally, we can observe organisms, but have difficulty modeling their behavior (Berman et al., 2016; Cavagna et al., 2014). Such examples are not only continuous-time, but also discrete-event, meaning that the observations belong to a finite set (e.g., a neuron spikes or is silent) and are not better described as a collection of real numbers. These disparate scientific problems are begging for a unified framework for inferring expressive continuous-time, discrete-event models and for using those models to make predictions and, potentially, estimate the intrinsic randomness of the system. In this paper, we present a step towards such a unified framework that takes advantage of: the inferential and predictive advantages of unifilarity, meaning that the hidden Markov model's underlying state (the so-called "causal state" (Shalizi & Crutchfield, 2001) or "predictive state representation" (Littman & Sutton, 2002)) can be uniquely identified from past data; and the universal approximation power of neural networks (Hornik, 1991). Indeed, one could view the proposed algorithm for model inference as the continuous-time extension of Bayesian structural inference (Strelioff & Crutchfield, 2014). We focus on time series that are discrete-event and inherently stochastic. In particular, we infer the most likely unifilar hidden semi-Markov model (uhsMm) given data using the Bayesian information criterion. This class of models is slightly more powerful than semi-Markov models, in which the future symbol depends only on the prior symbol, but for which the dwell time of the next symbol is drawn from a non-exponential distribution. With unifilar hidden semi-Markov models, the probability of a future symbol depends on arbitrarily long pasts of prior symbols, and the dwell-time distribution for that symbol is non-exponential. Beyond just model inference, we can use the inferred model and the closed-form expressions in Ref.
(Marzen & Crutchfield, 2017) to estimate the process' entropy rate, and we can use the inferred states of the uhsMm to predict future input via a k-nearest-neighbors approach. We compare the latter two algorithms to reasonable extensions of state-of-the-art algorithms. Our new algorithms appear competitive as long as model inference is in-class, meaning that the true model producing the data is equivalent to one of the models in our search. In Sec. 3, we introduce the reader to unifilar hidden semi-Markov models. In Sec. 4, we describe our new algorithms for model inference, entropy rate estimation, and time series prediction, and test our algorithms on synthetic data that is memoryful. And in Sec. 5, we discuss potential extensions and applications of this research.

2 RELATED WORK. There exist many methods for studying discrete-time processes. A classical technique is the autoregressive process, AR-k, in which the predicted symbol is a linear combination of previous symbols; a slight modification of this is the generalized linear model (GLM), in which the probability of a symbol is proportional to the exponential of a linear combination of previous symbols (Madsen, 2007). Previous workers have also used the Baum-Welch algorithm (Rabiner & Juang, 1986), Bayesian structural inference (Strelioff & Crutchfield, 2014), or a nonparametric extension of Bayesian structural inference (Pfau et al., 2010) to infer a hidden Markov model or a probability distribution over hidden Markov models of the observed process; if the most likely state of the hidden Markov model is correctly inferred, one can use the model's structure to predict the future symbol. More recently, recurrent neural networks and reservoir computers can be trained to recreate the output of any dynamical system, through simple linear or logistic regression for reservoir computers (Grigoryeva & Ortega, 2018) or backpropagation through time for recurrent neural networks (Werbos et al., 1990). When it comes to continuous-time, discrete-event predictors, far less has been done. Most continuous-time data is, in fact, discrete-time data with a high time resolution; as such, one can essentially sample continuous-time, discrete-event data at high resolution and use any of the previously mentioned methods for predicting discrete-time data. Alternatively, one can represent continuous-time, discrete-event data as a list of dwell times and symbols and feed that data into either a recurrent neural network or a feedforward neural network. We take a new approach: we infer continuous-time hidden Markov models (Marzen & Crutchfield, 2017) and predict using the model's internal states as useful predictive features.

3 BACKGROUND. We are given a sequence of symbols and durations of those symbols, ..., (x_i, τ_i), ..., (x_0, τ_0^+). This constitutes the data, D. For example, seismic time series are of this kind: magnitude and time between earthquakes. The last seen symbol x_0 has been seen for a duration τ_0^+. Had we observed the system for a longer amount of time, τ_0^+ might increase. The possible symbols {x_i}_i are assumed to belong to a finite set A, while the interevent intervals {τ_i}_i are assumed to belong to (0, ∞). We assume stationarity: the statistics of {(x_i, τ_i)}_i are unchanging in time. The above is a description of the observed time series.
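As a concrete, hypothetical illustration of this data format, a record D can be stored simply as a list of (symbol, dwell-time) pairs (values invented for illustration):

# Hypothetical record D: each entry is (x_i, tau_i); the final entry's
# dwell time is tau_0^+, i.e. the still-ongoing dwell of the last symbol.
D = [("A", 0.73), ("B", 1.42), ("A", 0.05), ("B", 2.10)]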
What now follows is a shortened description of unifilar hidden semi-Markov models, denoted M, that could be generating such a time series (Marzen & Crutchfield, 2017). The minimal such model that is consistent with the observations is the ε-Machine. Underlying a unifilar hidden semi-Markov model is a finite-state machine with states g, each equipped with a dwell-time distribution φ_g(τ), an emission probability p(x|g), and a transition function ε+(g, x) that specifies the next hidden state given the current hidden state g and the current emission symbol x. This model generates a time series as follows: a hidden state g is randomly chosen; a dwell time τ is chosen according to the dwell-time distribution φ_g(τ); an emission symbol is chosen according to the conditional probability p(x|g); and we then observe the chosen x for an amount of time τ. A new hidden state is determined via ε+(g, x), with the further restriction that the next emission be different from the previous emission (a property that makes this model unifilar), and the process repeats. See Fig. 1 for illustrations of a unifilar hidden semi-Markov model.

4 ALGORITHMS. We investigate three tasks: model inference; calculation of the differential entropy rate; and development of a predictor of future symbols. Our main claim is that restricting attention to a special type of discrete-event, continuous-time model, the unifilar hidden semi-Markov model, makes all three tasks easier.

4.1 INFERENCE OF UNIFILAR HIDDEN SEMI-MARKOV PROCESSES. The unifilar hidden semi-Markov models described earlier can be parameterized. Let M refer to a model, in this case the underlying topology of the finite-state machine and the neural networks defining the density of dwell times; let θ refer to the model's parameters, i.e. the emission probabilities and the parameters of the neural networks; and let D refer to the data, i.e. the list of emitted symbols and dwell times. Ideally, to choose a model, we would perform maximum a posteriori by calculating argmax_M Pr(M|D) and choose parameters of that model via maximum likelihood, argmax_θ Pr(D|θ, M). In the case of discrete-time unifilar hidden Markov models, Strelioff and Crutchfield (Strelioff & Crutchfield, 2014) described the Bayesian framework for inferring the best-fit model and parameters. More than that, Ref. (Strelioff & Crutchfield, 2014) calculated the posterior analytically, using the unifilarity property to ease the mathematical burden. Analytic calculations in continuous time may be possible, but we leave that for a future endeavor. We instead turn to a variety of approximations, still aided by the unifilarity of the inferred models. The main such approximation is our use of the Bayesian information criterion (BIC) (Bishop, 2006). Maximum a posteriori is performed via

BIC = max_θ log Pr(D|θ, M) − (k_M/2) log |D|, (1)
M* = argmax_M BIC, (2)

where k_M is the number of parameters θ. To choose a model, then, we must calculate not only the parameters θ that maximize the log likelihood, but the log likelihood itself. We make one further approximation for tractability involving the start state s_0, for which

Pr(D|θ, M) = Σ_{s_0} π(s_0|θ, M) Pr(D|s_0, θ, M). (3)

As the logarithm of a sum has no easy expression, we approximate

max_θ log Pr(D|θ, M) ≈ max_{s_0} max_θ log Pr(D|s_0, θ, M). (4)
Our strategy, then, is to choose the parameters θ that maximize max_{s_0} log Pr(D|s_0, θ, M) and to choose the model M that maximizes max_θ log Pr(D|θ, M) − (k_M/2) log |D|. This constitutes an inference of a model that can explain the observed data. What remains to be done, therefore, is the approximation of max_{s_0} max_θ log Pr(D|s_0, θ, M). The parameters θ of any given model include p(s′, x|s), the probability of emitting x when in state s and transitioning to state s′, and φ_s(t), the interevent interval distribution of state s. Using the unifilarity of the underlying model, the sequence of x's, when combined with the start state s_0, translates into a single possible sequence of hidden states s_i. As such, one can show that

log Pr(D|s_0, θ, M) = Σ_s Σ_j log φ_s(τ_j^(s)) + Σ_{s,x,s′} n(s′, x|s) log p(s′, x|s), (5)

where τ_j^(s) is any interevent interval produced when in state s. It is relatively easy to maximize analytically with respect to p(s′, x|s), including the constraint that Σ_{s′,x} p(s′, x|s) = 1 for any s, and find that

p*(s′, x|s) = n(s′, x|s) / n(s). (6)

Now we turn to the approximation of the dwell-time distributions φ_s(t). The dwell-time distribution can, in theory, be any normalized nonnegative function, so inference may seem impossible. However, artificial neural networks can, with enough nodes, represent any continuous function. We therefore represent φ_s(t) by a relatively shallow (here, three-layer) artificial neural network (ANN) in which nonnegativity and normalization are enforced as follows:
• the second-to-last layer's activation functions are ReLUs (max(0, x), and so with nonnegative output) and the weights to the last layer are constrained to be nonnegative;
• the output is the last layer's output divided by a numerical integration of the last layer's output.
The log likelihood Σ_j log φ_s(τ_j^(s)) determines the cost function for the neural network. The neural network can then be trained using typical stochastic optimization methods. (Here, we use Adam (Kingma & Ba, 2014).) The output of the neural network can successfully estimate the interevent interval density function, given enough samples, within the interval for which there is data. See Fig. 2. Outside this interval, however, the estimated density function is not guaranteed to vanish as t → ∞, and can even grow. Stated differently, the neural networks considered here are good interpolators, but can be bad extrapolators. As such, the density function estimated by the network is taken to be 0 outside the interval for which there is data. To the best of our knowledge, this is a new approach to density estimation, referred to as ANN here. A previous approach to density estimation using neural networks learned the cumulative distribution function (Magdon-Ismail & Atiya, 1999). Typical approaches to density estimation include k-nearest-neighbor estimation techniques and Parzen window estimates, both of which need careful tuning of hyperparameters (k or h) (Bishop, 2006). They are referred to here as kNN and Parzen. We compare the ANN, kNN, and Parzen approaches in inferring an interevent interval density function that we have chosen, arbitrarily, to be the mixture of inverse Gaussians shown in Fig. 2 (left). The k in k-nearest-neighbor estimation is chosen according to the criterion in Ref. (Fukunaga & Hostetler, 1973).
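The closed-form maximization in (6) is just normalized counting along the unique (unifilar) state sequence; a small sketch, assuming the state sequence has already been obtained by unifilar tracking from s_0 (helper name is ours):

from collections import Counter

def mle_emission_probs(states, symbols):
    # Eq. (6): p*(s', x | s) = n(s', x | s) / n(s), where states is the
    # hidden-state sequence uniquely determined by the symbols and s_0.
    n_joint, n_state = Counter(), Counter()
    for s, x, s_next in zip(states[:-1], symbols, states[1:]):
        n_joint[(s, x, s_next)] += 1
        n_state[s] += 1
    return {(s, x, sn): k / n_state[s] for (s, x, sn), k in n_joint.items()}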
The bandwidth h is chosen so as to maximize the pseudolikelihood (Marron et al., 1987). Note that, as shown in Fig. 2 (right), this is not a superior approach to density estimation in terms of minimization of mean-squared error, but it is parametric, so that BIC can be used. To test the new method for density estimation presented here, that is, training a properly normalized ANN, we generated a trajectory from the unifilar hidden semi-Markov model shown in Fig. 3 (left) and used BIC to infer the correct model. As BIC is a log likelihood minus a penalty for a larger number of parameters, a larger BIC suggests a higher posterior. With very little data, the two-state model shown in Fig. 3 is deemed most likely; but as the amount of data increases, the correct four-state model eventually takes precedence. See Fig. 3 (right). The six-state model was never deemed more likely than the two-state or four-state model. Note that although this methodology might be extended to nonunifilar hidden semi-Markov models, unifilarity allowed for the easily computable and unique identification of dwell times to states in Eq. 5.
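The model-selection experiment above reduces to comparing penalized log likelihoods; a sketch of the BIC comparison of (1)-(2), where fit_log_likelihood stands in for the maximization described in Sec. 4.1 (both names are placeholders of ours):

import numpy as np

def bic(max_log_lik, n_params, n_obs):
    # Eq. (1): penalize the maximized log likelihood by (k_M / 2) log |D|.
    return max_log_lik - 0.5 * n_params * np.log(n_obs)

# Hypothetical selection over candidate topologies, Eq. (2):
# best = max(models, key=lambda M: bic(fit_log_likelihood(M, D), M.n_params, len(D)))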
The authors present a model for time series which are represented as discrete events in continuous time, and describe methods for parameter inference, future event prediction, and entropy rate estimation for such processes. Their model builds on Bayesian structural inference, adding the temporal dimension in a rigorous way while solving several technical challenges along the way. The writing style is lucid, and the illustrations are helpful and of high quality.
SP:51a88b77450225e0f80f9fa25510fb4ea64463b2
INFERENCE, PREDICTION, AND ENTROPY RATE OF CONTINUOUS-TIME, DISCRETE-EVENT PROCESSES
The inference of models , prediction of future symbols , and entropy rate estimation of discrete-time , discrete-event processes is well-worn ground . However , many time series are better conceptualized as continuous-time , discrete-event processes . Here , we provide new methods for inferring models , predicting future symbols , and estimating the entropy rate of continuous-time , discrete-event processes . The methods rely on an extension of Bayesian structural inference that takes advantage of neural network ’ s universal approximation power . Based on experiments with simple synthetic data , these new methods seem to be competitive with state-ofthe-art methods for prediction and entropy rate estimation as long as the correct model is inferred . 1 INTRODUCTION . Much scientific data is dynamic , meaning that we see not a static image of a system but its time evolution . The additional richness of dynamic data should allow us to better understand the system , but we may not know how to process the richer data in a way that will yield new insight into the system in question . For example , we have records of when earthquakes have occurred , but still lack the ability to predict earthquakes well or estimate their intrinsic randomness ( Geller , 1997 ) ; we know which neurons have spiked when , but lack an understanding of the neural code ( Rieke et al. , 1999 ) ; and finally , we can observe organisms , but have difficulty modeling their behavior ( Berman et al. , 2016 ; Cavagna et al. , 2014 ) . Such examples are not only continuous-time , but also discreteevent , meaning that the observations belong to a finite set ( e.g , neuron spikes or is silent ) and are not better-described as a collection of real numbers . These disparate scientific problems are begging for a unified framework for inferring expressive continuous-time , discrete-event models and for using those models to make predictions and , potentially , estimate the intrinsic randomness of the system . In this paper , we present a step towards such a unified framework that takes advantage of : the inference and the predictive advantages of unifilarity– meaning that the hidden Markov model ’ s underlying state ( the so-called “ causal state ” ( Shalizi & Crutchfield , 2001 ) or “ predictive state representation ” ( Littman & Sutton , 2002 ) ) can be uniquely identified from the past data ; and the universal approximation power of neural networks ( Hornik , 1991 ) . Indeed , one could view the proposed algorithm for model inference as the continuous-time extension of Bayesian structural inference Strelioff & Crutchfield ( 2014 ) . We focus on time series that are discrete-event and inherently stochastic . In particular , we infer the most likely unifilar hidden semi-Markov model ( uhsMm ) given data using the Bayesian information criterion . This class of models is slightly more powerful than semi-Markov models , in which the future symbol depends only on the prior symbol , but for which the dwell time of the next symbol is drawn from a non-exponential distribution . With unifilar hidden semi-Markov models , the probability of a future symbol depends on arbitrarily long pasts of prior symbols , and the dwell time distribution for that symbol is non-exponential . Beyond just model inference , we can use the inferred model and the closed-form expressions in Ref . 
( Marzen & Crutchfield , 2017 ) to estimate the process ’ entropy rate , and we can use the inferred states of the uhsMm to predict future input via a k-nearest neighbors approach . We compare the latter two algorithms to reasonable extensions of state-of-the-art algorithms . Our new algorithms appear competitive as long as model inference is in-class , meaning that the true model producing the data is equivalent to one of the models in our search . In Sec . 3 , we introduce the reader to unifilar hidden semi-Markov models . In Sec . 4 , we describe our new algorithms for model inference , entropy rate estimation , and time series prediction and test our algorithms on synthetic data that is memoryful . And in Sec . 5 , we discuss potential extensions and applications of this research . 2 RELATED WORK . There exist many methods for studying discrete-time processes . A classical technique is the autoregressive process , AR-k , in which the predicted symbol is a linear combination of previous symbols ; a slight modification on this is the generalized linear model ( GLM ) , in which the probability of a symbol is proportional to the exponential of a linear combination of previous symbols ( Madsen , 2007 ) . Previous workers have also used the Baum-Welch algorithm ( Rabiner & Juang , 1986 ) , Bayesian structural inference ( Strelioff & Crutchfield , 2014 ) , or a nonparametric extension of Bayesian structural inference ( Pfau et al. , 2010 ) to infer a hidden Markov model or probability distribution over hidden Markov models of the observed process ; if the most likely state of the hidden Markov model is correctly inferred , one can use the model ’ s structure to predict the future symbol . More recently , recurrent neural networks and reservoir computers can be trained to recreate the output of any dynamical system through simple linear or logistic regression for reservoir computers ( Grigoryeva & Ortega , 2018 ) or backpropagation through time for recurrent neural networks ( Werbos et al. , 1990 ) . When it comes to continuous-time , discrete-event predictors , far less has been done . Most continuous-time data is , in fact , discrete-time data with a high time resolution ; as such , one can essentially sample continuous-time , discrete-event data at high resolution and use any of the previously mentioned methods for predicting discrete-time data . Alternatively , one can represent continuoustime , discrete-event data as a list of dwell times and symbols and feed that data into either a recurrent neural network or feedforward neural network . We take a new approach : we infer continuous-time hidden Markov models ( Marzen & Crutchfield , 2017 ) and predict using the model ’ s internal state as useful predictive features . 3 BACKGROUND . We are given a sequence of symbols and durations of those symbols , . . . , ( xi , τi ) , . . . , ( x0 , τ+0 ) . This constitutes the data , D. For example , seismic time series are of this kind : magnitude and time between earthquakes . The last seen symbol x0 has been seen for a duration τ+0 . Had we observed the system for a longer amount of time , τ+0 may increase . The possible symbols { xi } i are assumed to belong to a finite set A , while the interevent intervals { τi } i are assumed to belong to ( 0 , ∞ ) . We assume stationarity– that the statistics of { ( xi , τi ) } i are unchanging in time . Above is the description of the observed time series . 
What now follows is a shortened description of unifilar hidden semi-Markov models , notated M , that could be generating such a time series ( Marzen & Crutchfield , 2017 ) . The minimal such model that is consistent with the observations is the -Machine . Underlying a unifilar hidden semi-Markov model is a finite-state machine with states g , each equipped with a dwell-time distribution φg ( τ ) , an emission probability p ( x|g ) , and a function + ( g , x ) that specifies the next hidden state when given the current hidden state g and the current emission symbol x . This model generates a time series as follows : a hidden state g is randomly chosen ; a dwell time τ is chosen according to the dwell-time distribution φg ( τ ) ; an emission symbol is chosen according to the conditional probability p ( x|g ) ; and we then observe the chosen x for τ amount of time . A new hidden state is determined via + ( g , x ) , and we further restrict possible next emissions to be different than the previous emission– a property that makes this model unifilar– and the process repeats . See Fig . 1 for illustrations of a unifilar hidden semi-Markov model . 4 ALGORITHMS . We investigate three tasks : model inference ; calculation of the differential entropy rate ; and development of a predictor of future symbols . Our main claim is that restricted attention to a special type of discrete-event , continuous-time model called unifilar hidden semi-Markov models makes all three tasks easier . 4.1 INFERENCE OF UNIFILAR HIDDEN SEMI-MARKOV PROCESSES . The unifilar hidden semi-Markov models described earlier can be parameterized . Let M refer to a model– in this case , the underlying topology of the finite-state machine and neural networks defining the density of dwell times ; let θ refer to the model ’ s parameters , i.e . the emission probabilities and the parameters of the neural networks ; and let D refer to the data , i.e. , the list of emitted symbols and dwell times . Ideally , to choose a model , we would do maximum a posteriori by calculating argmaxM Pr ( M|D ) and choose parameters of that model via maximum likelihood , argmaxθ Pr ( D|θ , M ) . In the case of discrete-time unifilar hidden Markov models , Strelioff and Crutchfield ( Strelioff & Crutchfield , 2014 ) described the Bayesian framework for inferring the best-fit model and parameters . More than that , Ref . ( Strelioff & Crutchfield , 2014 ) calculated the posterior analytically , using the unifilarity property to ease the mathematical burden . Analytic calculations in continuous-time may be possible , but we leave that for a future endeavor . We instead turn to a variety of approximations , still aided by the unifilarity of the inferred models . The main such approximation is our use of the Bayesian inference criterion ( BIC ) Bishop ( 2006 ) . Maximum a posteriori is performed via BIC = max θ logPr ( D|θ , M ) − kM 2 log |D| ( 1 ) M∗ = argmax M BIC , ( 2 ) where kM is the number of parameters θ . To choose a model , then , we must calculate not only the parameters θ that maximize the log likelihood , but the log likelihood itself . We make one further approximation for tractability involving the start state s0 , for which Pr ( D|θ , M ) = ∑ s0 π ( s0|θ , M ) Pr ( D|s0 , θ , M ) . ( 3 ) As the logarithm of a sum has no easy expression , we approximate max θ logPr ( D|θ , M ) = max s0 max θ logPr ( D|s0 , θ , M ) . 
Our strategy, then, is to choose the parameters θ that maximize max_{s_0} log Pr(D|s_0, θ, M) and to choose the model M that maximizes max_θ log Pr(D|θ, M) − (k_M/2) log |D|. This constitutes an inference of a model that can explain the observed data. What remains to be done, therefore, is the approximation of max_{s_0} max_θ log Pr(D|s_0, θ, M). The parameters θ of any given model include p(s′, x|s), the probability of emitting x when in state s and transitioning to state s′, and φ_s(t), the interevent interval distribution of state s. Using the unifilarity of the underlying model, the sequence of x's, when combined with the start state s_0, translates into a single possible sequence of hidden states s_i. As such, one can show that

log Pr(D|s_0, θ, M) = Σ_s Σ_j log φ_s(τ_j^{(s)}) + Σ_{s,x,s′} n(s′, x|s) log p(s′, x|s),  (5)

where τ_j^{(s)} is any interevent interval produced when in state s. It is relatively easy to maximize analytically with respect to p(s′, x|s), including the constraint that Σ_{s′,x} p(s′, x|s) = 1 for any s, and find that

p*(s′, x|s) = n(s′, x|s) / n(s).  (6)

Now we turn to approximation of the dwell-time distributions, φ_s(t). The dwell-time distribution can, in theory, be any normalized nonnegative function; inference may seem impossible. However, artificial neural networks can, with enough nodes, represent any continuous function. We therefore represent φ_s(t) by a relatively shallow (here, three-layer) artificial neural network (ANN) in which nonnegativity and normalization are enforced as follows:
• the second-to-last layer's activation functions are ReLUs (max(0, x), and so with nonnegative output) and the weights to the last layer are constrained to be nonnegative;
• the output is the last layer's output divided by a numerical integration of the last layer's output.
The log likelihood Σ_j log φ_s(τ_j^{(s)}) determines the cost function for the neural network. The network can then be trained using typical stochastic optimization methods. (Here, we use Adam (Kingma & Ba, 2014).) The output of the neural network can successfully estimate the interevent interval density function, given enough samples, within the interval for which there is data. See Fig. 2. Outside this interval, however, the estimated density function is not guaranteed to vanish as t → ∞, and can even grow. Stated differently, the neural networks considered here are good interpolators, but can be bad extrapolators. As such, the density function estimated by the network is taken to be 0 outside the interval for which there is data. To the best of our knowledge, this is a new approach to density estimation, referred to as ANN here. A previous approach to density estimation using neural networks learned the cumulative distribution function (Magdon-Ismail & Atiya, 1999). Typical approaches to density estimation include k-nearest neighbor estimation techniques and Parzen window estimates, both of which need careful tuning of hyperparameters (k or h) (Bishop, 2006). They are referred to here as kNN and Parzen. We compare the ANN, kNN, and Parzen approaches in inferring an interevent interval density function that we have chosen, arbitrarily, to be the mixture of inverse Gaussians shown in Fig. 2 (left). The k in k-nearest neighbor estimation is chosen according to the criterion in Fukunaga & Hostetler (1973), and h so as to maximize the pseudolikelihood (Marron et al., 1987).
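A minimal PyTorch sketch of the ANN construction above, fit to toy gamma-distributed dwell times. The layer widths, integration grid, and training schedule are illustrative choices, and squaring the head weights is just one simple way to keep them nonnegative.

import torch

class DensityNet(torch.nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.body = torch.nn.Sequential(
            torch.nn.Linear(1, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, hidden), torch.nn.ReLU())
        self.head = torch.nn.Linear(hidden, 1, bias=False)

    def unnormalized(self, t):
        # ReLU features are nonnegative; squared head weights keep the
        # final linear combination nonnegative as well
        return self.body(t) @ (self.head.weight ** 2).T

    def log_density(self, t, grid):
        # normalize by numerical integration over a grid spanning the data
        z = torch.trapezoid(self.unnormalized(grid).squeeze(-1),
                            grid.squeeze(-1))
        return torch.log(self.unnormalized(t).squeeze(-1) + 1e-12) - torch.log(z)

taus = torch.distributions.Gamma(3.0, 2.0).sample((500, 1))  # toy dwell times
grid = torch.linspace(1e-3, taus.max().item(), 400).unsqueeze(-1)
net = DensityNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(300):
    opt.zero_grad()
    loss = -net.log_density(taus, grid).mean()  # negative log-likelihood cost
    loss.backward()
    opt.step()
# as in the text, the estimate is set to zero outside [min(tau), max(tau)]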
Note that, as shown in Fig. 2 (right), the ANN approach is not superior in terms of minimizing mean-squared error, but it is parametric, so that BIC can be used. To test the new method for density estimation presented here, that is, training a properly normalized ANN, we generated a trajectory from the unifilar hidden semi-Markov model shown in Fig. 3 (left) and used BIC to infer the correct model. As BIC is a log likelihood minus a penalty for a larger number of parameters, a larger BIC suggests a higher posterior. With very little data, the two-state model shown in Fig. 3 is deemed most likely; but as the amount of data increases, the correct four-state model eventually takes precedence. See Fig. 3 (right). The six-state model was never deemed more likely than a two-state or four-state model. Note that although this methodology might be extended to nonunifilar hidden semi-Markov models, the unifilarity allowed for easily computable and unique identification of dwell times to states in Eq. 5.
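To make the trajectory-generation step above concrete, here is a toy sampler for the generative process of Sec. 3. The two-state topology, gamma dwell-time densities, and emission probabilities are invented for illustration and are not the models of Fig. 3.

import numpy as np

rng = np.random.default_rng(0)
emit_probs = {"A": {"0": 0.7, "1": 0.3}, "B": {"0": 0.2, "1": 0.8}}
dwell = {"A": lambda: rng.gamma(2.0, 1.0), "B": lambda: rng.gamma(5.0, 0.5)}
# unifilarity: the next hidden state is a deterministic function of (g, x)
next_state = {("A", "0"): "B", ("A", "1"): "A",
              ("B", "0"): "A", ("B", "1"): "B"}

def sample_uhsmm(n_events, g="A"):
    events, prev_x = [], None
    for _ in range(n_events):
        # condition emissions on differing from the previous symbol
        items = [(x, p) for x, p in emit_probs[g].items() if x != prev_x]
        xs, ps = zip(*items)
        x = rng.choice(xs, p=np.array(ps) / sum(ps))
        events.append((x, dwell[g]()))       # one (symbol, dwell time) pair
        g, prev_x = next_state[(g, x)], x
    return events

print(sample_uhsmm(5))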
The paper focuses on the problems of modeling, predicting, and estimating the entropy rate of continuous-time, discrete-event processes. Specifically, the paper leverages unifilar HSMMs for model inference and then uses the inferred states to make future predictions. The authors also combine the inferred model with previously developed techniques to estimate the entropy rate. The authors describe the methods and provide evidence of the effectiveness of their method with experiments on a synthetic dataset.
SP:51a88b77450225e0f80f9fa25510fb4ea64463b2
Graph inference learning for semi-supervised classification
1 INTRODUCTION. A graph, which comprises a set of vertices/nodes together with connecting edges, is a formal structural representation of non-regular data. Due to its strong representation ability, it accommodates many potential applications, e.g., social networks (Orsini et al., 2017), world wide web data (Page et al., 1999), knowledge graphs (Xu et al., 2017), and protein-interaction networks (Borgwardt et al., 2007). Among these, semi-supervised node classification on graphs is one of the most interesting and popular topics. Given a graph in which some nodes are labeled, the aim of semi-supervised classification is to infer the categories of the remaining unlabeled nodes by using various priors of the graph. While numerous previous works (Brandes et al., 2008; Zhou et al., 2004; Zhu et al., 2003; Yang et al., 2016; Zhao et al., 2019) have been devoted to semi-supervised node classification based on explicit graph Laplacian regularization, it is hard to efficiently boost the performance of label prediction due to the strict assumption that connected nodes are likely to share the same label information. With the progress of deep learning on grid-shaped images/videos (He et al., 2016), several graph convolutional neural network (CNN) based methods, including spectral (Kipf & Welling, 2017) and spatial methods (Niepert et al., 2016; Pan et al., 2018; Yu et al., 2018), have been proposed to learn local convolution filters on graphs in order to extract more discriminative node representations. Although graph CNN based methods have achieved considerable capability of graph embedding by optimizing filters, they are limited to a conventional semi-supervised framework and lack an efficient inference mechanism on graphs. Especially in the case of few-shot learning, where only a small number of training nodes are labeled, such methods drastically compromise performance. For example, the Pubmed graph dataset (Sen et al., 2008) consists of 19,717 nodes and 44,338 edges, but only 0.3% of the nodes are labeled for the semi-supervised node classification task. The aforementioned works usually boil down to a general classification task, where the model is learnt on a training set and selected by checking a validation set. However, they do not put great effort into how to learn to infer from one node to another node on a topological graph, especially in the few-shot regime. In this paper, we propose a graph inference learning (GIL) framework to teach the model itself to adaptively infer from reference labeled nodes to those query unlabeled nodes, and finally boost the performance of semi-supervised node classification in the case of a small number of labeled samples. Given an input graph, GIL attempts to infer the unlabeled nodes from those observed nodes by building between-node relations. The between-node relations are structured as the integration of node attributes, connection paths, and graph topological structures. It means that the similarity between two nodes is decided from three aspects: the consistency of node attributes, the consistency of local topological structures, and the between-node path reachability, as shown in Fig. 1. (Fig. 1(b) depicts the process of graph inference learning: the local representation is extracted from the local subgraph, drawn as a dashed circle, and the red wavy line denotes the node reachability from one node to another.)
The local structures anchored around each node, as well as the attributes of the nodes therein, are jointly encoded with graph convolution (Defferrard et al., 2016) for the sake of high-level feature extraction. For the between-node path reachability, we adopt the random walk algorithm to obtain the characteristics from a labeled reference node v_i to a query unlabeled node v_j in a given graph. Based on the computed node representations and between-node reachability, the structure relations can be obtained by computing similarity scores/relationships from reference nodes to unlabeled nodes in a graph. Inspired by the recent meta-learning strategy (Finn et al., 2017), we learn to infer the structure relations from a training set to a validation set, which can benefit the generalization capability of the learned model. In other words, our proposed GIL attempts to learn some transferable knowledge underlying the structure relations from training samples to validation samples, such that the learned structure relations can be better self-adapted to the new testing stage. We summarize the main contributions of this work as follows:
• We propose a novel graph inference learning framework by building structure relations to infer unknown node labels from those labeled nodes in an end-to-end way. The structure relations are well defined by jointly considering node attributes, between-node paths, and graph topological structures.
• To make the inference model generalize better to test nodes, we introduce a meta-learning procedure to optimize the structure relations, which, to the best of our knowledge, is the first such attempt for graph node classification.
• Comprehensive evaluations on three citation network datasets (including Cora, Citeseer, and Pubmed) and one knowledge graph dataset (i.e., NELL) demonstrate the superiority of our proposed GIL in contrast with other state-of-the-art methods on the semi-supervised classification task.
2 RELATED WORK. Graph CNNs: With the rapid development of deep learning methods, various graph convolutional neural networks (Kashima et al., 2003; Morris et al., 2017; Shervashidze et al., 2009; Yanardag & Vishwanathan, 2015; Jiang et al., 2019; Zhang et al., 2020) have been exploited to analyze irregular graph-structured data. To better extend general convolutional neural networks to graph domains, two broad strategies have been proposed, including spectral and spatial convolution methods. Specifically, spectral filtering methods (Henaff et al., 2015; Kipf & Welling, 2017) develop convolution-like operators in the spectral domain, and then perform a series of spectral filters by decomposing the graph Laplacian. Unfortunately, the spectral-based approaches often lead to high computational complexity due to the operation of eigenvalue decomposition, especially for a large number of graph nodes. To alleviate this computational burden, local spectral filtering methods (Defferrard et al., 2016) were then proposed, parameterizing the frequency responses as a Chebyshev polynomial approximation. Another type of graph CNN, namely spatial methods (Li et al., 2016; Niepert et al., 2016), performs the filtering operation by defining the spatial structures of adjacent vertices. Various approaches can be employed to aggregate or sort neighboring vertices, such as diffusion CNNs (Atwood & Towsley, 2016), GraphSAGE (Hamilton et al., 2017), PSCN (Niepert et al., 2016), and NgramCNN (Luo et al., 2017).
From the perspective of data distribution, the recently proposed Gaussian-induced convolution model (Jiang et al., 2019) disentangles the aggregation process by encoding adjacent regions with a Gaussian mixture model. Semi-supervised node classification: Among various graph-related applications, semi-supervised node classification has gained increasing attention recently, and various approaches have been proposed to deal with this problem, including explicit graph Laplacian regularization and graph-embedding approaches. Classic algorithms with graph Laplacian regularization include the label propagation method using Gaussian random fields (Zhu et al., 2003), the regularization framework relying on local/global consistency (Zhou et al., 2004), and the random-walk-based sampling algorithm for acquiring context information (Yang et al., 2016). To further address scalable semi-supervised learning issues (Liu et al., 2012), the Anchor Graph regularization approach (Liu et al., 2010) was proposed to scale linearly with the number of graph nodes, and was then applied to massive-scale graph datasets. Several graph convolution network methods (Abu-El-Haija et al., 2018; Du et al., 2017; Thekumparampil et al., 2018; Velickovic et al., 2018; Zhuang & Ma, 2018) have since been developed to obtain discriminative representations of input graphs. For example, Kipf & Welling (2017) proposed a scalable graph CNN model, which scales linearly in the number of graph edges and learns graph representations by encoding both local graph structures and node attributes. Graph attention networks (GAT) (Velickovic et al., 2018) compute hidden representations of each node by attending to its neighbors with a self-attention strategy. By jointly considering the local- and global-consistency information, dual graph convolutional networks (Zhuang & Ma, 2018) were presented to deal with semi-supervised node classification. The critical difference between our proposed GIL and previous semi-supervised node classification methods is that GIL adopts a graph inference strategy, defining structure relations on graphs and then leveraging a meta-optimization mechanism to learn an inference model (to the best of our knowledge, for the first time), whereas existing graph CNNs treat semi-supervised node classification as a general classification task. 3 THE PROPOSED MODEL. 3.1 PROBLEM DEFINITION. Formally, we denote an undirected/directed graph as G = {V, E, X, Y}, where V = {v_i}_{i=1}^n is the finite set of n (or |V|) vertices, E ∈ R^{n×n} defines the adjacency relationships (i.e., edges) between vertices, representing the topology of G, X ∈ R^{n×d} records the explicit/implicit attributes/signals of the vertices, and Y ∈ R^n gives the vertex labels over C classes. The edge E_ij = E(v_i, v_j) = 0 if and only if vertices v_i, v_j are not connected; otherwise E_ij ≠ 0. The attribute matrix X is attached to the vertex set V; its i-th row X_{v_i} (or X_{i·}) represents the attribute of the i-th vertex v_i. That is, each v_i ∈ V carries a vector of d-dimensional signals. Associated with each node v_i ∈ V, there is a discrete label y_i ∈ {1, 2, ..., C}. We consider the task of semi-supervised node classification over graph data, where only a small number of vertices are labeled for model learning, i.e., |V_Label| ≪ |V|.
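As a minimal illustration of this setup, the following sketch builds a toy instance of G = {V, E, X, Y} with a small labeled subset; the sizes and the random graph are invented for the example.

import numpy as np

rng = np.random.default_rng(1)
n, d, C = 100, 16, 3                     # nodes, attribute dim, classes
E = np.triu((rng.random((n, n)) < 0.05).astype(float), 1)
E = E + E.T                              # symmetric adjacency, no self-loops
X = rng.normal(size=(n, d))              # node attributes
y = rng.integers(0, C, size=n)           # ground-truth labels
labeled = rng.choice(n, size=6, replace=False)  # |V_Label| << |V|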
Generally, we have three node sets: a training set V_tr, a validation set V_val, and a testing set V_te. In the standard protocol of the prior literature (Yang et al., 2016), the three node sets share the same label space. We follow, but do not restrict ourselves to, this protocol for our proposed method. Given the training and validation node sets, the aim is to predict the node labels of testing nodes by using node attributes as well as edge connections. A standard machine learning technique used in most existing methods (Kipf & Welling, 2017; Zhou et al., 2004) is to choose the optimal classifier (trained on a training set) after checking the performance on the validation set. However, these methods essentially ignore how to extract transferable knowledge from the known labeled nodes to unlabeled nodes, even though the graph structure itself implies node connectivity/reachability. Moreover, due to the scarcity of labeled samples, the performance of such a classifier is usually not satisfactory. To address these issues, we introduce a meta-learning mechanism (Finn et al., 2017; Ravi & Larochelle, 2017; Sung et al., 2017) to learn to infer node labels on graphs. Specifically, the graph structure, between-node path reachability, and node attributes are jointly modeled in the learning process. Our aim is to learn to infer from labeled nodes to unlabeled nodes, so that the learner performs better on a validation set and thus classifies a testing set more accurately.
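Returning to the between-node reachability ingredient described earlier, here is a minimal sketch. Row-normalizing the adjacency into a random-walk transition matrix and averaging its first few powers is one simple way to score how easily a query node is reached from a reference node; the paper's exact formulation may differ.

import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)    # toy 4-node adjacency
P = A / A.sum(axis=1, keepdims=True)         # one-step random-walk matrix

def reachability(P, k_max=3):
    R, Pk = np.zeros_like(P), np.eye(len(P))
    for _ in range(k_max):
        Pk = Pk @ P                          # k-step transition probabilities
        R += Pk
    return R / k_max

print(reachability(P)[0, 3])  # score from reference node 0 to query node 3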
This paper proposes to leverage between-node path information in the inference of conventional graph neural network methods. Specifically, the proposed method treats the nodes in the training set as a reference corpus and, when inferring the label of a specific node, makes this node "attend" to the reference corpus, where the "attention" weights are calculated based on the node representations and the between-node paths. (The paper uses different terms for this "attention".)
SP:06bbc70edab65f046adb46bc364c3b91f5880845
This paper presents a semi-supervised classification method for classifying unlabeled nodes in graph data. The authors propose a Graph Inference Learning (GIL) framework to learn node labels on the graph topology. The node labeling is based on three aspects: 1) a node representation that measures the similarity between the centralized subgraphs around the unlabeled node and the reference node; 2) a structure relation that measures the similarity between node attributes; and 3) the reachability between the unlabeled query node and the reference node.
SP:06bbc70edab65f046adb46bc364c3b91f5880845
Ridge Regression: Structure, Cross-Validation, and Sketching
We study the following three fundamental problems about ridge regression: (1) what is the structure of the estimator? (2) how to correctly use cross-validation to choose the regularization parameter? and (3) how to accelerate computation without losing too much accuracy? We consider the three problems in a unified large-data linear model. We give a precise representation of ridge regression as a covariance matrix-dependent linear combination of the true parameter and the noise. We study the bias of K-fold cross-validation for choosing the regularization parameter, and propose a simple bias-correction. We analyze the accuracy of primal and dual sketching for ridge regression, showing they are surprisingly accurate. Our results are illustrated by simulations and by analyzing empirical data. 1 INTRODUCTION. Ridge or ℓ2-regularized regression is a widely used method for prediction and estimation when the data dimension p is large compared to the number of datapoints n. This is especially so in problems with many good features, where sparsity assumptions may not be justified. A great deal is known about ridge regression. It is Bayes optimal for any quadratic loss in a Bayesian linear model where the parameters and noise are Gaussian. The asymptotic properties of ridge have been widely studied (e.g., Tulino & Verdú, 2004; Serdobolskii, 2007; Couillet & Debbah, 2011; Dicker, 2016; Dobriban & Wager, 2018, etc.). For choosing the regularization parameter in practice, cross-validation (CV) is widely used. In addition, there is an exact shortcut (e.g., Hastie et al., 2009, p. 243), which has good consistency properties (Hastie et al., 2019). There is also a lot of work on fast approximate algorithms for ridge, e.g., using sketching methods (e.g., El Alaoui & Mahoney, 2015; Chen et al., 2015; Wang et al., 2018; Chowdhury et al., 2018, among others). Here we seek to develop a deeper understanding of ridge regression, going beyond existing work in multiple aspects. We work in linear models under a popular asymptotic regime where n, p → ∞ at the same rate (Marchenko & Pastur, 1967; Serdobolskii, 2007; Couillet & Debbah, 2011; Yao et al., 2015). In this framework, we develop a fundamental representation for ridge regression, which shows that it is well approximated by a linear scaling of the true parameters perturbed by noise. The scaling matrices are functions of the population-level covariance of the features. As a consequence, we derive formulas for the training error and bias-variance tradeoff of ridge. Second, we study commonly used methods for choosing the regularization parameter. Inspired by the observation that CV has a bias for estimating the error rate (e.g., Hastie et al., 2009, p. 243), we study the bias of CV for selecting the regularization parameter. We discover a surprisingly simple form for the bias, and propose a downward scaling bias-correction procedure. Third, we study the accuracy loss of a class of randomized sketching algorithms for ridge regression. These algorithms approximate the sample covariance matrix by sketching or random projection. We show they can be surprisingly accurate, e.g., they can sometimes cut computational cost in half while only incurring 5% extra error. Even more, they can sometimes improve the MSE if a suboptimal regularization parameter is originally used. Our work leverages recent results from asymptotic random matrix theory and free probability theory.
One challenge in our analysis is to find the limit of the trace tr[(Σ_1 + Σ_2⁻¹)⁻¹]/p, where Σ_1 and Σ_2 are p×p independent sample covariance matrices of Gaussian random vectors. The calculation requires nontrivial aspects of freely additive convolutions (e.g., Voiculescu et al., 1992; Nica & Speicher, 2006). Our work is connected to prior works on ridge regression in high-dimensional statistics (Serdobolskii, 2007) and wireless communications (Tulino & Verdú, 2004; Couillet & Debbah, 2011). Among other related works, El Karoui & Kösters (2011) discuss the implications of the geometric sensitivity of random matrix theory for ridge regression, without considering our problems. El Karoui (2018) and Dicker (2016) study ridge regression estimators, but focus only on the risk for identity covariance. Hastie et al. (2019) study "ridgeless" regression, where the regularization parameter tends to zero. Sketching is an increasingly popular research topic; see Vempala (2005); Halko et al. (2011); Mahoney (2011); Woodruff (2014); Drineas & Mahoney (2017) and references therein. For sketched ridge regression, Zhang et al. (2013a;b) study the dual problem in a complementary finite-sample setting, and their results are hard to compare. Chen et al. (2015) propose an algorithm combining sparse embedding and the subsampled randomized Hadamard transform (SRHT), proving relative approximation bounds. Wang et al. (2017) study iterative sketching algorithms from an optimization point of view, for both the primal and the dual problems. Dobriban & Liu (2018) study sketching using asymptotic random matrix theory, but only for unregularized linear regression. Chowdhury et al. (2018) propose a data-dependent algorithm in light of the ridge leverage scores. Other related works include Sarlos (2006); Ailon & Chazelle (2006); Drineas et al. (2006; 2011); Dhillon et al. (2013); Ma et al. (2015); Raskutti & Mahoney (2016); Gonen et al. (2016); Thanei et al. (2017); Ahfock et al. (2017); Lopes et al. (2018); Huang (2018). The structure of the paper is as follows: We state our results on representation, risk, and bias-variance tradeoff in Section 2. We study the bias of cross-validation for choosing the regularization parameter in Section 3. We study the accuracy of randomized primal and dual sketching for both orthogonal and Gaussian sketches in Section 4. We provide proofs and additional simulations in the Appendix. Code reproducing the experiments in the paper is available at https://github.com/liusf15/RidgeRegression. 2 RIDGE REGRESSION. We work in the usual linear regression model Y = Xβ + ε, where each row x_i of X ∈ R^{n×p} is a datapoint in p dimensions, and so there are p features. The corresponding element y_i of Y ∈ R^n is its continuous response (or outcome). We assume mean-zero uncorrelated noise, so Eε = 0 and Cov[ε] = σ²I_n. We estimate the coefficient β ∈ R^p by ridge regression, solving the optimization problem

β̂ = argmin_{β∈R^p} (1/n)‖Y − Xβ‖₂² + λ‖β‖₂²,

where λ > 0 is a regularization parameter. The solution has the closed form

β̂ = (XᵀX/n + λI_p)⁻¹ XᵀY/n.  (1)

We work in a "big data" asymptotic limit, where both the dimension p and the sample size n tend to infinity, and their aspect ratio converges to a constant, p/n → γ ∈ (0, ∞). Our results can be interpreted for any n and p, using γ = p/n as an approximation.
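A numerical sketch of the closed form (1) on simulated data; the dimensions, signal, and regularization level below are toy choices.

import numpy as np

rng = np.random.default_rng(0)
n, p, lam, sigma = 300, 150, 0.5, 1.0
X = rng.normal(size=(n, p))
beta = rng.normal(size=p) / np.sqrt(p)     # toy signal with E||beta||^2 = 1
Y = X @ beta + sigma * rng.normal(size=n)

def ridge(X, Y, lam):
    n, p = X.shape
    # closed form (1): (X^T X / n + lam I_p)^{-1} X^T Y / n
    return np.linalg.solve(X.T @ X / n + lam * np.eye(p), X.T @ Y / n)

print(np.sum((ridge(X, Y, lam) - beta) ** 2))  # squared estimation error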
We recall that the empirical spectral distribution (ESD) of a p×p symmetric matrix Σ is the distribution (1/p) Σ_{i=1}^p δ_{λ_i}, where λ_i, i = 1, ..., p are the eigenvalues of Σ, and δ_x is the point mass at x. This is a standard notion in random matrix theory; see e.g. Marchenko & Pastur (1967); Tulino & Verdú (2004); Couillet & Debbah (2011); Yao et al. (2015). The ESD is a convenient tool to summarize all information obtainable from the eigenvalues of a matrix. For instance, the trace of Σ is proportional to the mean of the distribution, while the condition number is related to the range of the support. As is common, we will work in models where there is a sequence of covariance matrices Σ = Σ_p whose ESDs converge in distribution to a limiting probability distribution. The results become simpler, because they depend only on the limit. By extension, we say that the ESD of the n×p matrix X is the ESD of XᵀX/n. We will consider some very specific models for the data, assuming it is of the form X = UΣ^{1/2}, where U has iid entries of zero mean and unit variance. This means that the datapoints, i.e., the rows of X, have the form x_i = Σ^{1/2}u_i, i = 1, ..., n, where the u_i have iid entries. Then Σ is the "true" covariance matrix of the features, which is typically not observed. These types of models for the data are very common in random matrix theory; see the references mentioned above. Under these models, it is possible to characterize precisely the deviations between the empirical covariance matrix Σ̂ = n⁻¹XᵀX and the population covariance matrix Σ, dating back to the well-known classical Marchenko-Pastur law for eigenvalues (Marchenko & Pastur, 1967), extended to more general models and made more precise, including results for eigenvectors (see e.g. Tulino & Verdú, 2004; Couillet & Debbah, 2011; Yao et al., 2015, and references therein). This has been used to study methods for estimating the true covariance matrix, with several applications (e.g., Paul & Aue, 2014; Bun et al., 2017). More recently, such models have been used to study high-dimensional statistical learning problems, including classification and regression (e.g., Zollanvari & Genton, 2013; Dobriban & Wager, 2018). Our work falls in this line. We start by finding a precise representation of the ridge estimator. For random vectors u_n, v_n of growing dimension, we say u_n and v_n are deterministic equivalents if, for any sequence of fixed (or random and independent of u_n, v_n) vectors w_n such that lim sup ‖w_n‖₂ < ∞ almost surely, we have |w_nᵀ(u_n − v_n)| → 0 almost surely. We denote this by u_n ≍ v_n. Thus linear combinations of u_n are well approximated by those of v_n. This is a somewhat non-standard definition, but it turns out that it is precisely the one we need in order to use prior results from random matrix theory such as those of Rubio & Mestre (2011). We extend scalar functions f: R → R to matrices in the usual way by functional calculus, applying them to the eigenvalues and keeping the eigenvectors. If M = VΛVᵀ is a spectral decomposition of M, then we define f(M) := V f(Λ) Vᵀ, where f(Λ) is the diagonal matrix with entries f(Λ_ii). For a fixed design matrix X, we can write the estimator as

β̂ = (Σ̂ + λI_p)⁻¹Σ̂β + (Σ̂ + λI_p)⁻¹Xᵀε/n.

However, for a random design, we can find a representation that depends on the true covariance Σ, which may be simpler when Σ is simple, e.g., when Σ = I_p is isotropic.
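A small sketch of this functional calculus; `mat_fn` is a hypothetical helper, with the matrix square root as a sanity check.

import numpy as np

def mat_fn(M, f):
    # apply a scalar function to a symmetric matrix via M = V diag(lam) V^T
    lam, V = np.linalg.eigh(M)
    return (V * f(lam)) @ V.T

M = np.array([[2.0, 1.0], [1.0, 2.0]])
R = mat_fn(M, np.sqrt)
print(np.allclose(R @ R, M))  # True: R is the matrix square root of M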
Theorem 2.1 (Representation of ridge estimator). Suppose the data matrix has the form X = UΣ^{1/2}, where U ∈ R^{n×p} has iid entries of zero mean, unit variance, and finite 8+c-th moment for some c > 0, and Σ = Σ_p ∈ R^{p×p} is a deterministic positive definite matrix. Suppose that n, p → ∞ with p/n → γ > 0. Suppose the ESD of the sequence of Σs converges in distribution to a probability measure with compact support bounded away from the origin. Suppose that the noise is Gaussian, and that β = β_p is an arbitrary sequence of deterministic vectors such that lim sup ‖β‖₂ < ∞. Then the ridge regression estimator is asymptotically equivalent to a random vector with the following representation:

β̂(λ) ≍ A(Σ, λ)·β + B(Σ, λ)·σ·Z/p^{1/2}.

Here Z ∼ N(0, I_p) is a random vector that is stochastically dependent only on the noise ε, and A, B are deterministic matrices defined by applying the scalar functions below to Σ:

A(x, λ) = (c_p x + λ)⁻²(c_p + c_p′)x,  B(x, λ) = (c_p x + λ)⁻¹ c_p x.

Here c_p := c(n, p, Σ, λ) is the unique positive solution of the fixed-point equation

1 − c_p = (c_p/n) tr[Σ(c_pΣ + λI)⁻¹].  (2)

This result gives a precise representation of the ridge regression estimator. It is a sum of two terms: the true coefficient vector β scaled by the matrix A(Σ, λ), and the noise vector Z scaled by the matrix B(Σ, λ). The first term captures to what extent ridge regression recovers the "signal". Moreover, the noise term Z is directly coupled with the noise in the original regression problem, and thus also with the estimator. The result would not hold for an independent noise vector Z. However, the coefficients are not fully explicit, as they depend on the unknown population covariance matrix Σ, as well as on the fixed-point variable c_p. Some comments are in order: 1. Structure of the proof. The proof is quite non-elementary and relies on random matrix theory. Specifically, it uses the language of the recently developed "calculus of deterministic equivalents" (Dobriban & Sheng, 2018) and results by Rubio & Mestre (2011). A general takeaway is that for n not much larger than p, the empirical covariance matrix Σ̂ is not a good estimator of the true covariance matrix Σ. However, the deviation of linear functionals of Σ̂ can be quantified. In particular, we have (Σ̂ + λI)⁻¹ ≍ (c_pΣ + λI)⁻¹, in the sense that linear combinations of the entries of the two matrices are close (see the proof for more details). 2. Understanding the resolvent bias factor c_p. Thus, c_p can be viewed as a resolvent bias factor, which tells us by what factor Σ is multiplied when evaluating the resolvent (Σ̂ + λI)⁻¹ and comparing it to its naive counterpart (Σ + λI)⁻¹. It is known that c_p is well defined, and this follows by a simple monotonicity argument; see Hachem et al. (2007); Rubio & Mestre (2011). Specifically, the left-hand side of (2) is decreasing in c_p, while the right-hand side is increasing in c_p. Also, c_p′ is the derivative of c_p when viewing it as a function of z := −λ. An explicit expression is provided in the proof in Section A.1, but is not crucial right now. Here we discuss some implications of this representation. For uncorrelated features, Σ = I_p, A and B reduce to multiplication by scalars. Hence, each coordinate of the ridge regression estimator is simply a scalar multiple of the corresponding coordinate of β. One can use this to find the bias in each individual coordinate.
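The fixed point in (2) is easy to solve numerically. A sketch, for a given Σ, λ, and sample size; the rearrangement in the loop simply isolates c_p on the left-hand side of (2).

import numpy as np

def resolvent_bias(Sigma, lam, n, iters=200):
    p = Sigma.shape[0]
    c = 0.5  # any starting value in (0, 1)
    for _ in range(iters):
        tr = np.trace(Sigma @ np.linalg.inv(c * Sigma + lam * np.eye(p)))
        c = 1.0 / (1.0 + tr / n)  # Eq. (2) rearranged: c = 1 / (1 + tr(.)/n)
    return c

print(resolvent_bias(np.eye(100), lam=0.5, n=200))  # ~0.707 for this Sigma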
Training error and optimal regularization parameter. This theorem has implications for understanding the training error and the optimal regularization parameter of ridge regression. As it stands, the theorem itself only characterizes the behavior of linear combinations of the coordinates of the estimator. Thus, it can be directly applied to study the bias Eβ̂(λ) − β of the estimator. However, it cannot directly be used to study the variance, as that would require understanding quadratic functionals of the estimator. This seems to require significant advances in random matrix theory, going beyond the results of Rubio & Mestre (2011). However, we show below that with additional assumptions on the structure of the parameter β, we can derive the MSE of the estimator in other ways. We work in a random-effects model, where the p-dimensional regression parameter β is random, each coefficient has zero mean Eβ_i = 0, and is normalized so that Var[β_i] = α²/p. This ensures that the signal strength E‖β‖² = α² is fixed for any p. The asymptotically optimal λ in this setting is always λ* = γσ²/α²; see e.g. Tulino & Verdú (2004); Dicker (2016); Dobriban & Wager (2018). The ridge regression estimator with λ = pσ²/(nα²) is the posterior mean of β when β and ε are normal random variables. For a distribution F_γ, we define the quantities

θ_i(λ) = ∫ (x + λ)^{-i} dF_γ(x), (i = 1, 2, ...).

These are the moments of the resolvent and its derivatives (up to constants). We use the following loss functions: the mean squared estimation error, M(β̂) = E‖β̂ − β‖₂², and the residual or training error, R(β̂) = E‖Y − Xβ̂‖₂². Theorem 2.2 (MSE and training error of ridge). Suppose β has iid entries with Eβ_i = 0, Var[β_i] = α²/p, i = 1, ..., p, and β is independent of X and ε. Suppose X is an arbitrary n×p matrix depending on n and p, and the ESD of X converges weakly to a deterministic distribution F_γ as n, p → ∞ and p/n → γ. Then the asymptotic MSE and residual error of the ridge regression estimator β̂(λ) have the form

lim_{n→∞} M(β̂(λ)) = α²λ²θ₂ + γσ²[θ₁ − λθ₂],  (3)
lim_{n→∞} R(β̂(λ)) = α²λ²[θ₁ − λθ₂] + σ²[1 − γ(1 + λθ₁ − λ²θ₂)].  (4)

Bias-variance tradeoff. Building on this, we can also study the bias-variance tradeoff of ridge regression. Qualitatively, a large λ leads to more regularization and decreases the variance. However, it also increases the bias. Our theory allows us to find explicit formulas for the bias and variance as functions of λ. See Figure 1 for a plot and Sec. A.3 for the details. As far as we know, this is one of the few examples of high-dimensional asymptotic problems where the precise form of the bias and variance can be evaluated. Bias-variance tradeoff at the optimal λ* = γσ²/α² (see Figure 6). This can be viewed as the "pure" effect of dimensionality on the problem, keeping all other parameters fixed, and it has intriguing properties. The variance first increases, then decreases with γ. In the "classical" low-dimensional case, most of the risk is due to variance, while in the "modern" high-dimensional case, most of it is due to bias. This is consistent with other phenomena in proportional-limit asymptotics, e.g., that the map between population and sample eigenvalue distributions is asymptotically deterministic (Marchenko & Pastur, 1967).
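A numerical sketch of formula (3), plugging the empirical eigenvalues of XᵀX/n in for the limiting distribution F_γ; all constants are toy choices.

import numpy as np

rng = np.random.default_rng(0)
n, p, alpha, sigma, lam = 400, 200, 1.0, 1.0, 0.5
gamma = p / n
X = rng.normal(size=(n, p))
evals = np.linalg.eigvalsh(X.T @ X / n)  # empirical spectral distribution

theta1 = np.mean(1.0 / (evals + lam))    # resolvent moments theta_1, theta_2
theta2 = np.mean(1.0 / (evals + lam) ** 2)
mse = alpha**2 * lam**2 * theta2 + gamma * sigma**2 * (theta1 - lam * theta2)
print(mse)  # limiting M(beta_hat(lambda)) under these toy parameters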
Future applications. This fundamental representation may have applications to important statistical inference questions. For instance, inference on the regression coefficient β and the noise variance σ² is an important and challenging problem. Can we use our representation to develop debiasing techniques for this task? This will be interesting to explore in future work.
This paper deals with three theoretical properties of ridge regression. First, it proves that the ridge regression estimator is equivalent to a specific representation, which is useful since, for instance, it can be used to derive the training error of the ridge estimator. Second, it provides a bias-correction mechanism for ridge regression, and finally it provides proofs regarding the accuracy of several sketching algorithms for ridge regression.
SP:bbcb77fc764f7e90ef6126d97d8195734fcdafe8
Ridge Regression: Structure, Cross-Validation, and Sketching
We study the following three fundamental problems about ridge regression : ( 1 ) what is the structure of the estimator ? ( 2 ) how to correctly use cross-validation to choose the regularization parameter ? and ( 3 ) how to accelerate computation without losing too much accuracy ? We consider the three problems in a unified large-data linear model . We give a precise representation of ridge regression as a covariance matrix-dependent linear combination of the true parameter and the noise . We study the bias of K-fold cross-validation for choosing the regularization parameter , and propose a simple bias-correction . We analyze the accuracy of primal and dual sketching for ridge regression , showing they are surprisingly accurate . Our results are illustrated by simulations and by analyzing empirical data . 1 INTRODUCTION . Ridge or ` 2-regularized regression is a widely used method for prediction and estimation when the data dimension p is large compared to the number of datapoints n. This is especially so in problems with many good features , where sparsity assumptions may not be justified . A great deal is known about ridge regression . It is Bayes optimal for any quadratic loss in a Bayesian linear model where the parameters and noise are Gaussian . The asymptotic properties of ridge have been widely studied ( e.g. , Tulino & Verdú , 2004 ; Serdobolskii , 2007 ; Couillet & Debbah , 2011 ; Dicker , 2016 ; Dobriban & Wager , 2018 , etc ) . For choosing the regularization parameter in practice , cross-validation ( CV ) is widely used . In addition , there is an exact shortcut ( e.g. , Hastie et al. , 2009 , p. 243 ) , which has good consistency properties ( Hastie et al. , 2019 ) . There is also a lot of work on fast approximate algorithms for ridge , e.g. , using sketching methods ( e.g. , el Alaoui & Mahoney , 2015 ; Chen et al. , 2015 ; Wang et al. , 2018 ; Chowdhury et al. , 2018 , among others ) . Here we seek to develop a deeper understanding of ridge regression , going beyond existing work in multiple aspects . We work in linear models under a popular asymptotic regime where n , p→∞ at the same rate ( Marchenko & Pastur , 1967 ; Serdobolskii , 2007 ; Couillet & Debbah , 2011 ; Yao et al. , 2015 ) . In this framework , we develop a fundamental representation for ridge regression , which shows that it is well approximated by a linear scaling of the true parameters perturbed by noise . The scaling matrices are functions of the population-level covariance of the features . As a consequence , we derive formulas for the training error and bias-variance tradeoff of ridge . Second , we study commonly used methods for choosing the regularization parameter . Inspired by the observation that CV has a bias for estimating the error rate ( e.g. , Hastie et al. , 2009 , p. 243 ) , we study the bias of CV for selecting the regularization parameter . We discover a surprisingly simple form for the bias , and propose a downward scaling bias correction procedure . Third , we study the accuracy loss of a class of randomized sketching algorithms for ridge regression . These algorithms approximate the sample covariance matrix by sketching or random projection . We show they can be surprisingly accurate , e.g. , they can sometimes cut computational cost in half , only incurring 5 % extra error . Even more , they can sometimes improve the MSE if a suboptimal regularization parameter is originally used . Our work leverages recent results from asymptotic random matrix theory and free probability theory . 
One challenge in our analysis is to find the limit of the trace tr ( Σ1 + Σ−12 ) −1/p , where Σ1 and Σ2 are p × p independent sample covariance matrices of Gaussian random vectors . The calculation requires nontrivial aspects of freely additive convolutions ( e.g. , Voiculescu et al. , 1992 ; Nica & Speicher , 2006 ) . Our work is connected to prior works on ridge regression in high-dimensional statistics ( Serdobolskii , 2007 ) and wireless communications ( Tulino & Verdú , 2004 ; Couillet & Debbah , 2011 ) . Among other related works , El Karoui & Kösters ( 2011 ) discuss the implications of the geometric sensitivity of random matrix theory for ridge regression , without considering our problems . El Karoui ( 2018 ) and Dicker ( 2016 ) study ridge regression estimators , but focus only on the risk for identity covariance . Hastie et al . ( 2019 ) study “ ridgeless ” regression , where the regularization parameter tends to zero . Sketching is an increasingly popular research topic , see Vempala ( 2005 ) ; Halko et al . ( 2011 ) ; Mahoney ( 2011 ) ; Woodruff ( 2014 ) ; Drineas & Mahoney ( 2017 ) and references therein . For sketched ridge regression , Zhang et al . ( 2013a ; b ) study the dual problem in a complementary finite-sample setting , and their results are hard to compare . Chen et al . ( 2015 ) propose an algorithm combining sparse embedding and the subsampled randomized Hadamard transform ( SRHT ) , proving relative approximation bounds . Wang et al . ( 2017 ) study iterative sketching algorithms from an optimization point of view , for both the primal and the dual problems . Dobriban & Liu ( 2018 ) study sketching using asymptotic random matrix theory , but only for unregularized linear regression . Chowdhury et al . ( 2018 ) propose a data-dependent algorithm in light of the ridge leverage scores . Other related works include Sarlos ( 2006 ) ; Ailon & Chazelle ( 2006 ) ; Drineas et al . ( 2006 ; 2011 ) ; Dhillon et al . ( 2013 ) ; Ma et al . ( 2015 ) ; Raskutti & Mahoney ( 2016 ) ; Gonen et al . ( 2016 ) ; Thanei et al . ( 2017 ) ; Ahfock et al . ( 2017 ) ; Lopes et al . ( 2018 ) ; Huang ( 2018 ) . The structure of the paper is as follows : We state our results on representation , risk , and biasvariance tradeoff in Section 2 . We study the bias of cross-validation for choosing the regularization parameter in Section 3 . We study the accuracy of randomized primal and dual sketching for both orthogonal and Gaussian sketches in Section 4 . We provide proofs and additional simulations in the Appendix . Code reproducing the experiments in the paper are available at https : //github . com/liusf15/RidgeRegression . 2 RIDGE REGRESSION . We work in the usual linear regression model Y = Xβ + ε , where each row xi of X ∈ Rn×p is a datapoint in p dimensions , and so there are p features . The corresponding element yi of Y ∈ Rn is its continous response ( or outcome ) . We assume mean zero uncorrelated noise , so Eε = 0 , and Cov [ ε ] = σ2In . We estimate the coefficient β ∈ Rp by ridge regression , solving the optimization problem β̂ = arg min β∈Rp 1 n ‖Y −Xβ‖22 + λ‖β‖22 , where λ > 0 is a regularization parameter . The solution has the closed form β̂ = ( X > X/n+ λIp ) −1 X > Y/n . ( 1 ) We work in a ” big data ” asymptotic limit , where both the dimension p and the sample size n tend to infinity , and their aspect ratio converges to a constant , p/n → γ ∈ ( 0 , ∞ ) . Our results can be interpreted for any n and p , using γ = p/n as an approximation . 
We recall that the empirical spectral distribution ( ESD ) of a p×p symmetric matrix Σ is the distribution 1p ∑p i=1 δλi where λi , i = 1 , . . . , p are the eigenvalues of Σ , and δx is the point mass at x . This is a standard notion in random matrix theory , see e.g. , Marchenko & Pastur ( 1967 ) ; Tulino & Verdú ( 2004 ) ; Couillet & Debbah ( 2011 ) ; Yao et al . ( 2015 ) . The ESD is a convenient tool to summarize all information obtainable from the eigenvalues of a matrix . For instance , the trace of Σ is proportional to the mean of the distribution , while the condition number is related to the range of the support . As is common , we will work in models where there is a sequence of covariance matrices Σ = Σp , and their ESDs converges in distribution to a limiting probability distribution . The results become simpler , because they depend only on the limit . By extension , we say that the ESD of the n× p matrix X is the ESD of X > X/n . We will consider some very specific models for the data , assuming it is of the form X = UΣ1/2 , where U has iid entries of zero mean and unit variance . This means that the datapoints , i.e. , the rows of X , have the form xi = Σ1/2ui , i = 1 , . . . , p , where ui have iid entries . Then Σ is the ” true ” covariance matrix of the features , which is typically not observed . These types of models for the data are very common in random matrix theory , see the references mentioned above . Under these models , it is possible to characterize precisely the deviations between the empirical covariance matrix Σ̂ = n−1X > X and the population covariance matrix Σ , dating back to the well known classical Marchenko-Pastur law for eigenvectors ( Marchenko & Pastur , 1967 ) , extended to more general models and made more precise , including results for eigenvectors ( see e.g. , Tulino & Verdú , 2004 ; Couillet & Debbah , 2011 ; Yao et al. , 2015 , and references therein ) . This has been used to study methods for estimating the true covariance matrix , with several applications ( e.g. , Paul & Aue , 2014 ; Bun et al. , 2017 ) . More recently , such models have been used to study high dimensional statistical learning problems , including classification and regression ( e.g. , Zollanvari & Genton , 2013 ; Dobriban & Wager , 2018 ) . Our work falls in this line . We start by finding a precise representation of the ridge estimator . For random vectors un , vn of growing dimension , we say un and vn are deterministic equivalents , if for any sequence of fixed ( or random and independent of un , vn ) vectors wn such that lim sup ‖wn‖2 < ∞ almost surely , we have |w > n ( un − vn ) | → 0 almost surely . We denote this by un vn . Thus linear combinations of un are well approximated by those of vn . This is a somewhat non-standard definition , but it turns out that it is precisely the one we need to use prior results from random matrix theory such as from ( Rubio & Mestre , 2011 ) . We extend scalar functions f : R→ R to matrices in the usual way by functional calculus , applying them to the eigenvalues and keeping the eigenvectors . If M = V ΛV > is a spectral decomposition of M , then we define f ( M ) : = V f ( Λ ) V > , where f ( Λ ) is the diagonal matrix with entries f ( Λii ) . For a fixed design matrix X , we can write the estimator as β̂ = ( Σ̂ + λIp ) −1Σ̂β + ( Σ̂ + λIp ) −1X > ε n . However , for a random design , we can find a representation that depends on the true covariance Σ , which may be simpler when Σ is simple , e.g. , when Σ = Ip is isotropic . 
Theorem 2.1 (Representation of ridge estimator). Suppose the data matrix has the form X = UΣ^{1/2}, where U ∈ R^{n×p} has iid entries of zero mean, unit variance and finite (8+c)-th moment for some c > 0, and Σ = Σ_p ∈ R^{p×p} is a deterministic positive definite matrix. Suppose that n, p → ∞ with p/n → γ > 0. Suppose the ESD of the sequence of Σs converges in distribution to a probability measure with compact support bounded away from the origin. Suppose that the noise is Gaussian, and that β = β_p is an arbitrary sequence of deterministic vectors such that lim sup ‖β‖_2 < ∞. Then the ridge regression estimator is asymptotically equivalent to a random vector with the following representation: β̂(λ) ≍ A(Σ, λ)·β + B(Σ, λ)·σ·Z/p^{1/2}. Here Z ∼ N(0, I_p) is a random vector that is stochastically dependent only on the noise ε, and A, B are deterministic matrices defined by applying the scalar functions below to Σ: A(x, λ) = (c_p x + λ)^{-2}(c_p + c'_p)x, B(x, λ) = (c_p x + λ)^{-1}c_p x. Here c_p := c(n, p, Σ, λ) is the unique positive solution of the fixed-point equation 1 − c_p = (c_p/n) tr[Σ(c_p Σ + λI)^{-1}]. (2) This result gives a precise representation of the ridge regression estimator. It is a sum of two terms: the true coefficient vector β scaled by the matrix A(Σ, λ), and the noise vector Z scaled by the matrix B(Σ, λ). The first term captures to what extent ridge regression recovers the "signal". Moreover, the noise term Z is directly coupled with the noise in the original regression problem, and thus also with the estimator. The result would not hold for an independent noise vector Z. However, the coefficients are not fully explicit, as they depend on the unknown population covariance matrix Σ, as well as on the fixed-point variable c_p. Some comments are in order: 1. Structure of the proof. The proof is quite non-elementary and relies on random matrix theory. Specifically, it uses the language of the recently developed "calculus of deterministic equivalents" (Dobriban & Sheng, 2018), and results by Rubio & Mestre (2011). A general takeaway is that for n not much larger than p, the empirical covariance matrix Σ̂ is not a good estimator of the true covariance matrix Σ. However, the deviation of linear functionals of Σ̂ can be quantified. In particular, we have (Σ̂ + λI)^{-1} ≍ (c_p Σ + λI)^{-1}, in the sense that linear combinations of the entries of the two matrices are close (see the proof for more details). 2. Understanding the resolvent bias factor c_p. Thus, c_p can be viewed as a resolvent bias factor, which tells us by what factor Σ is multiplied when evaluating the resolvent (Σ̂ + λI)^{-1} and comparing it to its naive counterpart (Σ + λI)^{-1}. It is known that c_p is well defined, and this follows by a simple monotonicity argument; see Hachem et al. (2007); Rubio & Mestre (2011). Specifically, the left-hand side of (2) is decreasing in c_p, while the right-hand side is increasing in c_p. Also, c'_p is the derivative of c_p when viewing it as a function of z := −λ. An explicit expression is provided in the proof in Section A.1, but is not crucial right now. Here we discuss some implications of this representation. For uncorrelated features, Σ = I_p, A and B reduce to multiplication by scalars. Hence, each coordinate of the ridge regression estimator is simply a scalar multiple of the corresponding coordinate of β. One can use this to find the bias in each individual coordinate.
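The fixed-point variable c_p in (2) is easy to compute numerically. A minimal sketch using bisection follows; the function f below is strictly decreasing with f(0) = 1 > 0 and f(1) < 0, so the root is bracketed in (0, 1). The eigenvalue spectrum and the tolerance are our own demo choices:

import numpy as np

def resolvent_bias_cp(eigs, n, lam, iters=200):
    """Bisection for the unique root of 1 - c = (c/n) * sum_i e_i/(c*e_i + lam)."""
    f = lambda c: 1.0 - c - (c / n) * np.sum(eigs / (c * eigs + lam))
    lo, hi = 0.0, 1.0                            # f(0) = 1 > 0, f(1) < 0, f decreasing
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

eigs = np.linspace(1.0, 2.0, 200)                # assumed eigenvalues of Sigma
print(resolvent_bias_cp(eigs, n=400, lam=0.5))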
Training error and optimal regularization parameter. This theorem has implications for understanding the training error and the optimal regularization parameter of ridge regression. As it stands, the theorem itself only characterizes the behavior of linear combinations of the coordinates of the estimator. Thus, it can be directly applied to study the bias Eβ̂(λ) − β of the estimator. However, it cannot directly be used to study the variance, as that would require understanding quadratic functionals of the estimator. This seems to require significant advances in random matrix theory, going beyond the results of Rubio & Mestre (2011). However, we show below that with additional assumptions on the structure of the parameter β, we can derive the MSE of the estimator in other ways. We work in a random-effects model, where the p-dimensional regression parameter β is random, each coefficient has zero mean Eβ_i = 0, and is normalized so that Var[β_i] = α^2/p. This ensures that the signal strength E‖β‖^2 = α^2 is fixed for any p. The asymptotically optimal λ in this setting is always λ* = γσ^2/α^2; see e.g., Tulino & Verdú (2004); Dicker (2016); Dobriban & Wager (2018). The ridge regression estimator with λ = pσ^2/(nα^2) is the posterior mean of β when β and ε are normal random variables. For a limiting distribution F_γ, we define the quantities θ_i(λ) = ∫ (x + λ)^{-i} dF_γ(x), (i = 1, 2, ...). These are the moments of the resolvent and its derivatives (up to constants). We use the following loss functions: the mean squared estimation error M(β̂) = E‖β̂ − β‖_2^2, and the residual or training error R(β̂) = E‖Y − Xβ̂‖_2^2. Theorem 2.2 (MSE and training error of ridge). Suppose β has iid entries with Eβ_i = 0, Var[β_i] = α^2/p, i = 1, ..., p, and β is independent of X and ε. Suppose X is an arbitrary n×p matrix depending on n and p, and the ESD of X converges weakly to a deterministic distribution F_γ as n, p → ∞ and p/n → γ. Then the asymptotic MSE and residual error of the ridge regression estimator β̂(λ) have the form lim_{n→∞} M(β̂(λ)) = α^2 λ^2 θ_2 + γσ^2[θ_1 − λθ_2], (3) lim_{n→∞} R(β̂(λ)) = α^2 λ^2[θ_1 − λθ_2] + σ^2[1 − γ(1 + λθ_1 − λ^2 θ_2)]. (4) Bias-variance tradeoff. Building on this, we can also study the bias-variance tradeoff of ridge regression. Qualitatively, a large λ leads to more regularization and decreases the variance. However, it also increases the bias. Our theory allows us to find explicit formulas for the bias and variance as functions of λ. See Figure 1 for a plot and Sec. A.3 for the details. As far as we know, this is one of the few examples of high-dimensional asymptotic problems where the precise form of the bias and variance can be evaluated. Bias-variance tradeoff at the optimal λ* = γσ^2/α^2 (see Figure 6). This can be viewed as the "pure" effect of dimensionality on the problem, keeping all other parameters fixed, and it has intriguing properties. The variance first increases, then decreases with γ. In the "classical" low-dimensional case, most of the risk is due to variance, while in the "modern" high-dimensional case, most of it is due to bias. This is consistent with other phenomena in proportional-limit asymptotics, e.g., that the map between population and sample eigenvalue distributions is asymptotically deterministic (Marchenko & Pastur, 1967). Future applications. This fundamental representation may have applications to important statistical inference questions.
For instance, inference on the regression coefficient β and on the noise variance σ^2 are important and challenging problems. Can we use our representation to develop debiasing techniques for these tasks? This will be interesting to explore in future work.
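Given a spectral distribution, formulas (3)-(4) above are straightforward to evaluate. Here is a hedged sketch that approximates θ_1 and θ_2 by averaging over the sample eigenvalues of X^T X/n; the signal strength α^2, the noise σ^2, and the sizes are assumed demo values (a sanity check: at λ = 0 the residual formula reduces to σ^2(1 − γ), the classical OLS training error):

import numpy as np

def ridge_risks(evals, gamma, lam, alpha2, sigma2):
    """Evaluate the limits (3) and (4), using empirical eigenvalues for F_gamma."""
    theta1 = np.mean(1.0 / (evals + lam))
    theta2 = np.mean(1.0 / (evals + lam) ** 2)
    mse = alpha2 * lam**2 * theta2 + gamma * sigma2 * (theta1 - lam * theta2)
    resid = (alpha2 * lam**2 * (theta1 - lam * theta2)
             + sigma2 * (1.0 - gamma * (1.0 + lam * theta1 - lam**2 * theta2)))
    return mse, resid

rng = np.random.default_rng(3)
n, p, alpha2, sigma2 = 400, 200, 1.0, 1.0        # assumed demo parameters
gamma = p / n
X = rng.standard_normal((n, p))
evals = np.linalg.eigvalsh(X.T @ X / n)          # the ESD of X
lam_star = gamma * sigma2 / alpha2               # asymptotically optimal lambda
print(ridge_risks(evals, gamma, lam_star, alpha2, sigma2))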
This paper presents a theoretical study of ridge regression, focusing on the practical problems of correcting for the bias of the cross-validation based estimate of the optimal regularisation parameter, and quantification of the asymptotic risk of sketching algorithms for ridge regression, both in the p/n → γ ∈ (0, ∞) regime (n = # data points, p = # dimensions). The authors derive most of their results exploiting their (AFAICT) new asymptotic characterisation of the ridge regression estimator, which may be of independent interest. The whole study is complemented by a series of numerical experiments.
SP:bbcb77fc764f7e90ef6126d97d8195734fcdafe8
Incorporating Horizontal Connections in Convolution by Spatial Shuffling
INCORPORATING HORIZONTAL CONNECTIONS IN CONVOLUTION BY SPATIAL SHUFFLING Anonymous authors Paper under double-blind review Convolutional Neural Networks (CNNs) are composed of multiple convolution layers and show elegant performance in vision tasks. The design of the regular convolution is based on the Receptive Field (RF), within which the information of a specific region is processed. In the view of the regular convolution's RF, the outputs of neurons in lower layers with smaller RFs are bundled to create neurons in higher layers with larger RFs. As a result, the neurons in high layers are able to capture the global context even though the neurons in low layers only see local information. However, in lower layers of the biological brain, information outside of the RF changes the properties of neurons. In this work, we extend the regular convolution and propose spatially shuffled convolution (ss convolution). In ss convolution, the regular convolution is able to use the information outside of its RF via spatial shuffling, which is a simple and lightweight operation. We perform experiments on the CIFAR-10 and ImageNet-1k datasets, and show that ss convolution improves the classification performance across various CNNs. 1 INTRODUCTION. Convolutional Neural Networks (CNNs) and their convolution layers (Fukushima, 1980; Lecun et al., 1998) are inspired by findings in cat visual cortex (Hubel & Wiesel, 1959), and they show strong performance in various domains such as image recognition (Krizhevsky et al., 2012; Simonyan & Zisserman, 2015; He et al., 2016), natural language processing (Gehring et al., 2017), and speech recognition (Abdel-Hamid et al., 2014; Zhang et al., 2016). A notable characteristic of the convolution layer is the Receptive Field (RF), the particular input region that affects a convolutional output. The units (or neurons) in higher layers have larger RFs by bundling the outputs of the units in lower layers with smaller RFs. Thanks to the hierarchical architectures of CNNs, the units in high layers are able to capture the global context even though the units in low layers only see local information. It is known that neurons in the primary visual cortex (i.e., V1, which corresponds to low layers) change their own properties (e.g., the RF size (Pettet & Gilbert, 1992) and the facilitation effect (Nelson & Frost, 1985)) based on information outside of the RF (D. Gilbert, 1992). The mechanism is believed to originate from (1) feedback from higher-order areas (Iacaruso et al., 2017) and (2) intracortical horizontal connections (D. Gilbert, 1992). Feedback from higher-order areas conveys broader contextual information than is available to the neurons in V1, which allows the neurons in V1 to use the global context. For instance, Gilbert & Li (2013) argued that the feedback connections work as attention. Horizontal connections allow distant neurons in the same layer to communicate with each other and are believed to play an important role in visual contour integration (Li & Gilbert, 2002) and object grouping (Schmidt et al., 2006). Though both horizontal and feedback connections are believed to be important for visual processing in the visual cortex, the regular convolution ignores the properties of these connections. In this work, we particularly focus on algorithms that introduce the function of horizontal connections into the regular convolution in CNNs.
We propose spatially shuffled convolution (ss convolution), in which the information outside of the regular convolution's RF is incorporated by spatial shuffling, a simple and lightweight operation. Our ss convolution is the same operation as the regular convolution except for the spatial shuffling, and it requires no extra learnable parameters. The design of ss convolution is highly inspired by the function of horizontal connections. To test the effectiveness of the information outside of the regular convolution's RF in CNNs, we perform experiments on CIFAR-10 (Krizhevsky, 2009) and the ImageNet 2012 dataset (Russakovsky et al., 2015) and show that ss convolution improves the classification performance across various CNNs. These results indicate that the information outside of the RF is useful when processing local information. In addition, we conduct several analyses to examine why ss convolution improves the classification performance in CNNs, and show that spatial shuffling allows the regular convolution to use the information outside of its RF. 2 RELATED WORK. 2.1 VARIANTS OF CONVOLUTION LAYERS AND NEURAL MODULES. There are two types of approaches to improve the Receptive Field (RF) of CNNs with the regular convolution: broadening the kernel of the convolution layer, and modulating activation values by self-attention. Broadening Kernel: The atrous convolution (Holschneider et al., 1989; Yu & Koltun, 2016) is a convolution with a strided kernel. The stride is not learnable and is given in advance. The atrous convolution can have a larger RF than the regular convolution with the same computational complexity and the same number of learnable parameters. The deformable convolution (Dai et al., 2017) is an atrous convolution with a learnable kernel stride that depends on inputs and spatial locations. The stride of the deformable convolution changes flexibly, unlike that of the atrous convolution; however, the deformable convolution requires extra computation to calculate strides. Both atrous and deformable convolution contribute to broadening the RF; however, it is not plausible to use pixel information at a distant location when processing local information. Let us consider the case where the information p pixels away is useful for processing local information at layer l. In the simple case, it is known that the size of the RF grows with k√n, where k is the size of the convolution kernel and n is the number of layers (Luo et al., 2016). In this case, the size of the kernel needs to be p/√n, and k is around 45 when p = 100 and l = 5. If the kernel size is 3 × 3, then the stride needs to be 21 across layers. Such a large stride causes both the atrous and the deformable convolution to have a sparse kernel, which is not suitable for processing local information. Self-Attention: The Squeeze and Excitation module (SE module) (Hu et al., 2018) is proposed to modulate the activation values by using the global context, which is obtained by Global Average Pooling (GAP) (Lin et al., 2014). The SE module allows CNNs with the regular convolution to use the information outside of its RF, as our ss convolution does. In our experiments, ss convolution gives marginal improvements on SEResNet50 (Hu et al., 2018), which is ResNet50 (He et al., 2016) with the SE module.
This result makes us wonder why ss convolution improves the performance of SEResNet50; thus we conduct analyses and find that the RF of SEResNet50 is location-independent, while the RF of ResNet with ss convolution is location-dependent. This result is reasonable, since the spatial information of activation values is not conserved by GAP in the SE module. We conclude that such a difference may be the reason why ss convolution improves the classification performance on SEResNet50. Attention Branch Networks (ABN) (Fukui et al., 2019) are proposed for top-down visual explanation by using an attention mechanism. ABN uses the output of a side branch to modulate the activation values of the main branch. The outputs of the side branch have a larger RF than those of the main branch; thus the main branch is able to modulate the activation values based on the information outside of the main branch's RF. In our experiments, ss convolution improves the performance of ABN, and we assume that this is because ABN works like feedback from higher-order areas, unlike ss convolution, which is inspired by the function of horizontal connections. 2.2 UTILIZATION OF SHUFFLING IN CNNS. ShuffleNet (Zhang et al., 2017) is designed as a computation-efficient CNN architecture, and the group convolution (Krizhevsky et al., 2012) is heavily used. It shuffles channels to create cross-group information flow across multiple group convolution layers. The motivation for using shuffling differs between ShuffleNet and our ss convolution. On the one hand, our ss convolution uses spatial shuffling to use information from outside of the regular convolution's RF. On the other hand, the channel shuffling in ShuffleNet does not broaden the RF and does not contribute to using the information outside of the RF. 3 METHOD. In this section, we introduce spatially shuffled convolution (ss convolution). 3.1 SPATIALLY SHUFFLED CONVOLUTION. Horizontal connections are a mechanism to use information outside of the RF. We propose ss convolution to incorporate this mechanism into the regular convolution; it consists of two components: spatial shuffling and regular convolution. The shuffling is based on a permutation matrix that is generated at initialization. The permutation matrix is fixed during training and testing. Our ss convolution is defined as follows: y_{i,j} = Σ_{c=1}^{C} Σ_{(Δi,Δj)∈R} w_{c,Δi,Δj} · P(x_{c,i+Δi,j+Δj}), (1) with P(x_{c,i,j}) = π(x_{c,i,j}) if c ≤ ⌊αC⌋, and P(x_{c,i,j}) = x_{c,i,j} otherwise. (2) R represents the offset coordinates of the kernel. For example, in the case of the 3 × 3 kernel, R = {(−1,−1), (−1,0), (−1,1), (0,−1), (0,0), (0,1), (1,−1), (1,0), (1,1)}. x ∈ R^{C×I×J} is the input and w ∈ R^{C_w×I_w×J_w} is the kernel weight of the regular convolution. In Eqn. (2), the input x is shuffled by P and then the regular convolution is applied. Fig. 1-(a) is the visualization of Eqn. (2). α ∈ [0, 1] is the hyper-parameter that controls how many channels are shuffled. If ⌊αC⌋ = 0, then ss convolution is the same as the regular convolution. At initialization, we randomly generate the permutation matrix π ∈ {0,1}^{m×m}, where Σ_{i=1}^{m} π_{i,j} = 1, Σ_{j=1}^{m} π_{i,j} = 1 and m = I · J · ⌊αC⌋ (see footnote 1). The π generated at initialization is fixed for training and testing. The results on CIFAR-10 across various α are shown in Fig. 2. The biggest improvement in classification performance is obtained when α is around 0.06.
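A minimal PyTorch sketch of Eqns. (1)-(2), under our own simplifying assumptions: a single fixed random permutation over the m = I·J·⌊αC⌋ spatial positions of the first ⌊αC⌋ channels, drawn once at initialization; the paper's exact shuffling granularity may differ:

import torch
import torch.nn as nn

class SSConv2d(nn.Module):
    """Regular conv preceded by a fixed spatial shuffle of the first floor(alpha*C) channels."""
    def __init__(self, in_ch, out_ch, kernel_size, H, W, alpha=0.06):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=kernel_size // 2)
        self.k = int(alpha * in_ch)                     # number of shuffled channels
        # Fixed permutation over the k*H*W shuffled positions, drawn once at init.
        self.register_buffer("perm", torch.randperm(self.k * H * W))

    def forward(self, x):                               # x: (B, C, H, W)
        if self.k > 0:
            B, C, H, W = x.shape
            head = x[:, :self.k].reshape(B, -1)[:, self.perm].reshape(B, self.k, H, W)
            x = torch.cat([head, x[:, self.k:]], dim=1)
        return self.conv(x)

y = SSConv2d(32, 64, 3, H=16, W=16, alpha=0.06)(torch.randn(2, 32, 16, 16))
print(y.shape)   # torch.Size([2, 64, 16, 16])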
3.2 SPATIALLY SHUFFLED GROUP CONVOLUTION. The group convolution (Krizhevsky et al., 2012) is a variant of the regular convolution. We find that the shuffling operation of Eqn. (2) is not suitable for the group convolution. ResNeXt (Xie et al., 2017) is a CNN that heavily uses group convolutions, and Table 1 shows the test error of ResNeXt on CIFAR-10 (Krizhevsky, 2009). As can be seen in Table 1, the improvement in classification performance is marginal with Eqn. (2). Thus, we propose the spatial shuffling for the group convolution as follows: P(x_{c,i,j}) = π(x_{c,i,j}) if c ≡ 0 mod ⌊1/α⌋, and P(x_{c,i,j}) = x_{c,i,j} otherwise. (3) Eqn. (3) represents that the shuffled parts are interleaved, as illustrated in Fig. 1-(b). As can be seen in Table 1, ss convolution with Eqn. (3) improves the classification performance of ResNeXt. (Footnote 1: We implement Eqn. (2) by indexing; thus we hold m long ints instead of an m × m binary matrix. The implementation of ss convolution is shown in Appendix A.2.)
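To see which channels each rule shuffles, a quick illustration (the channel count and α are arbitrary demo values): Eqn. (2) shuffles one contiguous block of channels, which may fall entirely inside a single group, while Eqn. (3) interleaves the shuffled channels evenly across groups.

C, alpha = 16, 0.25                                # assumed demo values
shuffled_eqn2 = [c for c in range(C) if c < int(alpha * C)]       # Eqn. (2): first floor(alpha*C) channels
shuffled_eqn3 = [c for c in range(C) if c % int(1 / alpha) == 0]  # Eqn. (3): every floor(1/alpha)-th channel
print(shuffled_eqn2)  # [0, 1, 2, 3]  -> one contiguous block
print(shuffled_eqn3)  # [0, 4, 8, 12] -> interleaved across the channel axis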
The authors extended the regular convolution and proposed spatially shuffled convolution to use information outside of its RF, inspired by the idea that horizontal connections are believed to be important for visual processing in the visual cortex of the biological brain. The authors proposed ss convolution for both the regular convolution and the group convolution. The authors tested the proposed ss convolution on multiple CNN models and showed improved results. Finally, a detailed analysis of spatial shuffling and an ablation study were conducted.
SP:d5ccf8fdd029c2a99dac0441385f280ed3fc03fb
Incorporating Horizontal Connections in Convolution by Spatial Shuffling
In this paper, the authors proposed a shuffle strategy for convolution layers in convolutional neural networks (CNNs). Specifically, the authors argued that the receptive field (RF) of each convolutional filter should not be constrained to a small local patch; instead, it should also cover other locations beyond the local patch and beyond the single channel. Based on this motivation, the authors proposed a spatial shuffling layer aimed at shuffling the original feature responses. In the experimental results, the authors evaluated the proposed ss convolutional layer on CIFAR-10 and ImageNet-1k and compared it with various baseline architectures. Besides, the authors further conducted ablation analyses and visualizations for the proposed ss convolutional layer.
SP:d5ccf8fdd029c2a99dac0441385f280ed3fc03fb
Guiding Program Synthesis by Learning to Generate Examples
1 INTRODUCTION. Over the years, program synthesis has been applied to a wide variety of tasks including string, number or date transformations (Gulwani, 2011; Singh & Gulwani, 2012; 2016; Ellis et al., 2019; Menon et al., 2013; Ellis & Gulwani, 2017), layout and graphic program generation (Bielik et al., 2018; Hempel & Chugh, 2016; Ellis et al., 2019; 2018), data extraction (Barowy et al., 2014; Le & Gulwani, 2014; Iyer et al., 2019), superoptimization (Phothilimthana et al., 2016; Schkufza et al., 2016), code repair (Singh et al., 2013; Nguyen et al., 2013; D'Antoni et al., 2016), language modelling (Bielik et al., 2017), synthesis of data processing programs (Polosukhin & Skidanov, 2018; Nye et al., 2019) and semantic parsing (Shin et al., 2019a). To capture user intent in an easy and intuitive way, many program synthesizers let their users provide a set of input-output examples I which the synthesized program should satisfy. Generalization challenge: A natural expectation of the end user in this setting is that the synthesized program works well even when I is severely limited (e.g., to one or two examples). Because of this small number of examples and the large search space of possible programs, there are often millions of programs consistent with I. However, only a small number of them generalize well to unseen examples, which makes the synthesis problem difficult. Existing methods: Several approaches have provided ways to address the above challenge, including using an external model that learns to rank candidate programs returned by the synthesizer, modifying the search procedure by learning to guide the synthesizer such that it returns more likely programs directly, or neural program induction methods that replace the synthesizer with a neural network that generates outputs directly using a latent program representation. However, regardless of what other features these approaches use, such as conditioning on program traces (Shin et al., 2018; Ellis & Gulwani, 2017; Chen et al., 2019) or pre-training on the input data (Singh, 2016), they are limited by the fact that their models are conditioned on the initial, limited user specification. This work: We present a new approach for program synthesis from examples which addresses the above challenge. The key idea is to resolve ambiguity by iteratively strengthening the initial specification I with new examples. To achieve this, we start by using an existing synthesizer to find a candidate program p1 that satisfies all examples in I. Instead of returning p1, we use it to find a distinguishing input x* that leads to ambiguities, i.e., other programs pi that satisfy I but produce different outputs p1(x*) ≠ pi(x*). To resolve this ambiguity, we first generate a set of candidate outputs for x*, then use a neural model (which we train beforehand) that acts as an oracle and selects the most likely output, and finally add x* and its predicted output to the input specification I. The whole process is then repeated. These steps are similar to those used in Oracle Guided Inductive Synthesis (Jha et al., 2010), with two main differences: (i) we automate the entire process by learning the oracle from data instead of using a human oracle, and (ii) as we do not use a human oracle to produce a correct output, we need to ensure that the set of candidate outputs contains the correct one.
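The refinement loop just described can be condensed into a few lines of Python-style pseudocode; note that synthesize, find_distinguishing_input, candidate_outputs, and oracle_score are hypothetical stand-ins for the components described above, not actual APIs:

def synthesize_with_oracle(I, max_rounds=5):
    """Iteratively strengthen the specification I with oracle-labeled examples."""
    for _ in range(max_rounds):
        programs = synthesize(I)                       # candidates consistent with I
        x_star = find_distinguishing_input(programs)   # input on which candidates disagree
        if x_star is None:                             # no ambiguity left
            break
        outputs = candidate_outputs(programs, x_star)  # must contain the correct output
        y_star = max(outputs, key=lambda y: oracle_score(x_star, y, I))
        I = I + [(x_star, y_star)]                     # strengthen the specification
    return synthesize(I)[0]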
Augmenting an existing Android layout synthesizer: In this work we apply our approach to a state-of-the-art synthesizer, called InferUI (Bielik et al., 2018), that creates an Android layout program which represents the implementation of a user interface. Given an application design consisting of a set of views (e.g., buttons, images, text fields, etc.) and their locations on the device screen, InferUI synthesizes a layout program that, when rendered, places the views at those same locations. Concretely, each input-output example (x, y) consists of a device screen x ∈ R^4 and a set of n views y ∈ R^{n×4}, all of which are represented using their coordinates in a two-dimensional euclidean space. As an example, the input specification I shown in Figure 1 contains a single example with absolute view positions for a Nexus 4 device, and the InferUI synthesizer easily finds multiple programs that satisfy it (dashed box). To apply our method and resolve the ambiguity, we find a distinguishing input x*, in this case a narrower P4 Pro device, on which some of the candidate programs produce different outputs. Then, instead of asking the user to manually produce the correct output, we generate additional candidate outputs (to ensure that the correct one is included) and our learned neural oracle automatically selects one of these outputs (the one it believes is correct) and adds it to the input specification I. In this case, the oracle selects output p2(x*), as both buttons are correctly resized and the distance between them is reduced to match the smaller device width. In contrast, p1(x*) contains overlapping buttons, while in pn(x*), only the left button was resized. Automatically obtaining real-world datasets: An important advantage of our approach is that we reduce the problem of selecting which program generalizes well to the simpler task of deciding which output is correct. This is especially useful for domains, such as Android layout synthesis, in which output correctness depends mostly on properties of the output and not on the program used to generate it. As a result, obtaining a suitable training dataset can be easier, as we do not require the hard-to-obtain ground-truth programs, for which currently no large real-world datasets exist (Shin et al., 2019b). In fact, it is possible to train the oracle using unsupervised learning only, with a dataset DU consisting of correct input-output examples. For example, in layout synthesis an autoencoder can be trained over a large number of views and their positions extracted from running real-world applications. However, instead of training such an unsupervised model, in our work we use DU to automatically construct a supervised dataset DS by labelling the samples in DU as positive and generating a set of negative samples by adding suitable noise to the samples in DU. Finally, we also obtain the dataset DS+ that additionally includes the input specification I. In the domain of Android layouts, although it is more difficult, such a dataset can also be collected automatically by running the same application on devices with different screen sizes. Our contributions: We present a new approach to address the ambiguity in the existing Android layout program synthesizer InferUI by iteratively extending the user-provided specification with new input-output examples.
The key component of our method is a learned neural oracle used to generate new examples, trained with datasets that do not require human annotations or ground-truth programs. To improve generalization, InferUI already contains a probabilistic model that scores programs q(p | I) as well as handcrafted robustness properties, achieving 35% generalization accuracy on a dataset of Google Play Store applications. In contrast, our method significantly improves the accuracy to 71% while using a dataset containing only correct and incorrect program outputs. We make our implementation and datasets available online at: https://github.com/eth-sri/guiding-synthesizers 2 RELATED WORK. In this section we discuss the work most closely related to ours. Guiding program synthesis: To improve the scalability and generalization of program synthesizers, several techniques have been proposed that guide the synthesizer towards good programs. The most widely used approach is to implement a statistical search procedure which explores candidate programs based on some type of learned probabilistic model – log-linear models (Menon et al., 2013; Long & Rinard, 2016), a hierarchical Bayesian prior (Liang et al., 2010), a probabilistic higher-order grammar (Lee et al., 2018) or a neural network (Balog et al., 2017; Sun et al., 2018). Kalyan et al. (2018) also take advantage of probabilistic models, but instead of implementing a custom search procedure, they use the learned model to guide an existing symbolic search engine. In addition to approaches that search for a good program directly (conditioned on the input specification), a number of works guide the search by first selecting a high-level sketch of the program and then filling in the holes using symbolic (Ellis et al., 2018; Murali et al., 2017; Nye et al., 2019), enumerative or neural search (Bosnjak et al., 2017; Gaunt et al., 2016). A similar idea is also used by Shin et al. (2018), but instead of generating a program sketch, the authors first infer execution traces (or condition on partial traces obtained as the program is being generated (Chen et al., 2019)), which are then used to guide the synthesis of the actual program. In comparison to prior work, a key aspect of our approach is to guide the synthesis by generating additional input-output examples that resolve the ambiguities in the input specification. Guiding the synthesizer in this way has several advantages – (i) it is interpretable, and the user can inspect the generated examples, (ii) it can be used to extend any existing synthesizer by introducing a refinement loop around it, (iii) the learned model is independent of the actual synthesizer (and its domain-specific language) and instead focuses only on learning the relation between likely and unlikely input-output examples, and (iv) it is often easier to obtain a dataset containing program outputs than a dataset consisting of the actual programs. Further, our approach is complementary to prior works, as it treats the synthesizer as a black box that can generate candidate programs. We also note that several prior works explore the design of sophisticated neural architectures that encode input-output examples (Sun et al., 2018; Devlin et al., 2017; Parisotto et al., 2017), and incorporating some of their ideas might lead to further improvements to our models presented in Section 4.
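Returning to the dataset construction described earlier (labelling the correct layouts in DU as positive and corrupting them to obtain negatives), here is a hedged sketch; the noise model below, jittering view coordinates by a few pixels, is our own illustrative choice and not necessarily the paper's exact perturbation scheme:

import random

def build_DS(D_U, noise=20):
    """Construct the supervised dataset DS from correct layouts DU:
    real layouts become positives; coordinate-jittered copies become negatives."""
    D_S = []
    for views in D_U:                                  # views: list of (left, top, right, bottom)
        D_S.append((views, 1))                         # real layout -> positive
        corrupted = [tuple(v + random.randint(-noise, noise) for v in view)
                     for view in views]
        D_S.append((corrupted, 0))                     # perturbed layout -> negative
    return D_S

print(build_DS([[(0, 0, 100, 40), (120, 0, 220, 40)]]))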
Learning to rank: To choose among all programs that satisfy the input specification, existing program synthesizers select the syntactically shortest program (Liang et al., 2010; Polozov & Gulwani, 2015; Raychev et al., 2016), the semantically closest program to a reference program (D'Antoni et al., 2016), or a program based on a learned scoring function (Liang et al., 2010; Mandelin et al., 2005; Singh & Gulwani, 2015; Ellis & Gulwani, 2017; Singh, 2016). Although the scoring function usually extracts features only from the synthesized program, some approaches also take advantage of additional information – Ellis & Gulwani (2017) train a log-linear model using a set of handcrafted features defined over program traces and program outputs, while Singh (2016) leverages unlabelled data by learning common substring expressions shared across the input data. Similar to prior work, our work explores various representations over which the model is learned. Because we apply our work to a domain where outputs can be represented as images (rather than strings or numbers), to achieve good performance we explore different types of models (i.e., convolutional neural networks). Further, we do not assume that the synthesizer can efficiently enumerate all programs that satisfy the input specification, as in Ellis & Gulwani (2017); Singh (2016). For such synthesizers, applying a ranking to the returned candidates will often fail, since the correct program is simply not included in the set of synthesized programs. Therefore, the neural oracle is defined over program outputs instead of actual programs. This reduces the search space for the synthesizer as well as the complexity of the machine learning models. Neural program induction: Devlin et al. (2017) and Parisotto et al. (2017), as well as a related line of work on neural machines (Graves et al., 2016; Reed & de Freitas, 2016; Bošnjak et al., 2017; Chen et al., 2018), explore the design of end-to-end neural approaches that generate the program output for a new input without the need for an explicit search. In this case the goal of the neural network is not to find the correct program explicitly, but rather to generate the most likely output for a given input based on the input specification. These approaches can be integrated in our work as one technique for generating a set of candidate outputs for a given distinguishing input, instead of obtaining them using a symbolic synthesizer. However, the model requirements in our work are much weaker – it is enough if the correct output is among the top n most likely candidates, rather than requiring 100% precision for all possible inputs as in program induction.
A method for a refinement loop for program synthesizers operating on input/output specifications is presented. The core idea is to generate several candidate solutions, execute them on several inputs, and then use a learned component to judge which of the resulting input/output pairs are most likely to be correct. This avoids having to judge the correctness of the generated programs and instead focuses on the easier task of judging the correctness of outputs. An implementation of the idea in a tool for synthesizing programs generating UIs is evaluated, showing impressive improvements over the baseline.
SP:aec7ce88f21b38c205522c88b3a3253e24754182
Guiding Program Synthesis by Learning to Generate Examples
This paper handles the challenge of generating generalizable programs from input-output specifications when the specification can be quite limited in size and therefore ambiguous. When proposed candidate programs lead to divergent outputs on a new input, the paper proposes to use a learned neural oracle that can evaluate which of the outputs are most likely. The paper applies the technique to the task of synthesizing Android UI layout code from labels of components and their positions.
SP:aec7ce88f21b38c205522c88b3a3253e24754182
Meta-Learning by Hallucinating Useful Examples
1 INTRODUCTION. Modern deep learning models rely heavily on large amounts of annotated examples (Deng et al., 2009). Their data-hungry nature limits their applicability to real-world scenarios, where the cost of annotating examples is prohibitive or where rare concepts are involved (Zhu et al., 2014; Fink, 2011). In contrast, humans can grasp a new concept rapidly and make meaningful generalizations, even from a single example (Schmidt, 2009). To bridge this gap, there has been a recent resurgence of interest in few-shot learning, which aims to learn novel concepts from very few labeled examples (Fei-Fei et al., 2006; Vinyals et al., 2016; Wang & Hebert, 2016; Snell et al., 2017; Finn et al., 2017). Existing work tries to solve this problem from the perspective of meta-learning (Thrun, 1998; Schmidhuber, 1987), which is motivated by the human ability to leverage prior experiences when tackling a new task. Unlike the standard machine learning paradigm, where a model is trained on a set of exemplars, meta-learning is performed on a set of tasks, each consisting of its own training and test sets (Vinyals et al., 2016). By sampling small training and test sets from a large collection of labeled examples of base classes, meta-learning based few-shot classification approaches learn to extract task-agnostic knowledge and apply it to a new few-shot learning task of novel classes. One notable type of task-agnostic (or meta) knowledge comes from the shared mechanism of data augmentation or hallucination across categories (Wang et al., 2018; Gao et al., 2018; Schwartz et al., 2018; Zhang et al., 2018a). Hallucinating additional training data by generating images may seem like an easy solution for few-shot learning, but it is often challenging. In fact, the success of this paradigm is usually restricted to certain domains like handwritten characters (Lake et al., 2013), or requires additional supervision (Dixit et al., 2017; Zhang et al., 2018b) or sophisticated heuristics (Hariharan & Girshick, 2017). An alternative to generating raw data in the form of visually realistic images is to hallucinate examples in a learned feature space (Wang et al., 2018; Gao et al., 2018; Schwartz et al., 2018; Zhang et al., 2018a; Xian et al., 2019). This can be achieved by, for example, integrating a “hallucinator” module into a meta-learning framework, where it generates hallucinated examples, guided by real examples (Wang et al., 2018). The learner then uses an augmented training set, which includes both the real and the hallucinated examples, to learn classifiers. While existing approaches have shown that it is possible to adjust the hallucinator to generate examples that are helpful for classification, the generation process is still far from producing effective samples in the few-shot regime. Our key insight is that, to facilitate data hallucination to improve the performance of new classification tasks, two important requirements should be satisfied: (i) precision: the generated examples should lead to good classifier performance, and (ii) collaboration: all the components, including the hallucinator and the learner, need to be trained jointly. In this work, we propose the PrecisE Collaborative hAlluciNator (PECAN), which integrates these requirements into a general meta-learning with hallucination framework, as shown in Figure 1.
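As a reference point for what follows, here is a minimal PyTorch sketch of a feature-space hallucinator in the style of Wang et al. (2018): a seed feature concatenated with a noise vector is mapped to a hallucinated feature. Layer sizes here are illustrative assumptions, not the configuration used in our experiments.

```python
import torch
import torch.nn as nn

class Hallucinator(nn.Module):
    """Maps (real seed feature, noise) to a hallucinated feature vector."""
    def __init__(self, feat_dim=512, noise_dim=128):
        super().__init__()
        self.noise_dim = noise_dim
        self.net = nn.Sequential(
            nn.Linear(feat_dim + noise_dim, 512), nn.ReLU(),
            nn.Linear(512, feat_dim), nn.ReLU())

    def forward(self, seed_feats):                     # (n_real, feat_dim)
        z = torch.randn(seed_feats.size(0), self.noise_dim)
        return self.net(torch.cat([seed_feats, z], dim=1))

h = Hallucinator()
real = torch.randn(5, 512)                             # 5 real shot features
augmented = torch.cat([real, h(real)], dim=0)          # real + hallucinated
```

The learner is then trained on the augmented set, so gradients from the classification loss flow back into the hallucinator.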
Assume that we have a hallucinator to generate additional examples from the original small training set. A precise hallucinator means that a classifier trained on both the hallucinated and the few real examples should produce superior validation accuracy. This can be achieved by training the hallucinator end-to-end with the learner and back-propagating a classification loss based on ground-truth labels of validation data (Wang et al., 2018). Since this precision is measured using ground-truth labels, we term it hard precision. More importantly, if the hallucinator perfectly captures the target distribution, a classifier trained on a set of hallucinated examples, despite being generated from a small set of real examples, should produce roughly the same validation accuracy as a classifier trained on a large set of real examples, when these two sets are of the same sample size (Shmelkov et al., 2018). This indicates a similar level of realism and diversity between the generated and the real examples, as shown in Figure 1a. Motivated by this observation, we introduce an additional precision-inducing loss function, which explicitly encourages the hallucinator to generate examples such that a classifier trained on them makes predictions similar to one trained on a large amount of real examples. Given that this precision is measured based on classifier predictions, we term it soft precision. This notion of precision, which is complementary to hard precision and, as shown in our experiments, effective, is lacking in current approaches (Wang et al., 2018). Satisfying the precision requirement alone is not sufficient, since the classification objective is still directly associated with the learner, and thus the hallucinator continues to rely on the back-propagated signal to update its parameters. This leads to a potential undesirable effect of imbalanced training between the hallucinator and the learner: the learner tends to be stronger and makes allowances for errors in the hallucination, whereas the hallucinator becomes “lazy” and does not make its best effort to capture the data distributions, which is empirically observed in our experiments (see Figure 3). To address this issue, our key insight is to enforce direct and early supervision for the hallucinator, and to make its contribution to the overall classification transparent, as shown in Figure 1b. Hence, we introduce a collaborative objective for the hallucinator, which allows us to directly influence the generation process to favor highly discriminative examples right after hallucination, and to strengthen the cooperation between the hallucinator and the learner. Our contributions are three-fold. (1) We propose a novel loss that helps produce precise hallucinated examples, by using the classifier trained on real examples as guidance and encouraging the classifier trained on hallucinated examples to mimic its behavior. (2) We introduce a collaborative objective for the hallucinator as early supervision, which directly facilitates the generation process and improves the cooperation between the hallucinator and the learner. (3) By integrating these properties, we develop a general meta-learning with hallucination framework, which is model-agnostic and can be combined with any meta-learning model to consistently boost its few-shot learning performance.
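A minimal sketch of the soft-precision idea follows: the classifier trained on hallucinated examples is encouraged to mimic, on validation inputs, the predictions of a classifier trained on many real examples. The KL-divergence form and the temperature below are our illustrative choices; the exact loss in the paper may be formulated differently.

```python
import torch
import torch.nn.functional as F

def soft_precision_loss(logits_halluc, logits_real, T=1.0):
    """Match predictions of the hallucination-trained classifier to those of
    a reference classifier trained on a large set of real examples."""
    p_real = F.softmax(logits_real.detach() / T, dim=1)   # fixed reference
    log_p_hal = F.log_softmax(logits_halluc / T, dim=1)
    return F.kl_div(log_p_hal, p_real, reduction="batchmean")

logits_h = torch.randn(32, 5, requires_grad=True)  # classifier on hallucinated set
logits_r = torch.randn(32, 5)                      # classifier on large real set
loss = soft_precision_loss(logits_h, logits_r)
loss.backward()   # gradient reaches the hallucinator through logits_h
```

Detaching the reference logits keeps the real-data classifier fixed, so only the hallucination side is pushed to close the gap.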
Here we mainly focus on few-shot classification tasks; we show that our approach applies to few-shot regression tasks as well in Appendix A.7. 2 RELATED WORK. As one of the unsolved problems in machine learning and computer vision, few-shot learning is attracting growing interest in the deep learning era (Miller et al., 2000; Fei-Fei et al., 2006; Lake et al., 2015; Santoro et al., 2016; Wang & Hebert, 2016; Vinyals et al., 2016; Snell et al., 2017; Finn et al., 2017; Hariharan & Girshick, 2017; George et al., 2017; Triantafillou et al., 2017; Edwards & Storkey, 2017; Mishra et al., 2018; Douze et al., 2018; Wang et al., 2018; Chen et al., 2019a; Dvornik et al., 2019). Successful generalization from few training samples requires appropriate “inductive biases” or shared knowledge from related tasks (Baxter, 1997), which is commonly acquired through transfer learning and, more recently, meta-learning (Thrun, 1998; Schmidhuber, 1987; Schmidhuber et al., 1997; Bengio et al., 1992). By explicitly “learning-to-learn” over a series of few-shot learning tasks (i.e., episodes), which are simulated from base classes, meta-learning exploits accumulated task-agnostic knowledge to target few-shot learning problems of novel classes. Within this paradigm, various types of meta-knowledge have recently been explored, including (1) a generic feature embedding or metric space, in which images are easy to classify using a distance-based classifier such as cosine similarity or nearest neighbor (Koch et al., 2015; Vinyals et al., 2016; Snell et al., 2017; Sung et al., 2018; Ren et al., 2018; Oreshkin et al., 2018); (2) a common initialization of network parameters (Finn et al., 2017; Nichol & Schulman, 2018; Finn et al., 2018) or learned update rules (Andrychowicz et al., 2016; Ravi & Larochelle, 2017; Munkhdalai & Yu, 2017; Li et al., 2017; Rusu et al., 2019); (3) a transferable strategy to estimate model parameters based on few novel class examples (Bertinetto et al., 2016; Qiao et al., 2018; Qi et al., 2018; Gidaris & Komodakis, 2018), or from an initial small dataset model (Wang & Hebert, 2016; Wang et al., 2017). Complementary to these discriminative approaches, our work focuses on synthesizing samples to deal with data scarcity. There has been progress in this direction of data hallucination, either in pixel or feature space (Salakhutdinov et al., 2012; George et al., 2017; Lake et al., 2013; 2015; Wong & Yuille, 2015; Rezende et al., 2014; Goodfellow et al., 2014; Radford et al., 2016; Dixit et al., 2017; Hariharan & Girshick, 2017; Wang et al., 2018; Gao et al., 2018; Schwartz et al., 2018; Zhang et al., 2018a). However, it is still challenging for modern generative models to capture the entirety of the data distribution (Salimans et al., 2016) and produce useful examples that maximally boost recognition performance (Wang et al., 2018), especially in the small sample-size regime. In the context of generative adversarial networks (GANs), Shmelkov et al. (2018) show that images synthesized by state-of-the-art approaches, despite their impressive visual quality, are insufficient to tackle recognition tasks, and encourage the use of quantitative measures based on classification results to evaluate GAN models.
Rather than using classification results as a performance measure, we go a step further in this paper by leveraging classification objectives to guide the generation process. Among other related work, Wang et al. (2018) propose a general data hallucination framework based on meta-learning, which is a special case of our approach. A GAN-like hallucinator takes a seed example and a random noise vector as input to generate a new sample; this hallucinator is trained jointly with the classifier in an end-to-end manner. Delta-encoder (Schwartz et al., 2018) is a variant of Wang et al. (2018): instead of using noise vectors, it modifies an auto-encoder to extract transferable intra-class deformations, i.e., “deltas”, and applies them to novel samples to generate new instances. Unlike the above approaches that directly use the produced samples to train the classifier, MetaGAN (Zhang et al., 2018a) trains the classifier in an adversarial manner to augment it with the ability to discriminate between real and synthesized data. Another variant (Gao et al., 2018) explicitly preserves covariance information to enable better augmentation. Our work investigates critical yet unexplored properties that the data hallucinator in this paradigm should satisfy. These properties are general and can be flexibly incorporated into existing meta-learning approaches and hallucination methods, providing significant gains irrespective of these choices.
In this paper, the authors address few-shot learning via a precise collaborative hallucinator. In particular, they follow the framework of (Wang et al., 2018), and introduce two kinds of training regularization. The soft precision-inducing loss follows the spirit of adversarial learning, by using knowledge distillation. Additionally, a collaborative objective is introduced as intermediate supervision to enhance the learning capacity of the hallucinator.
SP:ca085e8e2675fe579df4187290b7b7dc37b8a729
Meta-Learning by Hallucinating Useful Examples
This paper describes a method that builds upon the work of Wang et al. It meta-learns to hallucinate additional samples for few-shot classification tasks. The two main insights of the paper are a) to propose a soft-precision term which compares the classifiers' predictions for all classes other than the ground-truth class for both a few-shot training set and the hallucinated set, and b) to introduce the idea of applying direct early supervision in the feature space in which the hallucination is conducted, in addition to the classifier embedding space. This allows for stronger supervision and prevents the hallucinated samples from being unrepresentative of the classes. The authors show small but consistent improvements in performance on two benchmarks, ImageNet and miniImageNet, with two different network architectures, versus various state-of-the-art meta-learning algorithms with and without hallucination. The authors have adequately cited and reviewed the existing literature. They have also conducted many experiments (both in the main paper and in the supplementary material) to show the superior performance of their approach versus the existing ones. Furthermore, their ablation studies, both for the type of soft precision loss and for their various individual losses, are quite nice and thorough.
SP:ca085e8e2675fe579df4187290b7b7dc37b8a729
DyNet: Dynamic Convolution for Accelerating Convolution Neural Networks
1 INTRODUCTION. Convolutional neural networks (CNNs) have achieved state-of-the-art performance in many computer vision tasks (Krizhevsky et al., 2012; Szegedy et al., 2013), and the neural architectures of CNNs have been evolving over the years (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014; Szegedy et al., 2015; He et al., 2016; Hu et al., 2018; Zhong et al., 2018a; b). However, modern high-performance CNNs often require substantial computation resources to execute a large number of convolution kernel operations. Aside from accuracy, making CNNs applicable on mobile devices by building lightweight and efficient deep models has attracted much more attention recently (Howard et al., 2017; Sandler et al., 2018; Zhang et al., 2018; Ma et al., 2018). These methods can be roughly categorized into two types: efficient network design and model compression. Representative methods of the former category are MobileNet (Howard et al., 2017; Sandler et al., 2018) and ShuffleNet (Ma et al., 2018; Zhang et al., 2018), which use depth-wise separable convolution and channel-level shuffle techniques to reduce computation cost. On the other hand, model compression based methods tend to obtain a smaller network by compressing a larger network via pruning, factorization or mimicking (Chen et al., 2015; Han et al., 2015a; Jaderberg et al., 2014; Lebedev et al., 2014; Ba & Caruana, 2014). Although some handcrafted efficient network structures have been designed, we observe that significant correlations still exist among convolutional kernels and introduce a large amount of redundant calculation. Moreover, these small networks are hard to compress. For example, Liu et al. (2019) compress MobileNetV2 to 124M FLOPs, but the accuracy drops by 5.4% on ImageNet. We theoretically analyze the above observation and find that this phenomenon is caused by the nature of static convolution, where correlated kernels cooperate to extract noise-irrelevant features. Thus it is hard to compress the fixed convolution kernels without information loss. We also find that if we linearly fuse several convolution kernels to generate one dynamic kernel based on the input, we can obtain the noise-irrelevant features without the cooperation of multiple kernels, and further reduce the computation cost of the convolution layer remarkably. Based on the above observations and analysis, in this paper we propose a novel dynamic convolution method named DyNet. The overall framework of DyNet is shown in Figure 1; it consists of a coefficient prediction module and a dynamic generation module. The coefficient prediction module is trainable and designed to predict the coefficients of fixed convolution kernels. The dynamic generation module then generates a dynamic kernel based on the predicted coefficients. Our proposed dynamic convolution method is simple to implement and can be used as a drop-in plugin for any convolution layer to reduce computation cost. We evaluate the proposed DyNet on state-of-the-art networks such as MobileNetV2, ShuffleNetV2 and ResNets. Experimental results show that DyNet reduces 37.0% of the FLOPs of ShuffleNetV2 (1.0) while further improving the Top-1 accuracy on ImageNet by 1.0%. For MobileNetV2 (1.0), ResNet18 and ResNet50, DyNet reduces 54.7%, 67.2% and 71.3% of the FLOPs respectively, while the Top-1 accuracy on ImageNet changes by −0.27%, −0.6% and −0.08%.
Meanwhile, DyNet further accelerates the inference speed of MobileNetV2 (1.0), ResNet18 and ResNet50 by 1.87×, 1.32× and 1.48× on the CPU platform, respectively. 2 RELATED WORK. We review related work from three aspects: efficient convolution neural network design, model compression and dynamic convolutional kernels. 2.1 EFFICIENT CONVOLUTION NEURAL NETWORK DESIGN. In many computer vision tasks (Krizhevsky et al., 2012; Szegedy et al., 2013), model design plays a key role. The increasing demand for high-quality networks on mobile/embedded devices has driven the study of efficient network design (He & Sun, 2015). For example, GoogleNet (Szegedy et al., 2015) increases the depth of networks with lower complexity compared to simply stacking convolution layers; SqueezeNet (Iandola et al., 2016) deploys a bottleneck approach to design a very small network; Xception (Chollet, 2017), MobileNet (Howard et al., 2017) and MobileNetV2 (Sandler et al., 2018) use depth-wise separable convolution to reduce computation and model size. ShuffleNet (Zhang et al., 2018) and ShuffleNetV2 (Ma et al., 2018) shuffle channels to reduce the computation of 1×1 convolution kernels and improve accuracy. Despite the progress made by these efforts, we find that there still exists redundancy between convolution kernels, which causes redundant computation. 2.2 MODEL COMPRESSION. Another approach to obtaining small networks is model compression. Factorization based methods (Jaderberg et al., 2014; Lebedev et al., 2014) try to speed up the convolution operation by using tensor decomposition to approximate the original convolution operation. Knowledge distillation based methods (Ba & Caruana, 2014; Romero et al., 2014; Hinton et al., 2015) learn a small network to mimic a larger teacher network. Pruning based methods (Han et al., 2015a; b; Wen et al., 2016; Liu et al., 2019) try to reduce computation by pruning redundant connections or convolution channels. Compared with those methods, DyNet is more effective, especially when the target network is already efficient. For example, Liu et al. (2019) obtain a smaller model of 124M FLOPs by pruning MobileNetV2, but the accuracy drops by 5.4% on ImageNet compared with the model with 291M FLOPs. With DyNet, we can reduce the FLOPs of MobileNetV2 (1.0) from 298M to 129M with an accuracy drop of only 0.27%. 2.3 DYNAMIC CONVOLUTION KERNEL. Generating dynamic convolution kernels appears in both computer vision and natural language processing (NLP) tasks. In the computer vision domain, Klein et al. (2015) and Brabandere et al. (Jia et al., 2016) directly generate convolution kernels via a linear layer based on the feature maps of previous layers. Because convolution kernels have a large number of parameters, the linear layer is inefficient on hardware. Our proposed method solves this problem by merely predicting the coefficients for linearly combining static kernels, and achieves a real speed-up for CNNs on hardware. The idea of linearly combining static kernels using predicted coefficients has been proposed by Yang et al. (2019), but they focus on using more parameters to make models more expressive, while we focus on reducing redundant calculations in convolution. We provide a theoretical analysis and conduct correlation experiments to show that correlations among convolutional kernels can be reduced by dynamically fusing several kernels.
In the NLP domain, some works (Shen et al., 2018; Wu et al., 2019; Gong et al., 2018) incorporate context information to generate input-aware convolution filters which can change according to input sentences of various lengths. These methods also directly generate convolution kernels via a linear layer. Because CNNs in NLP are smaller and the convolution kernels are one-dimensional, the inefficiency issue of the linear layer is alleviated. Moreover, Wu et al. (2019) alleviate this issue by utilizing depthwise convolution and a strategy of sharing weights across layers. These methods are designed to improve the adaptivity and flexibility of language modeling, while our method aims to cut down redundant computation cost. 3 DYNET: DYNAMIC CONVOLUTION IN CNNS. In this section, we first describe the motivation of DyNet. Then we explain the proposed dynamic convolution in detail. Finally, we illustrate the DyNet based architectures of our proposed Dy-mobile, Dy-shuffle, Dy-ResNet18 and Dy-ResNet50. 3.1 MOTIVATION. As illustrated in previous works (Han et al., 2015a; b; Wen et al., 2016; Liu et al., 2019), convolutional kernels are naturally correlated in deep models. For some well-known networks, we plot the distribution of the Pearson product-moment correlation coefficient between feature maps in Figure 2. Most existing works try to reduce correlations by compression. However, efficient and small networks like MobileNets are harder to prune even though the correlation is still significant. We think these correlations are vital for maintaining performance, because the correlated kernels cooperate to obtain noise-irrelevant features. Take face recognition as an example, where the pose or the illumination is not supposed to change the classification result; the feature maps will therefore gradually become noise-irrelevant as they go deeper. Based on the theoretical analysis in Appendix A, we find that if we dynamically fuse several kernels, we can get noise-irrelevant features without the cooperation of redundant kernels. In this paper, we propose the dynamic convolution method, which learns coefficients to fuse multiple kernels into a dynamic one based on image contents. We give a more in-depth analysis of our motivation in Appendix A. 3.2 DYNAMIC CONVOLUTION. The goal of dynamic convolution is to learn a group of kernel coefficients, which fuse multiple fixed kernels into a dynamic one. We demonstrate the overall framework of dynamic convolution in Figure 1. We first utilize a trainable coefficient prediction module to predict coefficients. Then we further propose a dynamic generation module to fuse the fixed kernels into a dynamic one. We illustrate the coefficient prediction module and the dynamic generation module in detail in the remainder of this section. Coefficient prediction module The coefficient prediction module is proposed to predict coefficients based on image contents. As shown in Figure 3, it can be composed of a global average pooling layer and a fully connected layer with Sigmoid as the activation function. The global average pooling layer aggregates the input feature maps into a $1 \times 1 \times C_{in}$ vector, which serves as a feature extraction layer. The fully connected layer then maps the feature into a $1 \times 1 \times C$ vector, which contains the coefficients for the fixed convolution kernels of several dynamic convolution layers.
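The coefficient prediction module described above admits a direct implementation; the following PyTorch sketch (with illustrative channel sizes) mirrors the global-average-pooling-plus-fully-connected design.

```python
import torch
import torch.nn as nn

class CoefficientPredictor(nn.Module):
    """GAP + FC + Sigmoid: maps input feature maps to kernel coefficients."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)   # (B, C_in, H, W) -> (B, C_in, 1, 1)
        self.fc = nn.Linear(c_in, c_out)

    def forward(self, x):
        v = self.pool(x).flatten(1)           # (B, C_in) feature vector
        return torch.sigmoid(self.fc(v))      # (B, C) coefficients in (0, 1)

eta = CoefficientPredictor(c_in=64, c_out=128)(torch.randn(2, 64, 32, 32))
```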
Dynamic generation module A dynamic convolution layer with weight tensor of shape $[C_{out} \times g_t, C_{in}, k, k]$ corresponds to $C_{out} \times g_t$ fixed kernels and $C_{out}$ dynamic kernels; the shape of each kernel is $[C_{in}, k, k]$. Here $g_t$ denotes the group size, which is a hyperparameter. We denote the fixed kernels as $w_t^i$, the dynamic kernels as $\tilde{w}_t$, and the coefficients as $\eta_t^i$, where $t = 0, \dots, C_{out}$ and $i = 0, \dots, g_t$. After the coefficients are obtained, we generate the dynamic kernels as follows: $$\tilde{w}_t = \sum_{i=1}^{g_t} \eta_t^i \cdot w_t^i \qquad (1)$$ Training algorithm For training the proposed dynamic convolution, a standard batch-based training scheme is not directly suitable, because the convolution kernel differs across the input images within the same mini-batch. Therefore, during training we fuse the feature maps based on the coefficients rather than fusing the kernels. The two are mathematically equivalent, as shown in Eq. (2): $$\tilde{O}_t = \tilde{w}_t \otimes x = \Big(\sum_{i=1}^{g_t} \eta_t^i \cdot w_t^i\Big) \otimes x = \sum_{i=1}^{g_t} \big(\eta_t^i \cdot w_t^i\big) \otimes x = \sum_{i=1}^{g_t} \eta_t^i \cdot \big(w_t^i \otimes x\big) = \sum_{i=1}^{g_t} \eta_t^i \cdot O_t^i, \qquad (2)$$ where $x$ denotes the input, $\tilde{O}_t$ denotes the output of the dynamic kernel $\tilde{w}_t$, and $O_t^i$ denotes the output of the fixed kernel $w_t^i$.
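A minimal PyTorch sketch of this training-time formulation is shown below: all $C_{out} \times g_t$ fixed kernels are applied once, and the resulting feature maps are fused with per-sample coefficients as in Eq. (2). All sizes are illustrative, and the module is a simplified stand-in for the paper's actual layers.

```python
import torch
import torch.nn as nn

class DynamicConv(nn.Module):
    def __init__(self, c_in, c_out, k=3, gt=4):
        super().__init__()
        self.gt, self.c_out = gt, c_out
        # Bank of C_out * g_t fixed kernels, applied in a single convolution.
        self.conv = nn.Conv2d(c_in, c_out * gt, k, padding=k // 2, bias=False)
        # Coefficient prediction: GAP -> FC -> Sigmoid, as in Figure 3.
        self.predict = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(c_in, c_out * gt), nn.Sigmoid())

    def forward(self, x):                               # x: (B, C_in, H, W)
        eta = self.predict(x).view(-1, self.c_out, self.gt, 1, 1)
        o = self.conv(x)                                 # (B, C_out*g_t, H, W)
        o = o.view(o.size(0), self.c_out, self.gt, *o.shape[2:])
        return (eta * o).sum(dim=2)                      # Eq. (2): fuse outputs

y = DynamicConv(16, 32)(torch.randn(2, 16, 28, 28))     # -> (2, 32, 28, 28)
```

Fusing the feature maps rather than the kernels is what makes mini-batch training possible even though each image implies a different dynamic kernel; at inference time the kernels can instead be fused once per image, which is where the FLOP savings come from.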
The authors propose to use dynamic convolutional kernels as a means to reduce the computation cost in static CNNs while maintaining their performance. The dynamic kernels are obtained by a linear combination of static kernels where the weights of the linear combination are input-dependent (they are obtained similarly to the coefficients in squeeze-and-excite). The authors also include a theoretical and experimental study of the correlation.
SP:28a2ee0012e23223b2c3501a94a5e72e0c718c66
DyNet: Dynamic Convolution for Accelerating Convolution Neural Networks
This paper proposed dynamic convolution (DyNet) to accelerate convolutional networks. The new method is tested on the ImageNet dataset with three different backbones. It reduces the computation FLOPs by a large margin while keeping similar classification accuracy. An additional segmentation experiment on the Cityscapes dataset also shows the new module can save a large amount of computation while maintaining similar segmentation accuracy.
SP:28a2ee0012e23223b2c3501a94a5e72e0c718c66
Star-Convexity in Non-Negative Matrix Factorization
1 INTRODUCTION. Non-negative matrix factorization (NMF) is a ubiquitous technique for data analysis where one attempts to factorize a measurement matrix X into the product of non-negative matrices U, V (Lee and Seung, 1999). This simple problem has applications in recommender systems (Luo et al., 2014), scientific analysis (Berne et al., 2007; Trindade et al., 2017), computer vision (Gillis, 2012), internet distance prediction (Mao et al., 2006), audio processing (Schmidt et al., 2007) and many more domains. Often, the non-negativity is crucial for interpretability; for example, in the context of crystallography, the light sources, which are represented as matrix factors, have non-negative intensity (Suram et al., 2016). As with many other non-convex optimization problems, finding the exact solution to NMF is NP-hard (Pardalos and Vavasis, 1991; Vavasis, 2009). NMF's tremendous practical success is, however, at odds with such worst-case analysis, and simple algorithms based upon gradient descent are known to find good solutions in real-world settings (Lee and Seung, 2001). At the time when NMF was proposed, most analyses of optimization problems within machine learning focused on convex formulations such as SVMs (Cortes and Vapnik, 1995), but owing to the success of neural networks, non-convex optimization has experienced a resurgence in interest. Here, we revisit NMF from a fresh perspective, utilizing recent tools developed in the context of optimization in deep learning. Specifically, our main inspiration is the recent work of Kleinberg et al. (2018) and Zhou et al. (2019), which empirically demonstrates that gradients typically point towards the final minimizer for neural networks trained on real-world datasets and analyzes the implications of such convexity properties for efficient optimization. In this paper, we show theoretically and empirically that a similar property called star-convexity holds in NMF. From a theoretical perspective, we consider an NMF instance with a planted solution, inspired by the stochastic block model for social networks (Holland et al., 1983; Decelle et al., 2011) and the planted clique problem studied in the sum-of-squares literature (Barak et al., 2016). We prove that between two points the loss is convex with high probability, and conclude that the loss surface is star-convex in the typical case, even if the loss is computed over unobserved data.
From an empirical perspective, we verify that our theoretical results hold for an extensive collection of real-world datasets spanning collaborative filtering (Zhou et al., 2008; Kula, 2017; Harper and Konstan, 2016), signal decomposition (Zhu, 2016; Li and Ngom, 2013; Li et al., 2001; Erichson et al., 2018) and audio processing (Flenner and Hunter, 2017; canto Foundation), and demonstrate that the star-convex behavior results in efficient optimization. Finally, we show that star-convex behavior becomes more likely with a growing number of parameters, suggesting that a similar result may hold as neural networks become wider. We provide supporting empirical evidence for this hypothesis on modern network architectures. 2 NMF AND STAR-CONVEXITY. The aim of NMF is to decompose some large measurement matrix $X \in \mathbb{R}^{n \times m}$ into two non-negative matrices $U \in \mathbb{R}^{n \times r}_{+}$ and $V \in \mathbb{R}^{r \times m}_{+}$ such that $X \approx UV$. The canonical formulation of NMF is $$\min_{U, V \ge 0} \ell(U, V), \quad \text{where} \quad \ell(U, V) = \frac{1}{2} \| UV - X \|_F^2 \qquad (1)$$ NMF is commonly used in recommender systems, where entry $(i, j)$ of $X$ for example corresponds to the rating user $i$ has given to movie $j$ (Luo et al., 2014). In such settings, data might be missing, as all users have not rated all movies. In those cases, it is common to only consider the loss over observed data (Zhang et al., 2006; Candès and Recht, 2009). We let $\hat{1}(i, j)$ be an indicator variable that is 1 if entry $(i, j)$ is “observed” and 0 otherwise. The loss function is then $$\min_{U, V \ge 0} \ell(U, V) = \frac{1}{2} \sum_{i, j} \hat{1}(i, j) \left( [UV]_{ij} - X_{ij} \right)^2 \qquad (2)$$ NMF is similar to PCA, which admits spectral strategies; however, the non-negative constraints in NMF prevent such solutions and result in NP-hardness (Vavasis, 2009). Work on the computational complexity of NMF has shown that the problem is tractable for small constant dimension $r$ via algebraic methods (Arora et al., 2012). In practice, however, these algorithms are not used, and simple variants of gradient descent, possibly via multiplicative updates (Lee and Seung, 2001), are popular and known to work reliably (Koren et al., 2009). This gap between theoretical hardness and practical performance is also found in deep learning. Optimizing neural networks is NP-hard in general (Blum and Rivest, 1989), but in practice they can be optimized with simple stochastic gradient descent algorithms to outmatch humans in tasks such as face verification (Lu and Tang, 2015) and playing Atari games (Mnih et al., 2015).
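For concreteness, the following NumPy sketch evaluates both objectives on a planted instance; the sampling choices here (half-normal entries, observation probability p) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, r, p = 100, 100, 10, 0.5
U_star = np.abs(rng.normal(size=(n, r)))       # planted half-normal factors
V_star = np.abs(rng.normal(size=(r, m)))
X = U_star @ V_star                             # measurement matrix

def nmf_loss(U, V, X, mask=None):
    """Eq. (1) if mask is None, otherwise the masked loss of Eq. (2)."""
    resid = U @ V - X
    if mask is not None:
        resid = resid * mask                    # only observed entries count
    return 0.5 * np.sum(resid ** 2)

mask = rng.random((n, m)) < p                   # entry (i, j) observed w.p. p
U = np.abs(rng.normal(size=(n, r)))
V = np.abs(rng.normal(size=(r, m)))
print(nmf_loss(U, V, X), nmf_loss(U, V, X, mask))
```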
Recent work on understanding the geometry of neural network loss surfaces has promoted the idea of convexity properties . The work of Izmailov et al . ( 2018 ) shows that the loss surface is convex around the local optimum , while Zhou et al . ( 2019 ) and Kleinberg et al . ( 2018 ) show that the gradients during optimization typically point towards the local minimum the network eventually converges to . Of central importance in this line of work is star-convexity , a property of a function f that guarantees it is convex along straight paths towards the optimum x* . See Figure 2 for an example . Formally , it is defined as follows . Definition 1 . A function f : R^n → R is star-convex towards x* if for all λ ∈ [0, 1] and x ∈ R^n , we have f(λx + (1 − λ)x*) ≤ λ f(x) + (1 − λ) f(x*) . Optimizing star-convex functions can be done in polynomial time ( Lee and Valiant , 2016 ) , and Kleinberg et al . ( 2018 ) show that the function only needs to be star-convex under a natural noise model . NMF is not star-convex in general , as it is NP-hard ; however , it is natural to conjecture that NMF is star-convex in the typical case . Such a property could explain the practical success of NMF on real-world datasets , which are not worst-case . This will be the working hypothesis of this paper , where the typical case is formalized in Theorem 1 . Indeed , one can verify numerically that NMF is typically star-convex for natural distributions and realistically sized matrices ; see Figure 1 , where we consider a rank-10 decomposition of (100, 100) matrices with iid half-normal entries and a planted solution , sampled as per Assumption 1 given in the next section . The following sections are dedicated to proving that NMF is star-convex with high probability in a planted model , and to confirming that this phenomenon generalizes to datasets from the real world , which are far from worst-case . 3 PROVING TYPICAL-CASE STAR-CONVEXITY . Our aim now is to prove that the NMF loss function is typically star-convex for natural , non-worst-case distributions of NMF instances . We will consider a slightly weaker notion of star-convexity where f(λx + (1 − λ)x*) ≤ λ f(x) + (1 − λ) f(x*) holds not for all x , but for random x with high probability . This is in fact the best achievable : an NMF instance with u1 = 1 , u* = 0 and v1 = 0 , v* = 1 is not star-convex . Our results hold with high probability in high dimensions , similar to Dvoretzky ’ s theorem in convex geometry ( Dvoredsky , 1961 ; Davis ) . Inspired by the stochastic block model of social networks ( Holland et al. , 1983 ; Decelle et al. , 2011 ) and the planted clique problem ( Barak et al. , 2016 ) , we will focus on a setting with a planted random solution . In Section 4 we verify that conclusions drawn from this model transfer to real-world datasets . We will assume that there is a planted optimal solution (U*, V*) , where entries of these matrices are sampled iid from a class of distributions with good concentration properties that includes the half-normal distribution and bounded distributions . As is standard in random matrix theory ( Vershynin , 2010 ) , we will develop non-asymptotic results that hold with a probability that grows as the matrices of shape (n, r) and (r, m) become large . For this reason , we will need to specify how r and m depend on n. Assumption 1 .
For (U, V) ∈ R^{n×r} × R^{r×m} we assume that the entries of the matrices U , V are sampled iid from a continuous distribution with non-negative support that either (i) is bounded or (ii) can be expressed as a 1-Lipschitz function of a Gaussian distribution . As n → ∞ , we assume that r grows as n^γ up to a constant factor for γ ∈ [1/2, 1] , and m as n up to a constant factor . We are now ready to state our main result : the loss function in Equation 1 is convex on a straight line between points sampled as per Assumption 1 , and thus satisfies our slightly weaker notion of star-convexity , with high probability . The probability increases as the size of the problem increases , suggesting a surprising benefit of high dimensionality . We also show similar results for the loss function of Equation 2 with unobserved data , under the assumption that the event that any entry is observed occurs independently with constant probability p. Below we sketch the proof idea and key ingredients ; the formal proof is given in Appendix D. Theorem 1 . ( Main ) Let matrices U1 , V1 , U2 , V2 , U* , V* be sampled according to Assumption 1 . Then there exist positive constants c1 , c2 such that with probability ≥ 1 − c1 exp(−c2 n^{1/3}) , the loss function ℓ(U, V) in Equation 1 is convex on the straight line (U1, V1) → (U2, V2) . The same holds along the line (U1, V1) → (U*, V*) . It also holds if any entry (i, j) is observed independently with constant probability p , but with probability ≥ 1 − c1 exp(−c2 r^{1/3}) .
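Both Definition 1 and the planted setting of Theorem 1 lend themselves to a quick numerical probe ; the sketch below , a hedged illustration rather than the formal argument , samples factors with iid half-normal entries as in Assumption 1 and tests the star-convexity inequality along the straight path towards the planted optimum . Sizes , the λ grid and the tolerance are our choices .

```python
# A hedged numerical probe of Definition 1 in the planted setting of
# Theorem 1: factors have iid half-normal entries per Assumption 1, and the
# star-convexity inequality is tested on a grid of lambda values along the
# segment (U1, V1) -> (U*, V*).
import numpy as np

rng = np.random.default_rng(1)
n, m, r = 100, 100, 10
hn = lambda shape: np.abs(rng.normal(size=shape))   # half-normal sampler

U_star, V_star = hn((n, r)), hn((r, m))
X = U_star @ V_star                                 # planted optimum has loss 0
loss = lambda U, V: 0.5 * np.linalg.norm(U @ V - X, "fro") ** 2

U1, V1 = hn((n, r)), hn((r, m))
f1, f_star = loss(U1, V1), loss(U_star, V_star)

violations = 0
for lam in np.linspace(0.0, 1.0, 51):
    f_lam = loss(lam * U1 + (1 - lam) * U_star, lam * V1 + (1 - lam) * V_star)
    if f_lam > lam * f1 + (1 - lam) * f_star + 1e-6 * f1:   # Definition 1
        violations += 1
print("star-convexity violations along the path:", violations)
```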
The paper derives results for nonnegative-matrix factorization along the lines of recent results on SGD for DNNs, showing that the loss is star-convex towards randomized planted solutions. The star-convexity property is also shown to hold to some degree on real world datasets. The paper argues that these results explain the good performance that usual gradient descent procedures achieve in practice. The paper also puts forward a conjecture that more parameters make the loss function easier to optimize by making it more likely that star convexity holds, and that a similar conclusion could hold for DNNs.
SP:9e712c6f60b19d9309721eea514589755b4ce648
Star-Convexity in Non-Negative Matrix Factorization
1 INTRODUCTION . Non-negative matrix factorization ( NMF ) is a ubiquitous technique for data analysis where one attempts to factorize a measurement matrix X into the product of non-negative matrices U , V ( Lee and Seung , 1999 ) . This simple problem has applications in recommender systems ( Luo et al. , 2014 ) , scientific analysis ( Berne et al. , 2007 ; Trindade et al. , 2017 ) , computer vision ( Gillis , 2012 ) , internet distance prediction ( Mao et al. , 2006 ) , audio processing ( Schmidt et al. , 2007 ) and many more domains . Often , the non-negativity is crucial for interpretability , for example , in the context of crystallography , the light sources , which are represented as matrix factors , have non-negative intensity ( Suram et al. , 2016 ) . Like many other non-convex optimization problems , finding the exact solution to NMF is NP-hard ( Pardalos and Vavasis , 1991 ; Vavasis , 2009 ) . NMF ’ s tremendous practical success is however at odds with such worst-case analysis , and simple algorithms based upon gradient descent are known to find good solutions in real-world settings ( Lee and Seung , 2001 ) . At the time when NMF was proposed , most analyses of optimization problems within machine learning focused on convex formulations such as SVMs ( Cortes and Vapnik , 1995 ) , but owing to the success of neural networks , non-convex optimization has experienced a resurgence in interest . Here , we revisit NMF from a fresh perspective , utilizing recent tools developed in the context of optimization in deep learning . Specifically , our main inspiration is the recent work of Kleinberg et al . ( 2018 ) and Zhou et al . ( 2019 ) that empirically demonstrate that gradients typically point towards the final minimizer for neural networks trained on real-world datasets and analyze the implications of such convexity properties for efficient optimization . In this paper , we show theoretically and empirically that a similar property called star-convexity holds in NMF . From a theoretical perspective , we consider an NMF instance with planted solution , inspired by the stochastic block model for social networks ( Holland et al. , 1983 ; Decelle et al. , 2011 ) and the planted clique problem studied in sum-of-squares literature ( Barak et al. , 2016 ) . We prove that between two points the loss is convex with high probability , and conclude that the loss surface is star-convex in the typical case — even if the loss is computed over unobserved data . 
From an empirical perspective , we verify that our theoretical results hold for an extensive collection of real-world datasets spanning collaborative filtering ( Zhou et al. , 2008 ; Kula , 2017 ; Harper and Konstan , 2016 ) , signal decomposition ( Zhu , 2016 ; Li and Ngom , 2013 ; Li et al. , 2001 ; Erichson et al. , 2018 ) and audio processing ( Flenner and Hunter , 2017 ; canto Foundation ) , and demonstrate that the star-convex behavior results in efficient optimization . Finally , we show that star-convex behavior becomes more likely with a growing number of parameters , suggesting that a similar result may hold as neural networks become wider . We provide supporting empirical evidence for this hypothesis on modern network architectures . 2 NMF AND STAR-CONVEXITY . The aim of NMF is to decompose some large measurement matrix X ∈ R^{n×m} into two non-negative matrices U ∈ R^{n×r}_+ and V ∈ R^{r×m}_+ such that X ≈ UV . The canonical formulation of NMF is min_{U,V≥0} ℓ(U, V) , where ℓ(U, V) = (1/2) ‖UV − X‖²_F ( 1 ) NMF is commonly used in recommender systems where entry (i, j) of X , for example , corresponds to the rating user i has given to movie j ( Luo et al. , 2014 ) . In such settings , data might be missing as all users have not rated all movies . In those cases , it is common to only consider the loss over observed data ( Zhang et al. , 2006 ; Candès and Recht , 2009 ) . We let 1̂(i, j) be an indicator variable that is 1 if entry (i, j) is "observed" and 0 otherwise . The loss function is then min_{U,V≥0} ℓ(U, V) = (1/2) Σ_{i,j} 1̂(i, j) ( [UV]_{ij} − X_{ij} )² ( 2 ) NMF is similar to PCA , which admits spectral strategies ; however , the non-negativity constraints in NMF prevent such solutions and result in NP-hardness ( Vavasis , 2009 ) . Work on the computational complexity of NMF has shown that the problem is tractable for small constant dimensions r via algebraic methods ( Arora et al. , 2012 ) . In practice , however , these algorithms are not used , and simple variants of gradient descent , possibly via multiplicative updates ( Lee and Seung , 2001 ) , are popular and known to work reliably ( Koren et al. , 2009 ) . This gap between theoretical hardness and practical performance is also found in deep learning . Optimizing neural networks is NP-hard in general ( Blum and Rivest , 1989 ) , but in practice they can be optimized with simple stochastic gradient descent algorithms to outmatch humans in tasks such as face verification ( Lu and Tang , 2015 ) and playing Atari games ( Mnih et al. , 2015 ) .
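The multiplicative updates of Lee and Seung ( 2001 ) mentioned above are simple to state ; the following numpy sketch shows one standard variant for the Frobenius loss . The iteration count and the small epsilon guarding against division by zero are our choices .

```python
# A minimal sketch of the Lee and Seung (2001) multiplicative updates, the
# kind of simple first-order scheme the text notes works reliably in practice.
import numpy as np

rng = np.random.default_rng(3)
n, m, r = 100, 100, 10
X = np.abs(rng.normal(size=(n, r))) @ np.abs(rng.normal(size=(r, m)))
U = np.abs(rng.normal(size=(n, r)))
V = np.abs(rng.normal(size=(r, m)))
eps = 1e-12

for step in range(200):
    # Updates preserve non-negativity and do not increase ||UV - X||_F^2.
    U *= (X @ V.T) / (U @ V @ V.T + eps)
    V *= (U.T @ X) / (U.T @ U @ V + eps)

print("relative error:", np.linalg.norm(U @ V - X) / np.linalg.norm(X))
```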
Recent work on understanding the geometry of neural network loss surfaces has promoted the idea of convexity properties . The work of Izmailov et al . ( 2018 ) shows that the loss surface is convex around the local optimum , while Zhou et al . ( 2019 ) and Kleinberg et al . ( 2018 ) show that the gradients during optimization typically point towards the local minimum the network eventually converges to . Of central importance in this line of work is star-convexity , a property of a function f that guarantees it is convex along straight paths towards the optimum x* . See Figure 2 for an example . Formally , it is defined as follows . Definition 1 . A function f : R^n → R is star-convex towards x* if for all λ ∈ [0, 1] and x ∈ R^n , we have f(λx + (1 − λ)x*) ≤ λ f(x) + (1 − λ) f(x*) . Optimizing star-convex functions can be done in polynomial time ( Lee and Valiant , 2016 ) , and Kleinberg et al . ( 2018 ) show that the function only needs to be star-convex under a natural noise model . NMF is not star-convex in general , as it is NP-hard ; however , it is natural to conjecture that NMF is star-convex in the typical case . Such a property could explain the practical success of NMF on real-world datasets , which are not worst-case . This will be the working hypothesis of this paper , where the typical case is formalized in Theorem 1 . Indeed , one can verify numerically that NMF is typically star-convex for natural distributions and realistically sized matrices ; see Figure 1 , where we consider a rank-10 decomposition of (100, 100) matrices with iid half-normal entries and a planted solution , sampled as per Assumption 1 given in the next section . The following sections are dedicated to proving that NMF is star-convex with high probability in a planted model , and to confirming that this phenomenon generalizes to datasets from the real world , which are far from worst-case . 3 PROVING TYPICAL-CASE STAR-CONVEXITY . Our aim now is to prove that the NMF loss function is typically star-convex for natural , non-worst-case distributions of NMF instances . We will consider a slightly weaker notion of star-convexity where f(λx + (1 − λ)x*) ≤ λ f(x) + (1 − λ) f(x*) holds not for all x , but for random x with high probability . This is in fact the best achievable : an NMF instance with u1 = 1 , u* = 0 and v1 = 0 , v* = 1 is not star-convex . Our results hold with high probability in high dimensions , similar to Dvoretzky ’ s theorem in convex geometry ( Dvoredsky , 1961 ; Davis ) . Inspired by the stochastic block model of social networks ( Holland et al. , 1983 ; Decelle et al. , 2011 ) and the planted clique problem ( Barak et al. , 2016 ) , we will focus on a setting with a planted random solution . In Section 4 we verify that conclusions drawn from this model transfer to real-world datasets . We will assume that there is a planted optimal solution (U*, V*) , where entries of these matrices are sampled iid from a class of distributions with good concentration properties that includes the half-normal distribution and bounded distributions . As is standard in random matrix theory ( Vershynin , 2010 ) , we will develop non-asymptotic results that hold with a probability that grows as the matrices of shape (n, r) and (r, m) become large . For this reason , we will need to specify how r and m depend on n. Assumption 1 .
For (U, V) ∈ R^{n×r} × R^{r×m} we assume that the entries of the matrices U , V are sampled iid from a continuous distribution with non-negative support that either (i) is bounded or (ii) can be expressed as a 1-Lipschitz function of a Gaussian distribution . As n → ∞ , we assume that r grows as n^γ up to a constant factor for γ ∈ [1/2, 1] , and m as n up to a constant factor . We are now ready to state our main result : the loss function in Equation 1 is convex on a straight line between points sampled as per Assumption 1 , and thus satisfies our slightly weaker notion of star-convexity , with high probability . The probability increases as the size of the problem increases , suggesting a surprising benefit of high dimensionality . We also show similar results for the loss function of Equation 2 with unobserved data , under the assumption that the event that any entry is observed occurs independently with constant probability p. Below we sketch the proof idea and key ingredients ; the formal proof is given in Appendix D. Theorem 1 . ( Main ) Let matrices U1 , V1 , U2 , V2 , U* , V* be sampled according to Assumption 1 . Then there exist positive constants c1 , c2 such that with probability ≥ 1 − c1 exp(−c2 n^{1/3}) , the loss function ℓ(U, V) in Equation 1 is convex on the straight line (U1, V1) → (U2, V2) . The same holds along the line (U1, V1) → (U*, V*) . It also holds if any entry (i, j) is observed independently with constant probability p , but with probability ≥ 1 − c1 exp(−c2 r^{1/3}) .
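One way to reproduce the kind of numerical verification described above is to estimate how often the loss is convex on random segments , both between two random points and towards the planted solution , via discrete second differences ; a hedged sketch follows , with trial counts and tolerances chosen by us .

```python
# An illustrative estimate of how often the NMF loss is convex on random
# segments, in the spirit of Theorem 1. Convexity along a segment is probed
# through non-negative discrete second differences.
import numpy as np

rng = np.random.default_rng(2)
n, m, r = 100, 100, 10
hn = lambda s: np.abs(rng.normal(size=s))
U_s, V_s = hn((n, r)), hn((r, m))
X = U_s @ V_s
loss = lambda U, V: 0.5 * np.linalg.norm(U @ V - X, "fro") ** 2

def convex_on_segment(U1, V1, U2, V2, k=64):
    ts = np.linspace(0.0, 1.0, k)
    v = np.array([loss(t * U2 + (1 - t) * U1, t * V2 + (1 - t) * V1) for t in ts])
    sd = v[:-2] - 2 * v[1:-1] + v[2:]              # discrete curvature
    return bool(np.all(sd >= -1e-6 * np.abs(v).max()))

random_pairs = sum(convex_on_segment(hn((n, r)), hn((r, m)),
                                     hn((n, r)), hn((r, m))) for _ in range(20))
to_planted = sum(convex_on_segment(hn((n, r)), hn((r, m)), U_s, V_s)
                 for _ in range(20))
print(f"convex segments: {random_pairs}/20 random, {to_planted}/20 to planted")
```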
This paper studies loss landscape of Non-negative matrix factorization (NMF) when the matrix is very large. It shows that with high probability, the landscape is quasi-convex under some conditions. This suggests that the optimization problem would become easier as the size of the matrix becomes very large. Implications on deep networks are also discussed.
SP:9e712c6f60b19d9309721eea514589755b4ce648
SemanticAdv: Generating Adversarial Examples via Attribute-Conditional Image Editing
1 INTRODUCTION . Deep neural networks ( DNNs ) have demonstrated great successes in advancing the state-of-the-art performance of discriminative tasks ( Krizhevsky et al. , 2012 ; Goodfellow et al. , 2016 ; He et al. , 2016 ; Collobert & Weston , 2008 ; Deng et al. , 2013 ; Silver et al. , 2016 ) . However , recent research found that DNNs are vulnerable to adversarial examples , which are carefully crafted instances aiming to induce arbitrary prediction errors for learning systems . Such adversarial examples , containing a small magnitude of perturbation , have shed light on understanding and discovering potential vulnerabilities of DNNs ( Szegedy et al. , 2013 ; Goodfellow et al. , 2014b ; Moosavi-Dezfooli et al. , 2016 ; Papernot et al. , 2016 ; Carlini & Wagner , 2017 ; Xiao et al. , 2018b ; c ; a ; 2019 ) . Most existing work focused on constructing adversarial examples by adding Lp-bounded pixel-wise perturbations ( Goodfellow et al. , 2014b ) or spatially transforming the image ( Xiao et al. , 2018c ; Engstrom et al. , 2017 ) ( e.g. , in-plane rotation or out-of-plane rotation ) . Generating unrestricted perturbations with semantically meaningful patterns is an important yet under-explored field . At the same time , deep generative models have demonstrated impressive performance in learning disentangled semantic factors through data generation in an unsupervised ( Radford et al. , 2015 ; Karras et al. , 2018 ; Brock et al. , 2019 ) or weakly-supervised manner based on semantic attributes ( Yan et al. , 2016 ; Choi et al. , 2018 ) . Empirical findings in ( Yan et al. , 2016 ; Zhu et al. , 2016a ; Radford et al. , 2015 ) demonstrated that a simple linear interpolation on the learned image manifold can produce smooth visual transitions between a pair of input images . In this paper , we introduce a novel attack , SemanticAdv , which generates unrestricted perturbations with semantically meaningful patterns . Motivated by the findings mentioned above , we leverage an attribute-conditional image editing model ( Choi et al. , 2018 ) to synthesize adversarial examples by interpolating between source and target images in the feature-map space . Here , we focus on changing a single attribute dimension to achieve adversarial goals while keeping the generated adversarial image reasonable-looking ( e.g. , see Figure 1 ) . To validate the effectiveness of the proposed attack method , we consider two tasks , namely , face verification and landmark detection , as the face recognition field has been extensively explored and the commercially used face models are relatively robust since they require a low false positive rate . [ Figure 1 : Left : Overview of the proposed SemanticAdv . Right : Illustration of our SemanticAdv on a real-world face verification platform . Note that the confidence denotes the likelihood that two faces belong to the same person . ] We conduct both qualitative and quantitative evaluations on the CelebA dataset ( Liu et al. , 2015 ) . To demonstrate the applicability of SemanticAdv beyond the face domain , we further extend SemanticAdv to generate adversarial street-view images .
We treat semantic layouts as input attributes and use the image editing model ( Hong et al. , 2018 ) pre-trained on the Cityscapes dataset ( Cordts et al. , 2016 ) . Please find more visualization results on the anonymous website : https : //sites.google.com/view/generate-semantic-adv-example . The contributions of the proposed SemanticAdv are threefold . First , we propose a novel semantic-based attack method to generate unrestricted adversarial examples by feature-space interpolation . Second , the proposed method is able to generate semantically controllable perturbations due to the attribute-conditioned modeling . This allows us to analyze the robustness of a recognition system against different types of semantic attacks . Third , as a side benefit , the proposed attack exhibits high transferability and leads to a 65 % query-free black-box attack success rate on a real-world face verification platform , which outperforms pixel-wise perturbations in attacking existing defense methods . 2 RELATED WORK . Semantic image editing . Semantic image synthesis and manipulation is a popular research topic in machine learning , graphics and vision . Thanks to recent advances in deep generative models ( Kingma & Welling , 2014 ; Goodfellow et al. , 2014a ; Oord et al. , 2016 ) and the empirical analysis of deep classification networks ( Krizhevsky et al. , 2012 ; Simonyan & Zisserman , 2014 ; Szegedy et al. , 2015 ) , the past few years have witnessed tremendous breakthroughs towards high-fidelity pure image generation ( Radford et al. , 2015 ; Karras et al. , 2018 ; Brock et al. , 2019 ) , attribute-to-image generation ( Yan et al. , 2016 ; Choi et al. , 2018 ) , text-to-image generation ( Mansimov et al. , 2015 ; Reed et al. , 2016 ; Van den Oord et al. , 2016 ; Odena et al. , 2017 ; Zhang et al. , 2017 ; Johnson et al. , 2018 ) , and image-to-image translation ( Isola et al. , 2017 ; Zhu et al. , 2017 ; Liu et al. , 2017 ; Wang et al. , 2018b ; Hong et al. , 2018 ) . Adversarial examples . Generating Lp-bounded adversarial perturbations has been extensively studied recently ( Szegedy et al. , 2013 ; Goodfellow et al. , 2014b ; Moosavi-Dezfooli et al. , 2016 ; Papernot et al. , 2016 ; Carlini & Wagner , 2017 ; Xiao et al. , 2018b ) . To further explore diverse adversarial attacks and potentially help inspire defense mechanisms , it is important to generate the so-called "unrestricted" adversarial examples , which contain an unrestricted magnitude of perturbation while still preserving perceptual realism ( Brown et al. , 2018 ) . Recently , Xiao et al . ( 2018c ) and Engstrom et al . ( 2017 ) proposed to spatially transform the image patches instead of adding pixel-wise perturbation , but such spatial transformations do not consider semantic information . Our proposed SemanticAdv focuses on generating unrestricted perturbations with semantically meaningful patterns guided by visual attributes . Relevant to our work , Song et al . ( 2018 ) proposed to synthesize adversarial examples with an unconditional generative model . Bhattad et al . ( 2019 ) studied semantic transformation in only the color or texture space . Compared to these works , SemanticAdv is able to generate adversarial examples in a controllable fashion using specific visual attributes by performing manipulation in the feature space . We further analyze the robustness of the recognition system by generating adversarial examples guided by different visual attributes . Concurrent to our work , Joshi et al .
( 2019 ) proposed to generate semantic-based attacks against a restricted binary classifier , while our attack is able to mislead the model towards arbitrary adversarial targets . They conduct the manipulation within the attribute space , which is less flexible and effective than our proposed feature-space interpolation . 3 SEMANTIC ADVERSARIAL EXAMPLES . 3.1 PROBLEM DEFINITION . Let M be a machine learning model trained on a dataset D = { (x, y) } consisting of image-label pairs , where x ∈ R^{H×W×D_I} and y ∈ R^{D_L} denote the image and the ground-truth label , respectively . Here , H , W , D_I , and D_L denote the image height , image width , number of image channels , and label dimensions , respectively . For each image x , our model M makes a prediction ŷ = M(x) ∈ R^{D_L} . Given a target image-label pair (x_tgt , y_tgt) with y ≠ y_tgt , a traditional attacker aims to synthesize adversarial examples { x_adv } by adding pixel-wise perturbations to or spatially transforming the original image x such that M(x_adv) = y_tgt . In this work , we introduce the concept of a semantic attacker that aims at generating adversarial examples by adding semantically meaningful perturbation with a conditional generative model G. Compared to traditional attackers that usually produce pixel-wise perturbations , the proposed method is able to produce semantically meaningful perturbations . Semantic image editing . For simplicity , we start with the formulation where the input attribute is represented as a compact vector . This formulation can be directly extended to other input attribute formats , including semantic layouts . Let c ∈ R^{D_C} be an attribute representation reflecting the semantic factors ( e.g. , expression or hair color of a portrait image ) of image x , where D_C indicates the attribute dimension and c_i ∈ { 0 , 1 } indicates the appearance of the i-th attribute . Here , our goal is to use the conditional generator for semantic image editing . For example , given a portrait image of a girl with black hair and blonde hair as the new attribute , our generator is supposed to synthesize a new image that turns the girl ’ s hair from black to blonde . More specifically , we denote the new attribute as c_new ∈ R^{D_C} such that the synthesized image is given by x_new = G(x , c_new) . In the special case when there is no attribute change ( c = c_new ) , the generator simply reconstructs the input : x = G(x , c) . Supported by the findings mentioned in ( Bengio et al. , 2013 ; Reed et al. , 2014 ) , our synthesized image x_new should fall close to the data manifold if we constrain the change of attribute values to be sufficiently small ( e.g. , we only update one semantic attribute at a time ) . In addition , we can potentially generate many such images by linearly interpolating between the semantic embeddings of the conditional generator G using the original image x and the synthesized image x_new with the augmented attribute . Attribute-space interpolation . We start with a simple solution ( detailed in Eq . 1 ) assuming the adversarial example can be found by directly interpolating in the attribute space . Given a pair of attributes c and c_new , we introduce an interpolation parameter α ∈ (0 , 1) to generate the augmented attribute vector c* ∈ R^{D_C} ( see Eq . 1 ) . Given the augmented attribute c* and original image x , we produce the synthesized image by the generator G. For notational purposes , we also introduce a delegated function T_G as a re-parametrization of the generator G.
Our formulation is also supported by the empirical results on attribute-conditioned image progression ( Yan et al. , 2016 ; Radford et al. , 2015 ) showing that a well-trained generative model has the capability to synthesize a sequence of images with smooth attribute transitions . x_adv = argmin_α L(T_G(α ; x , c , c_new)) , where T_G(α ; x , c , c_new) = G(x , c*) and c* = α · c + (1 − α) · c_new ( 1 ) Feature-map interpolation . Alternatively , we propose to interpolate using the feature map produced by the generator G = G_dec ◦ G_enc . Here , G_enc is the encoder module that takes the image as input and outputs the feature map . Similarly , G_dec is the decoder module that takes the feature map as input and outputs the synthesized image . Let f* = G_enc(x , c) ∈ R^{H_F×W_F×C_F} be the feature map of an intermediate layer in the generator , where H_F , W_F and C_F indicate the height , width , and number of channels of the feature map . x_adv = argmin_α L(T_G(α ; x , c , c_new)) , where T_G(α ; x , c , c_new) = G_dec(f*) and f* = α ⊙ G_enc(x , c) + (1 − α) ⊙ G_enc(x , c_new) ( 2 ) Compared to attribute-space interpolation , which is parameterized by a scalar , we parameterize feature-map interpolation by a tensor α ∈ R^{H_F×W_F×C_F} ( α_{h,w,k} ∈ (0 , 1) , where 1 ≤ h ≤ H_F , 1 ≤ w ≤ W_F , and 1 ≤ k ≤ C_F ) with the same shape as the feature map . Compared to linear interpolation over the attribute space , this design introduces more flexibility when interpolating between the original image and the synthesized image . Empirical results in Section 4.2 show this design is critical to the adversarial attack success rate .
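A minimal sketch of the attribute-space attack in Eq . ( 1 ) is given below : a grid search over the scalar interpolation parameter α . The handles G ( the attribute-conditional generator ) and attack_loss ( the adversarial objective L evaluated through the target model ) are assumptions standing in for the surrounding training code ; they are not part of any specific library .

```python
# A hedged sketch of the attribute-space interpolation of Eq. (1): grid
# search over the scalar alpha. `G` and `attack_loss` are assumed handles,
# not real library APIs.
import torch

def attribute_space_attack(G, attack_loss, x, c, c_new, n_steps=20):
    best_alpha, best_loss, x_adv = None, float("inf"), None
    for alpha in torch.linspace(0.01, 0.99, n_steps):
        c_star = alpha * c + (1 - alpha) * c_new      # Eq. (1) interpolation
        with torch.no_grad():
            x_syn = G(x, c_star)                      # T_G(alpha; x, c, c_new)
            loss = attack_loss(x_syn).item()
        if loss < best_loss:                          # keep the best candidate
            best_alpha, best_loss, x_adv = float(alpha), loss, x_syn
    return x_adv, best_alpha
```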
The authors describe a method for adversarially modifying a given (test) example that 1) still retains the correct label on the example, but 2) causes a model to make an incorrect prediction on it. The novelty of their proposed method is that their adversarial modifications are along a provided semantic axis (e.g., changing the color of someone's skin in a face recognition task) instead of the standard $L_p$ perturbations that the existing literature has focused on (e.g., making a very small change to each individual pixel). The adversarial examples that the authors construct, experimentally, are impressive and striking. I'd especially like to acknowledge the work that the authors put in to construct an anonymous link where they showcase results from their experiments. Thank you!
SP:37c8908c43beda4efc9db25216225f0106fe009c
SemanticAdv: Generating Adversarial Examples via Attribute-Conditional Image Editing
1 INTRODUCTION . Deep neural networks ( DNNs ) have demonstrated great successes in advancing the state-of-the-art performance of discriminative tasks ( Krizhevsky et al. , 2012 ; Goodfellow et al. , 2016 ; He et al. , 2016 ; Collobert & Weston , 2008 ; Deng et al. , 2013 ; Silver et al. , 2016 ) . However , recent research found that DNNs are vulnerable to adversarial examples , which are carefully crafted instances aiming to induce arbitrary prediction errors for learning systems . Such adversarial examples , containing a small magnitude of perturbation , have shed light on understanding and discovering potential vulnerabilities of DNNs ( Szegedy et al. , 2013 ; Goodfellow et al. , 2014b ; Moosavi-Dezfooli et al. , 2016 ; Papernot et al. , 2016 ; Carlini & Wagner , 2017 ; Xiao et al. , 2018b ; c ; a ; 2019 ) . Most existing work focused on constructing adversarial examples by adding Lp-bounded pixel-wise perturbations ( Goodfellow et al. , 2014b ) or spatially transforming the image ( Xiao et al. , 2018c ; Engstrom et al. , 2017 ) ( e.g. , in-plane rotation or out-of-plane rotation ) . Generating unrestricted perturbations with semantically meaningful patterns is an important yet under-explored field . At the same time , deep generative models have demonstrated impressive performance in learning disentangled semantic factors through data generation in an unsupervised ( Radford et al. , 2015 ; Karras et al. , 2018 ; Brock et al. , 2019 ) or weakly-supervised manner based on semantic attributes ( Yan et al. , 2016 ; Choi et al. , 2018 ) . Empirical findings in ( Yan et al. , 2016 ; Zhu et al. , 2016a ; Radford et al. , 2015 ) demonstrated that a simple linear interpolation on the learned image manifold can produce smooth visual transitions between a pair of input images . In this paper , we introduce a novel attack , SemanticAdv , which generates unrestricted perturbations with semantically meaningful patterns . Motivated by the findings mentioned above , we leverage an attribute-conditional image editing model ( Choi et al. , 2018 ) to synthesize adversarial examples by interpolating between source and target images in the feature-map space . Here , we focus on changing a single attribute dimension to achieve adversarial goals while keeping the generated adversarial image reasonable-looking ( e.g. , see Figure 1 ) . To validate the effectiveness of the proposed attack method , we consider two tasks , namely , face verification and landmark detection , as the face recognition field has been extensively explored and the commercially used face models are relatively robust since they require a low false positive rate . [ Figure 1 : Left : Overview of the proposed SemanticAdv . Right : Illustration of our SemanticAdv on a real-world face verification platform . Note that the confidence denotes the likelihood that two faces belong to the same person . ] We conduct both qualitative and quantitative evaluations on the CelebA dataset ( Liu et al. , 2015 ) . To demonstrate the applicability of SemanticAdv beyond the face domain , we further extend SemanticAdv to generate adversarial street-view images .
We treat semantic layouts as input attributes and use the image editing model ( Hong et al. , 2018 ) pre-trained on the Cityscapes dataset ( Cordts et al. , 2016 ) . Please find more visualization results on the anonymous website : https : //sites.google.com/view/generate-semantic-adv-example . The contributions of the proposed SemanticAdv are threefold . First , we propose a novel semantic-based attack method to generate unrestricted adversarial examples by feature-space interpolation . Second , the proposed method is able to generate semantically controllable perturbations due to the attribute-conditioned modeling . This allows us to analyze the robustness of a recognition system against different types of semantic attacks . Third , as a side benefit , the proposed attack exhibits high transferability and leads to a 65 % query-free black-box attack success rate on a real-world face verification platform , which outperforms pixel-wise perturbations in attacking existing defense methods . 2 RELATED WORK . Semantic image editing . Semantic image synthesis and manipulation is a popular research topic in machine learning , graphics and vision . Thanks to recent advances in deep generative models ( Kingma & Welling , 2014 ; Goodfellow et al. , 2014a ; Oord et al. , 2016 ) and the empirical analysis of deep classification networks ( Krizhevsky et al. , 2012 ; Simonyan & Zisserman , 2014 ; Szegedy et al. , 2015 ) , the past few years have witnessed tremendous breakthroughs towards high-fidelity pure image generation ( Radford et al. , 2015 ; Karras et al. , 2018 ; Brock et al. , 2019 ) , attribute-to-image generation ( Yan et al. , 2016 ; Choi et al. , 2018 ) , text-to-image generation ( Mansimov et al. , 2015 ; Reed et al. , 2016 ; Van den Oord et al. , 2016 ; Odena et al. , 2017 ; Zhang et al. , 2017 ; Johnson et al. , 2018 ) , and image-to-image translation ( Isola et al. , 2017 ; Zhu et al. , 2017 ; Liu et al. , 2017 ; Wang et al. , 2018b ; Hong et al. , 2018 ) . Adversarial examples . Generating Lp-bounded adversarial perturbations has been extensively studied recently ( Szegedy et al. , 2013 ; Goodfellow et al. , 2014b ; Moosavi-Dezfooli et al. , 2016 ; Papernot et al. , 2016 ; Carlini & Wagner , 2017 ; Xiao et al. , 2018b ) . To further explore diverse adversarial attacks and potentially help inspire defense mechanisms , it is important to generate the so-called "unrestricted" adversarial examples , which contain an unrestricted magnitude of perturbation while still preserving perceptual realism ( Brown et al. , 2018 ) . Recently , Xiao et al . ( 2018c ) and Engstrom et al . ( 2017 ) proposed to spatially transform the image patches instead of adding pixel-wise perturbation , but such spatial transformations do not consider semantic information . Our proposed SemanticAdv focuses on generating unrestricted perturbations with semantically meaningful patterns guided by visual attributes . Relevant to our work , Song et al . ( 2018 ) proposed to synthesize adversarial examples with an unconditional generative model . Bhattad et al . ( 2019 ) studied semantic transformation in only the color or texture space . Compared to these works , SemanticAdv is able to generate adversarial examples in a controllable fashion using specific visual attributes by performing manipulation in the feature space . We further analyze the robustness of the recognition system by generating adversarial examples guided by different visual attributes . Concurrent to our work , Joshi et al .
( 2019 ) proposed to generate semantic-based attacks against a restricted binary classifier , while our attack is able to mislead the model towards arbitrary adversarial targets . They conduct the manipulation within the attribute space , which is less flexible and effective than our proposed feature-space interpolation . 3 SEMANTIC ADVERSARIAL EXAMPLES . 3.1 PROBLEM DEFINITION . Let M be a machine learning model trained on a dataset D = { (x, y) } consisting of image-label pairs , where x ∈ R^{H×W×D_I} and y ∈ R^{D_L} denote the image and the ground-truth label , respectively . Here , H , W , D_I , and D_L denote the image height , image width , number of image channels , and label dimensions , respectively . For each image x , our model M makes a prediction ŷ = M(x) ∈ R^{D_L} . Given a target image-label pair (x_tgt , y_tgt) with y ≠ y_tgt , a traditional attacker aims to synthesize adversarial examples { x_adv } by adding pixel-wise perturbations to or spatially transforming the original image x such that M(x_adv) = y_tgt . In this work , we introduce the concept of a semantic attacker that aims at generating adversarial examples by adding semantically meaningful perturbation with a conditional generative model G. Compared to traditional attackers that usually produce pixel-wise perturbations , the proposed method is able to produce semantically meaningful perturbations . Semantic image editing . For simplicity , we start with the formulation where the input attribute is represented as a compact vector . This formulation can be directly extended to other input attribute formats , including semantic layouts . Let c ∈ R^{D_C} be an attribute representation reflecting the semantic factors ( e.g. , expression or hair color of a portrait image ) of image x , where D_C indicates the attribute dimension and c_i ∈ { 0 , 1 } indicates the appearance of the i-th attribute . Here , our goal is to use the conditional generator for semantic image editing . For example , given a portrait image of a girl with black hair and blonde hair as the new attribute , our generator is supposed to synthesize a new image that turns the girl ’ s hair from black to blonde . More specifically , we denote the new attribute as c_new ∈ R^{D_C} such that the synthesized image is given by x_new = G(x , c_new) . In the special case when there is no attribute change ( c = c_new ) , the generator simply reconstructs the input : x = G(x , c) . Supported by the findings mentioned in ( Bengio et al. , 2013 ; Reed et al. , 2014 ) , our synthesized image x_new should fall close to the data manifold if we constrain the change of attribute values to be sufficiently small ( e.g. , we only update one semantic attribute at a time ) . In addition , we can potentially generate many such images by linearly interpolating between the semantic embeddings of the conditional generator G using the original image x and the synthesized image x_new with the augmented attribute . Attribute-space interpolation . We start with a simple solution ( detailed in Eq . 1 ) assuming the adversarial example can be found by directly interpolating in the attribute space . Given a pair of attributes c and c_new , we introduce an interpolation parameter α ∈ (0 , 1) to generate the augmented attribute vector c* ∈ R^{D_C} ( see Eq . 1 ) . Given the augmented attribute c* and original image x , we produce the synthesized image by the generator G. For notational purposes , we also introduce a delegated function T_G as a re-parametrization of the generator G.
Our formulation is also supported by the empirical results on attribute-conditioned image progression ( Yan et al. , 2016 ; Radford et al. , 2015 ) showing that a well-trained generative model has the capability to synthesize a sequence of images with smooth attribute transitions . x_adv = argmin_α L(T_G(α ; x , c , c_new)) , where T_G(α ; x , c , c_new) = G(x , c*) and c* = α · c + (1 − α) · c_new ( 1 ) Feature-map interpolation . Alternatively , we propose to interpolate using the feature map produced by the generator G = G_dec ◦ G_enc . Here , G_enc is the encoder module that takes the image as input and outputs the feature map . Similarly , G_dec is the decoder module that takes the feature map as input and outputs the synthesized image . Let f* = G_enc(x , c) ∈ R^{H_F×W_F×C_F} be the feature map of an intermediate layer in the generator , where H_F , W_F and C_F indicate the height , width , and number of channels of the feature map . x_adv = argmin_α L(T_G(α ; x , c , c_new)) , where T_G(α ; x , c , c_new) = G_dec(f*) and f* = α ⊙ G_enc(x , c) + (1 − α) ⊙ G_enc(x , c_new) ( 2 ) Compared to attribute-space interpolation , which is parameterized by a scalar , we parameterize feature-map interpolation by a tensor α ∈ R^{H_F×W_F×C_F} ( α_{h,w,k} ∈ (0 , 1) , where 1 ≤ h ≤ H_F , 1 ≤ w ≤ W_F , and 1 ≤ k ≤ C_F ) with the same shape as the feature map . Compared to linear interpolation over the attribute space , this design introduces more flexibility when interpolating between the original image and the synthesized image . Empirical results in Section 4.2 show this design is critical to the adversarial attack success rate .
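The feature-map interpolation of Eq . ( 2 ) can be sketched as follows : the tensor α , with the same shape as the feature map , is optimized by gradient descent , with a sigmoid re-parametrization keeping each entry in (0 , 1) . G_enc , G_dec and attack_loss are assumed handles to the pretrained generator halves and the adversarial objective ; the optimizer and step count are illustrative .

```python
# A hedged sketch of the feature-map interpolation of Eq. (2); `G_enc`,
# `G_dec` and `attack_loss` are assumed handles, not real library APIs.
import torch

def feature_map_attack(G_enc, G_dec, attack_loss, x, c, c_new,
                       steps=200, lr=0.05):
    with torch.no_grad():
        f_src = G_enc(x, c)        # feature map under the original attribute
        f_tgt = G_enc(x, c_new)    # feature map under the augmented attribute
    theta = torch.zeros_like(f_src, requires_grad=True)  # alpha = sigmoid(theta)
    opt = torch.optim.Adam([theta], lr=lr)
    for _ in range(steps):
        alpha = torch.sigmoid(theta)                      # keeps alpha in (0, 1)
        f_star = alpha * f_src + (1 - alpha) * f_tgt      # Eq. (2) interpolation
        loss = attack_loss(G_dec(f_star))
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        alpha = torch.sigmoid(theta)
        return G_dec(alpha * f_src + (1 - alpha) * f_tgt)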
This paper proposes to generate "unrestricted adversarial examples" via attribute-conditional image editing. Their method, SemanticAdv, leverages disentangled semantic factors and interpolates feature-map with higher freedom than attribute-space. Their adversarial optimization objectives combine both attack effectiveness and interpolation smoothness. They conduct extensive experiments for several tasks compared with CW-attack, showing broad applicability of the proposed method.
SP:37c8908c43beda4efc9db25216225f0106fe009c
Annealed Denoising score matching: learning Energy based model in high-dimensional spaces
1 INTRODUCTION AND MOTIVATION . Treating data as stochastic samples from a probability distribution and developing models that can learn such distributions is at the core of solving a large variety of application problems , such as error correction/denoising ( Vincent et al. , 2010 ) , outlier/novelty detection ( Zhai et al. , 2016 ; Choi and Jang , 2018 ) , sample generation ( Nijkamp et al. , 2019 ; Du and Mordatch , 2019 ) , invariant pattern recognition , Bayesian reasoning ( Welling and Teh , 2011 ) , which relies on good data priors , and many others . Energy-Based Models ( EBMs ) ( LeCun et al. , 2006 ; Ngiam et al. , 2011 ) assign an energy E(x) to each data point x , which implicitly defines a probability by the Boltzmann distribution p_m(x) = e^{−E(x)}/Z . Sampling from this distribution can be used as a generative process that yields plausible samples of x . Compared to other generative models , like GANs ( Goodfellow et al. , 2014 ) , flow-based models ( Dinh et al. , 2015 ; Kingma and Dhariwal , 2018 ) , or auto-regressive models ( van den Oord et al. , 2016 ; Ostrovski et al. , 2018 ) , energy-based models have significant advantages . First , they provide explicit ( unnormalized ) density information , compositionality ( Hinton , 1999 ; Haarnoja et al. , 2017 ) , better mode coverage ( Kumar et al. , 2019 ) and flexibility ( Du and Mordatch , 2019 ) . Further , they do not require a special model architecture , unlike auto-regressive and flow-based models . Recently , energy-based models have been successfully trained with maximum likelihood ( Nijkamp et al. , 2019 ; Du and Mordatch , 2019 ) , but training can be very computationally demanding due to the need to sample the model distribution . Variants with a truncated sampling procedure have been proposed , such as contrastive divergence ( Hinton , 2002 ) . Such models learn much faster , with the drawback of not exploring the state space thoroughly ( Tieleman , 2008 ) . 1.1 SCORE MATCHING , DENOISING SCORE MATCHING AND DEEP ENERGY ESTIMATORS . Score matching ( SM ) ( Hyvärinen , 2005 ) circumvents the requirement of sampling the model distribution . In score matching , the score function is defined to be the gradient of the log-density , or the negative energy function . The expected L2 norm of the difference between the model score function and the data score function is minimized . One convenient way of using score matching is learning the energy function corresponding to a Gaussian kernel Parzen density estimator ( Parzen , 1962 ) of the data : p_σ0(x̃) = ∫ q_σ0(x̃|x) p(x) dx . Though hard to evaluate , the data score is well defined : s_d(x̃) = ∇_x̃ log p_σ0(x̃) , and the corresponding objective is : L_SM(θ) = E_{p_σ0(x̃)} ‖ ∇_x̃ log p_σ0(x̃) + ∇_x̃ E(x̃ ; θ) ‖² ( 1 ) Vincent ( 2011 ) studied the connection between denoising auto-encoders and score matching , and proved the remarkable result that the following objective , named Denoising Score Matching ( DSM ) , is equivalent to the objective above : L_DSM(θ) = E_{p_σ0(x̃ , x)} ‖ ∇_x̃ log q_σ0(x̃|x) + ∇_x̃ E(x̃ ; θ) ‖² ( 2 ) Note that in ( 2 ) the Parzen density score is replaced by the derivative of the log density of the single noise kernel , ∇_x̃ log q_σ0(x̃|x) , which is much easier to evaluate .
In the particular case of Gaussian noise , log q_σ0(x̃|x) = − ‖x̃ − x‖² / (2σ0²) + C , and therefore : L_DSM(θ) = E_{p_σ0(x̃ , x)} ‖ x − x̃ + σ0² ∇_x̃ E(x̃ ; θ) ‖² ( 3 ) The interpretation of objective ( 3 ) is simple : it forces the energy gradient to align with the vector pointing from the noisy sample to the clean data sample . To optimize an objective involving the derivative of a function defined by a neural network , Kingma and LeCun ( 2010 ) proposed the use of double backpropagation ( Drucker and Le Cun , 1991 ) . Deep energy estimator networks ( Saremi et al. , 2018 ) first applied this technique to learn an energy function defined by a deep neural network . In this work , and similarly in Saremi and Hyvarinen ( 2019 ) , an energy-based model was trained to match a Parzen density estimator of data with a certain noise magnitude . The previous models were able to perform the denoising task , but they were unable to generate high-quality data samples from a random input initialization . Recently , Song and Ermon ( 2019 ) trained an excellent generative model by fitting a series of score estimators coupled together in a single neural network , each matching the score of a Parzen estimator with a different noise magnitude . The questions we address here are why learning energy-based models with a single noise level does not permit high-quality sample generation , and what can be done to improve energy-based models . Our work builds on key ideas from Saremi et al . ( 2018 ) ; Saremi and Hyvarinen ( 2019 ) ; Song and Ermon ( 2019 ) . Section 2 provides a geometric view of the learning problem in denoising score matching and a theoretical explanation of why training with one noise level is insufficient if the data dimension is high . Section 3 presents a novel method for training energy-based models , Multiscale Denoising Score Matching ( MDSM ) . Section 4 describes empirical results of the MDSM model and comparisons with other models . 2 A GEOMETRIC VIEW OF DENOISING SCORE MATCHING . Song and Ermon ( 2019 ) used denoising score matching with a range of noise levels , achieving great empirical results . The authors explained that large noise perturbations are required to enable the learning of the score in low-data-density regions . But it is still unclear why a series of different noise levels is necessary , rather than one single large noise level . Following Saremi and Hyvarinen ( 2019 ) , we analyze the learning process in denoising score matching based on measure concentration properties of high-dimensional random vectors . We adopt the common assumption that the data distribution to be learned is high-dimensional , but only has support around a relatively low-dimensional manifold ( Tenenbaum et al. , 2000 ; Roweis and Saul , 2000 ; Lawrence , 2005 ) . If the assumption holds , it causes a problem for score matching : the density , or the gradient of the density , is then undefined outside the manifold , making it difficult to train a valid density model for the data distribution defined on the entire space . Saremi and Hyvarinen ( 2019 ) and Song and Ermon ( 2019 ) discussed this problem and proposed to smooth the data distribution with a Gaussian kernel to alleviate the issue .
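A minimal PyTorch sketch of one training step for objective ( 3 ) is given below ; the energy gradient is obtained with autograd , and create_graph=True enables the double backpropagation mentioned above . The network handle energy_net and the default noise level are our assumptions .

```python
# A hedged sketch of one denoising score matching step for Eq. (3);
# `energy_net` is any network mapping batched images to scalar energies.
import torch

def dsm_step(energy_net, optimizer, x, sigma0=0.1):
    x_tilde = (x + sigma0 * torch.randn_like(x)).requires_grad_(True)
    energy = energy_net(x_tilde).sum()
    # create_graph=True keeps the graph so the loss can be backpropagated
    # through the gradient itself (double backpropagation).
    grad_E = torch.autograd.grad(energy, x_tilde, create_graph=True)[0]
    # Eq. (3): align sigma0^2 * dE/dx_tilde with the vector x - x_tilde
    # pointing from the noisy sample back to the clean sample.
    target = (x - x_tilde).detach()
    loss = ((target + sigma0 ** 2 * grad_E) ** 2).flatten(1).sum(1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```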
To further understand the learning in denoising score matching when the data lie on a manifold X and the data dimension is high , two elementary properties of random Gaussian vectors in high-dimensional spaces are helpful . First , the length distribution of random vectors becomes concentrated at √d σ ( Vershynin , 2018 ) , where σ² is the variance of a single dimension . Second , a random vector is always close to orthogonal to a fixed vector ( Tao , 2012 ) . With these premises one can visualize the configuration of noisy and noiseless data points that enter the learning process : a data point x sampled from X and its noisy version x̃ always lie on a line which is almost perpendicular to the tangent space T_x X and intersects X at x . Further , the distance vectors between (x , x̃) pairs all have similar length √d σ . As a consequence , the set of noisy data points concentrates on a set X̃_{√dσ, ε} whose distance from the data manifold X lies in ( √d σ − ε , √d σ + ε ) , where ε ≪ √d σ . Therefore , performing denoising score matching learning with (x , x̃) pairs generated with a fixed noise level σ , which is the approach taken previously except in Song and Ermon ( 2019 ) , will match the score in the set X̃_{√dσ, ε} and enable denoising of noisy points in the same set . However , the learning provides little information about the density outside this set , farther from or closer to the data manifold , as noisy samples outside X̃_{√dσ, ε} rarely appear in the training process . An illustration is presented in Figure 1A . Let X̃^C_{√dσ, ε} denote the complement of the set X̃_{√dσ, ε} . Even if p_σ0( x̃ ∈ X̃^C_{√dσ, ε} ) is very small in a high-dimensional space , the score in X̃^C_{√dσ, ε} still plays a critical role in sampling from random initialization . This analysis may explain why models based on denoising score matching , trained with a single noise level , encounter difficulties in generating data samples when initialized at random . For empirical support of this explanation , see our experiments with models trained with single noise magnitudes ( Appendix B ) . To remedy this problem , one has to apply a learning procedure of the sort proposed in Song and Ermon ( 2019 ) , in which samples with different noise levels are used . Depending on the dimension of the data , the different noise levels have to be spaced narrowly enough to avoid empty regions in the data space . In the following , we will use Gaussian noise and employ a Gaussian scale mixture to produce the noisy data samples for the training ( for details , see Section 3.1 and Appendix A ) . Another interesting property of denoising score matching was suggested in the denoising auto-encoder literature ( Vincent et al. , 2010 ; Karklin and Simoncelli , 2011 ) : with increasing noise level , the learned features tend to have larger spatial scale . In our experiments we observe a similar phenomenon when training the model with denoising score matching with a single noise scale . If one compares samples in Figure B.1 , Appendix B , it is evident that a noise level of 0.3 produced a model that learned short-range correlations that span only a few pixels , a noise level of 0.6 learned longer stroke structure without coherent overall structure , and a noise level of 1 learned more coherent long-range structure without details such as stroke width variations . This suggests that training with a single noise level in denoising score matching is not sufficient for learning a model capable of high-quality sample synthesis , as such a model has to capture data structure at all scales .
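The two concentration facts invoked above are easy to observe numerically ; the following numpy sketch , with d chosen to match a 32×32×3 image , is purely illustrative .

```python
# A quick illustration of the two concentration facts: the norm of a
# d-dimensional Gaussian perturbation concentrates at sqrt(d)*sigma, and the
# perturbation is nearly orthogonal to any fixed direction.
import numpy as np

rng = np.random.default_rng(0)
d, sigma, n_samples = 3072, 0.3, 2000        # d as in a 32x32x3 image
noise = sigma * rng.normal(size=(n_samples, d))

norms = np.linalg.norm(noise, axis=1)
print("sqrt(d)*sigma =", np.sqrt(d) * sigma)
print("norm mean +/- std:", norms.mean(), norms.std())  # tight around sqrt(d)*sigma

v = rng.normal(size=d)
v /= np.linalg.norm(v)                        # a fixed unit vector
cosines = noise @ v / norms
print("|cos(angle)| mean:", np.abs(cosines).mean())     # close to 0 in high d
```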
3 LEARNING ENERGY-BASED MODEL WITH MULTISCALE DENOISING SCORE MATCHING . 3.1 MULTISCALE DENOISING SCORE MATCHING . Motivated by the analysis in Section 2 , we strive to develop an EBM based on denoising score matching that can be trained with noisy samples in which the noise level is not fixed but drawn from a distribution . The model should approximate the Parzen density estimator of the data , p_σ0(x̃) = ∫ q_σ0(x̃|x) p(x) dx . Specifically , the learning should minimize the difference between the derivative of the energy and the score of p_σ0 under the expectation E_{p_M(x̃)} rather than E_{p_σ0(x̃)} , the expectation taken in standard denoising score matching . Here p_M(x̃) = ∫ q_M(x̃|x) p(x) dx is chosen to cover the signal space more evenly , to avoid the measure concentration issue described above . The resulting Multiscale Score Matching ( MSM ) objective is : L_MSM(θ) = E_{p_M(x̃)} ‖ ∇_x̃ log p_σ0(x̃) + ∇_x̃ E(x̃ ; θ) ‖² ( 4 ) Compared to the objective of denoising score matching ( 1 ) , the only change in the new objective ( 4 ) is the expectation . Both objectives are consistent if p_M(x̃) and p_σ0(x̃) have the same support , as shown formally in Proposition 1 of Appendix A . In Proposition 2 , we prove that Equation 4 is equivalent to the following denoising score matching objective : L_MDSM* = E_{p_M(x̃) q_σ0(x|x̃)} ‖ ∇_x̃ log q_σ0(x̃|x) + ∇_x̃ E(x̃ ; θ) ‖² ( 5 ) The above results hold for any noise kernel q_σ0(x̃|x) , but Equation 5 contains the reversed expectation , which is difficult to evaluate in general . To proceed , we choose q_σ0(x̃|x) to be Gaussian , and also choose q_M(x̃|x) to be a Gaussian scale mixture : q_M(x̃|x) = ∫ q_σ(x̃|x) p(σ) dσ with q_σ(x̃|x) = N(x , σ² I_d) . After algebraic manipulation and one approximation ( see the derivation following Proposition 2 in Appendix A ) , we can transform Equation 5 into a more convenient form , which we call Multiscale Denoising Score Matching ( MDSM ) : L_MDSM = E_{p(σ) q_σ(x̃|x) p(x)} ‖ ∇_x̃ log q_σ0(x̃|x) + ∇_x̃ E(x̃ ; θ) ‖² ( 6 ) The squared loss term evaluated at noisy points x̃ at larger distances from the true data points x will have larger magnitude . Therefore , in practice it is convenient to add a monotonically decreasing weighting term l(σ) to balance the different noise scales , e.g. , l(σ) = 1/σ² . Ideally , we want our model to learn the correct gradient everywhere , so we would need to add noise at all levels . However , learning denoising score matching at very large or very small noise levels is useless . At very large noise levels the information of the original sample is completely lost . Conversely , in the limit of small noise , the noisy sample is virtually indistinguishable from real data . In neither case can one learn a gradient which is informative about the data structure . Thus , the noise range needs only to be broad enough to encourage learning of data features over all scales . In particular , we do not sample σ but instead choose a series of fixed values σ1 , ... , σK .
Further , substituting log ( qσ0 ( x̃|x ) ) = − ( x̃−x ) 2 2σ20 + C into Equation 4 , we arrive at the final objective : L ( θ ) = ∑ σ∈ { σ1···σK } Eqσ ( x̃|x ) p ( x ) l ( σ ) ‖ x − x̃ + σ 2 0∇x̃E ( x̃ ; θ ) ‖2 ( 7 ) It may seem that σ0 is an important hyperparameter to our model , but after our approximation σ0 become just a scaling factor in front of the energy function , and can be simply set to one as long as the temperature range during sampling is scaled accordingly ( See Section 3.2 ) . Therefore the only hyper-parameter is the rang of noise levels used during training . On the surface , objective ( 7 ) looks similar to the one in Song and Ermon ( 2019 ) . The important difference is that Equation 7 approximates a single distribution , namely pσ0 ( x̃ ) , the data smoothed with one fixed kernel qσ0 ( x̃|x ) . In contrast , Song and Ermon ( 2019 ) approximate the score of multiple distributions , the family of distributions { pσi ( x̃ ) : i = 1 , ... , n } , resulting from the data smoothed by kernels of different widths σi . Because our model learns only a single target distribution , it does not require noise magnitude as input .
The paper proposes to learn an energy-based generative model using an 'annealed' denoising score matching objective. The main contribution of the paper is to show that denoising score matching can be trained on a range of noise scales concurrently using a small modification to the loss. Compared to approximate likelihood learning of energy-based models, the key benefit is to sidestep the need for sampling from the model distribution, which has proven to be very challenging in practice. Using a slightly modified Langevin sampler, the paper further demonstrates encouraging sample quality on CIFAR-10 as measured by FID and IS scores.
Annealed Denoising Score Matching: Learning Energy-Based Models in High-Dimensional Spaces
1 INTRODUCTION AND MOTIVATION

Treating data as stochastic samples from a probability distribution and developing models that can learn such distributions is at the core of solving a large variety of application problems, such as error correction/denoising (Vincent et al., 2010), outlier/novelty detection (Zhai et al., 2016; Choi and Jang, 2018), sample generation (Nijkamp et al., 2019; Du and Mordatch, 2019), invariant pattern recognition, Bayesian reasoning (Welling and Teh, 2011), which relies on good data priors, and many others. Energy-Based Models (EBMs) (LeCun et al., 2006; Ngiam et al., 2011) assign an energy $E(x)$ to each data point $x$, which implicitly defines a probability by the Boltzmann distribution $p_m(x) = e^{-E(x)}/Z$. Sampling from this distribution can be used as a generative process that yields plausible samples of $x$. Compared to other generative models, like GANs (Goodfellow et al., 2014), flow-based models (Dinh et al., 2015; Kingma and Dhariwal, 2018), or auto-regressive models (van den Oord et al., 2016; Ostrovski et al., 2018), energy-based models have significant advantages. First, they provide explicit (unnormalized) density information, compositionality (Hinton, 1999; Haarnoja et al., 2017), better mode coverage (Kumar et al., 2019) and flexibility (Du and Mordatch, 2019). Further, they do not require a special model architecture, unlike auto-regressive and flow-based models. Recently, energy-based models have been successfully trained with maximum likelihood (Nijkamp et al., 2019; Du and Mordatch, 2019), but training can be very computationally demanding due to the need to sample from the model distribution. Variants with a truncated sampling procedure have been proposed, such as contrastive divergence (Hinton, 2002). Such models learn much faster, with the drawback of not exploring the state space thoroughly (Tieleman, 2008).

1.1 SCORE MATCHING, DENOISING SCORE MATCHING AND DEEP ENERGY ESTIMATORS

Score matching (SM) (Hyvärinen, 2005) circumvents the requirement of sampling the model distribution. In score matching, the score function is defined to be the gradient of the log-density, or the negative energy function. The expected L2 norm of the difference between the model score function and the data score function is minimized. One convenient way of using score matching is learning the energy function corresponding to a Gaussian kernel Parzen density estimator (Parzen, 1962) of the data: $p_{\sigma_0}(\tilde{x}) = \int q_{\sigma_0}(\tilde{x}|x) p(x) dx$. Though hard to evaluate, the data score is well defined: $s_d(\tilde{x}) = \nabla_{\tilde{x}} \log p_{\sigma_0}(\tilde{x})$, and the corresponding objective is:

$L_{SM}(\theta) = \mathbb{E}_{p_{\sigma_0}(\tilde{x})} \left\| \nabla_{\tilde{x}} \log p_{\sigma_0}(\tilde{x}) + \nabla_{\tilde{x}} E(\tilde{x};\theta) \right\|^2 \qquad (1)$

Vincent (2011) studied the connection between denoising auto-encoders and score matching, and proved the remarkable result that the following objective, named Denoising Score Matching (DSM), is equivalent to the objective above:

$L_{DSM}(\theta) = \mathbb{E}_{p_{\sigma_0}(\tilde{x},x)} \left\| \nabla_{\tilde{x}} \log q_{\sigma_0}(\tilde{x}|x) + \nabla_{\tilde{x}} E(\tilde{x};\theta) \right\|^2 \qquad (2)$

Note that in (2) the Parzen density score is replaced by the derivative of the log-density of the single noise kernel, $\nabla_{\tilde{x}} \log q_{\sigma_0}(\tilde{x}|x)$, which is much easier to evaluate.
In the particular case of Gaussian noise, $\log q_{\sigma_0}(\tilde{x}|x) = -\frac{\|\tilde{x}-x\|^2}{2\sigma_0^2} + C$, and therefore:

$L_{DSM}(\theta) = \mathbb{E}_{p_{\sigma_0}(\tilde{x},x)} \left\| x - \tilde{x} + \sigma_0^2 \nabla_{\tilde{x}} E(\tilde{x};\theta) \right\|^2 \qquad (3)$

The interpretation of objective (3) is simple: it forces the energy gradient to align with the vector pointing from the noisy sample to the clean data sample. To optimize an objective involving the derivative of a function defined by a neural network, Kingma and LeCun (2010) proposed the use of double backpropagation (Drucker and Le Cun, 1991). Deep energy estimator networks (Saremi et al., 2018) first applied this technique to learn an energy function defined by a deep neural network. In that work, and similarly in Saremi and Hyvarinen (2019), an energy-based model was trained to match a Parzen density estimator of data with a certain noise magnitude. These previous models were able to perform denoising, but they were unable to generate high-quality data samples from a random input initialization. Recently, Song and Ermon (2019) trained an excellent generative model by fitting a series of score estimators coupled together in a single neural network, each matching the score of a Parzen estimator with a different noise magnitude. The questions we address here are why learning energy-based models with a single noise level does not permit high-quality sample generation and what can be done to improve energy-based models. Our work builds on key ideas from Saremi et al. (2018); Saremi and Hyvarinen (2019); Song and Ermon (2019). Section 2 provides a geometric view of the learning problem in denoising score matching and a theoretical explanation of why training with one noise level is insufficient if the data dimension is high. Section 3 presents a novel method for training energy-based models, Multiscale Denoising Score Matching (MDSM). Section 4 describes empirical results of the MDSM model and comparisons with other models.

2 A GEOMETRIC VIEW OF DENOISING SCORE MATCHING

Song and Ermon (2019) used denoising score matching with a range of noise levels, achieving great empirical results. The authors explained that large noise perturbations are required to enable the learning of the score in low-data-density regions. But it is still unclear why a series of different noise levels is necessary, rather than one single large noise level. Following Saremi and Hyvarinen (2019), we analyze the learning process in denoising score matching based on measure concentration properties of high-dimensional random vectors. We adopt the common assumption that the data distribution to be learned is high-dimensional, but only has support around a relatively low-dimensional manifold (Tenenbaum et al., 2000; Roweis and Saul, 2000; Lawrence, 2005). If this assumption holds, it causes a problem for score matching: the density, and hence the gradient of the density, is then undefined outside the manifold, making it difficult to train a valid density model for the data distribution defined on the entire space. Saremi and Hyvarinen (2019) and Song and Ermon (2019) discussed this problem and proposed to smooth the data distribution with a Gaussian kernel to alleviate the issue.
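To make objective (3) concrete, the following is a minimal sketch of the Gaussian-noise DSM loss in Python (using JAX for the input gradient). The toy quadratic energy and all names are illustrative assumptions, not the paper's implementation; note that differentiating this loss with respect to the parameters implicitly performs the double backpropagation mentioned above.

```python
import jax
import jax.numpy as jnp

def energy(params, x):
    # Toy quadratic energy E(x; theta); a real model would be a deep network.
    return 0.5 * jnp.sum((x - params) ** 2)

# Gradient of the energy with respect to the input x (the negative score).
energy_grad = jax.grad(energy, argnums=1)

def dsm_loss(params, x, key, sigma0=0.1):
    """Denoising score matching loss, Eq. (3), for a batch of clean samples x."""
    x_tilde = x + sigma0 * jax.random.normal(key, x.shape)
    grads = jax.vmap(lambda xt: energy_grad(params, xt))(x_tilde)
    # Force sigma0^2 * grad E(x_tilde) to point from x_tilde back to x.
    residual = x - x_tilde + sigma0 ** 2 * grads
    return jnp.mean(jnp.sum(residual ** 2, axis=-1))
```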
To further understand the learning in denoising score matching when the data lie on a manifold $X$ and the data dimension is high, two elementary properties of random Gaussian vectors in high-dimensional spaces are helpful: First, the length distribution of random vectors becomes concentrated at $\sqrt{d}\sigma$ (Vershynin, 2018), where $\sigma^2$ is the variance of a single dimension. Second, a random vector is almost always close to orthogonal to a fixed vector (Tao, 2012). With these premises one can visualize the configuration of noisy and noiseless data points that enter the learning process: A data point $x$ sampled from $X$ and its noisy version $\tilde{x}$ always lie on a line which is almost perpendicular to the tangent space $T_x X$ and intersects $X$ at $x$. Further, the distance vectors between $(x, \tilde{x})$ pairs all have similar length $\sqrt{d}\sigma$. As a consequence, the set of noisy data points concentrates on a set $\tilde{X}_{\sqrt{d}\sigma,\epsilon}$ whose distance from the data manifold $X$ lies in $(\sqrt{d}\sigma - \epsilon, \sqrt{d}\sigma + \epsilon)$, where $\epsilon \ll \sqrt{d}\sigma$. Therefore, performing denoising score matching learning with $(x, \tilde{x})$ pairs generated with a fixed noise level $\sigma$, which is the approach taken previously except in Song and Ermon (2019), will match the score in the set $\tilde{X}_{\sqrt{d}\sigma,\epsilon}$ and enable denoising of noisy points in the same set. However, the learning provides little information about the density outside this set, farther from or closer to the data manifold, as noisy samples outside $\tilde{X}_{\sqrt{d}\sigma,\epsilon}$ rarely appear in the training process. An illustration is presented in Figure 1A.

Let $\tilde{X}^C_{\sqrt{d}\sigma,\epsilon}$ denote the complement of the set $\tilde{X}_{\sqrt{d}\sigma,\epsilon}$. Even if $p_{\sigma_0}(\tilde{x} \in \tilde{X}^C_{\sqrt{d}\sigma,\epsilon})$ is very small in high-dimensional space, the score in $\tilde{X}^C_{\sqrt{d}\sigma,\epsilon}$ still plays a critical role in sampling from random initialization. This analysis may explain why models based on denoising score matching trained with a single noise level encounter difficulties in generating data samples when initialized at random. For empirical support of this explanation, see our experiments with models trained with single noise magnitudes (Appendix B). To remedy this problem, one has to apply a learning procedure of the sort proposed in Song and Ermon (2019), in which samples with different noise levels are used. Depending on the dimension of the data, the different noise levels have to be spaced narrowly enough to avoid empty regions in the data space. In the following, we will use Gaussian noise and employ a Gaussian scale mixture to produce the noisy data samples for the training (for details, see Section 3.1 and Appendix A).

Another interesting property of denoising score matching was suggested in the denoising autoencoder literature (Vincent et al., 2010; Karklin and Simoncelli, 2011): with increasing noise level, the learned features tend to have a larger spatial scale. In our experiments we observe a similar phenomenon when training models with denoising score matching at a single noise scale. Comparing the samples in Figure B.1, Appendix B, it is evident that a noise level of 0.3 produces a model that learns short-range correlations spanning only a few pixels, a noise level of 0.6 learns longer stroke structure without coherent overall structure, and a noise level of 1 learns more coherent long-range structure without details such as stroke width variations. This suggests that training with a single noise level in denoising score matching is not sufficient for learning a model capable of high-quality sample synthesis, as such a model has to capture data structure at all scales.
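The two concentration facts above are easy to check numerically; the following short NumPy snippet (with illustrative dimensions, e.g. the flattened CIFAR-10 dimensionality d = 3072) is a sketch of such a sanity check, not part of the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
d, sigma, n = 3072, 0.5, 10000      # d matches e.g. flattened CIFAR-10

noise = sigma * rng.standard_normal((n, d))

# Fact 1: vector lengths concentrate tightly around sqrt(d) * sigma.
lengths = np.linalg.norm(noise, axis=1)
print(lengths.mean(), np.sqrt(d) * sigma)   # both approx. 27.7
print(lengths.std() / lengths.mean())       # relative spread ~ 1%

# Fact 2: a random vector is nearly orthogonal to any fixed vector.
v = rng.standard_normal(d)
cosines = noise @ v / (lengths * np.linalg.norm(v))
print(np.abs(cosines).mean())               # approx. sqrt(2/(pi*d)) ~ 0.014
```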
3 LEARNING ENERGY-BASED MODELS WITH MULTISCALE DENOISING SCORE MATCHING

3.1 MULTISCALE DENOISING SCORE MATCHING

Motivated by the analysis in Section 2, we strive to develop an EBM based on denoising score matching that can be trained with noisy samples in which the noise level is not fixed but drawn from a distribution. The model should approximate the Parzen density estimator of the data, $p_{\sigma_0}(\tilde{x}) = \int q_{\sigma_0}(\tilde{x}|x) p(x) dx$. Specifically, the learning should minimize the difference between the derivative of the energy and the score of $p_{\sigma_0}$ under the expectation $\mathbb{E}_{p_M(\tilde{x})}$ rather than $\mathbb{E}_{p_{\sigma_0}(\tilde{x})}$, the expectation taken in standard denoising score matching. Here $p_M(\tilde{x}) = \int q_M(\tilde{x}|x) p(x) dx$ is chosen to cover the signal space more evenly, to avoid the measure concentration issue described above. The resulting Multiscale Score Matching (MSM) objective is:

$L_{MSM}(\theta) = \mathbb{E}_{p_M(\tilde{x})} \left\| \nabla_{\tilde{x}} \log p_{\sigma_0}(\tilde{x}) + \nabla_{\tilde{x}} E(\tilde{x};\theta) \right\|^2 \qquad (4)$

Compared to the objective of denoising score matching (1), the only change in the new objective (4) is the expectation. Both objectives are consistent if $p_M(\tilde{x})$ and $p_{\sigma_0}(\tilde{x})$ have the same support, as shown formally in Proposition 1 of Appendix A. In Proposition 2, we prove that Equation 4 is equivalent to the following denoising score matching objective:

$L_{MDSM^*} = \mathbb{E}_{p_M(\tilde{x}) q_{\sigma_0}(x|\tilde{x})} \left\| \nabla_{\tilde{x}} \log q_{\sigma_0}(\tilde{x}|x) + \nabla_{\tilde{x}} E(\tilde{x};\theta) \right\|^2 \qquad (5)$

The above results hold for any noise kernel $q_{\sigma_0}(\tilde{x}|x)$, but Equation 5 contains the reversed expectation, which is difficult to evaluate in general. To proceed, we choose $q_{\sigma_0}(\tilde{x}|x)$ to be Gaussian, and also choose $q_M(\tilde{x}|x)$ to be a Gaussian scale mixture: $q_M(\tilde{x}|x) = \int q_\sigma(\tilde{x}|x) p(\sigma) d\sigma$ with $q_\sigma(\tilde{x}|x) = \mathcal{N}(x, \sigma^2 I_d)$. After algebraic manipulation and one approximation (see the derivation following Proposition 2 in Appendix A), we can transform Equation 5 into a more convenient form, which we call Multiscale Denoising Score Matching (MDSM):

$L_{MDSM} = \mathbb{E}_{p(\sigma) q_\sigma(\tilde{x}|x) p(x)} \left\| \nabla_{\tilde{x}} \log q_{\sigma_0}(\tilde{x}|x) + \nabla_{\tilde{x}} E(\tilde{x};\theta) \right\|^2 \qquad (6)$

The square loss term evaluated at noisy points $\tilde{x}$ at larger distances from the true data points $x$ will have larger magnitude. Therefore, in practice it is convenient to add a monotonically decreasing term $l(\sigma)$ for balancing the different noise scales, e.g. $l(\sigma) = 1/\sigma^2$. Ideally, we want our model to learn the correct gradient everywhere, so we would need to add noise of all levels. However, learning denoising score matching at very large or very small noise levels is useless. At very large noise levels the information of the original sample is completely lost. Conversely, in the limit of small noise, the noisy sample is virtually indistinguishable from real data. In neither case can one learn a gradient which is informative about the data structure. Thus, the noise range only needs to be broad enough to encourage learning of data features over all scales. In particular, we do not sample $\sigma$ but instead choose a series of fixed values $\sigma_1, \dots, \sigma_K$.
Further, substituting $\log q_{\sigma_0}(\tilde{x}|x) = -\frac{\|\tilde{x}-x\|^2}{2\sigma_0^2} + C$ into Equation 6, we arrive at the final objective:

$L(\theta) = \sum_{\sigma \in \{\sigma_1,\dots,\sigma_K\}} \mathbb{E}_{q_\sigma(\tilde{x}|x) p(x)}\, l(\sigma) \left\| x - \tilde{x} + \sigma_0^2 \nabla_{\tilde{x}} E(\tilde{x};\theta) \right\|^2 \qquad (7)$

It may seem that $\sigma_0$ is an important hyperparameter of our model, but after our approximation $\sigma_0$ becomes just a scaling factor in front of the energy function and can simply be set to one, as long as the temperature range during sampling is scaled accordingly (see Section 3.2). Therefore the only hyperparameter is the range of noise levels used during training. On the surface, objective (7) looks similar to the one in Song and Ermon (2019). The important difference is that Equation 7 approximates a single distribution, namely $p_{\sigma_0}(\tilde{x})$, the data smoothed with one fixed kernel $q_{\sigma_0}(\tilde{x}|x)$. In contrast, Song and Ermon (2019) approximate the score of multiple distributions, the family of distributions $\{p_{\sigma_i}(\tilde{x}) : i = 1, \dots, n\}$, resulting from the data smoothed by kernels of different widths $\sigma_i$. Because our model learns only a single target distribution, it does not require the noise magnitude as input.
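As a concrete illustration of objective (7), here is a hedged sketch extending the dsm_loss example above to multiple fixed noise scales with the weighting l(σ) = 1/σ²; the particular σ values are assumptions for illustration, not the paper's settings.

```python
def mdsm_loss(params, x, key, sigmas=(0.05, 0.1, 0.2, 0.4, 0.8), sigma0=1.0):
    """Multiscale DSM loss, Eq. (7): a weighted sum of Gaussian DSM losses
    over a series of fixed noise scales; sigma0 is set to 1 as in the text."""
    total = 0.0
    for i, sigma in enumerate(sigmas):
        subkey = jax.random.fold_in(key, i)
        x_tilde = x + sigma * jax.random.normal(subkey, x.shape)
        grads = jax.vmap(lambda xt: energy_grad(params, xt))(x_tilde)
        residual = x - x_tilde + sigma0 ** 2 * grads
        total += (1.0 / sigma ** 2) * jnp.mean(jnp.sum(residual ** 2, axis=-1))
    return total
```

Consistent with the discussion above, the energy network here takes only the noisy sample as input; no noise-level conditioning is needed because a single target distribution is learned.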
This paper presents a method for learning energy-based models using denoising score matching. This technique has been used before, but only with limited success. The authors hypothesize that this is due to the fact that the matching was only performed over a single noise scale. The main idea of this work is to employ a range of scales to learn a single energy function. This trick helps to alleviate the problem of noisy samples concentrating in a low-volume region of the ambient space.
Gradientless Descent: High-Dimensional Zeroth-Order Optimization
1 INTRODUCTION

We consider the problem of zeroth-order optimization (also known as gradient-free optimization, or bandit optimization), where our goal is to minimize an objective function $f : \mathbb{R}^n \to \mathbb{R}$ with as few evaluations of $f(x)$ as possible. For many practical and interesting objective functions, gradients are difficult to compute and there is still a need for zeroth-order optimization in applications such as reinforcement learning (Mania et al., 2018; Salimans et al., 2017; Choromanski et al., 2018), attacking neural networks (Chen et al., 2017; Papernot et al., 2017), hyperparameter tuning of deep networks (Snoek et al., 2012), and network control (Liu et al., 2017).

The standard approach to zeroth-order optimization is, ironically, to estimate the gradients from function values and apply a first-order optimization algorithm (Flaxman et al., 2005). Nesterov & Spokoiny (2011) analyze this class of algorithms as gradient descent on a Gaussian smoothing of the objective and give an accelerated $O(n\sqrt{Q}\log((LR^2 + F)/\epsilon))$ iteration complexity for an $L$-Lipschitz convex function with condition number $Q$, where $R = \|x_0 - x^*\|$ and $F = f(x_0) - f(x^*)$. They propose a two-point evaluation scheme that constructs gradient estimates from the difference between function values at two points that are close to each other. This scheme was extended by Duchi et al. (2015) for stochastic settings, by Ghadimi & Lan (2013) for nonconvex settings, and by Shamir (2017) for non-smooth and non-Euclidean norm settings. Since then, first-order techniques such as variance reduction (Liu et al., 2018), conditional gradients (Balasubramanian & Ghadimi, 2018), and diagonal preconditioning (Mania et al., 2018) have been successfully adopted in this setting. This class of algorithms is also known as stochastic search, random search, or (natural) evolutionary strategies, and has been augmented with a variety of heuristics, such as the popular CMA-ES (Auger & Hansen, 2005).

These algorithms, however, suffer from high variance due to non-robust local minima or highly non-smooth objectives, which are common in the fields of deep learning and reinforcement learning. Mania et al. (2018) note that gradient variance increases as training progresses due to higher variance in the objective functions, since often parameters must be tuned precisely to achieve reasonable models. Therefore, some attention has shifted to direct search algorithms, which usually find a descent direction $u$ and move to $x + \delta u$, where the step size is not scaled by the function difference. The first approaches to direct search were based on deterministic methods with a positive spanning set and date back to the 1950s (Brooks, 1958). Only recently have theoretical bounds surfaced, with Gratton et al. (2015) giving an iteration complexity that is a large polynomial of $n$ and Dodangeh & Vicente (2016) giving an improved $O(n^2 L^2/\epsilon)$. Stochastic approaches tend to have better complexities: Stich et al. (2013) use line search to give an $O(nQ \log(F/\epsilon))$ iteration complexity for convex functions with condition number $Q$, and most recently, Gorbunov et al. (2019) use importance sampling to give an $O(n\bar{Q} \log(F/\epsilon))$ complexity for convex functions with average condition number $\bar{Q}$, assuming access to sampling probabilities. Stich et al.
(2013) note that direct search algorithms are invariant under monotone transforms of the objective, a property that might explain their robustness in high-variance settings.

In general, zeroth-order optimization suffers at least a linear dependence on the input dimension $n$, and recent works have tried to address this limitation when $n$ is large but $f(x)$ admits a low-dimensional structure. Some papers assume that $f(x)$ depends only on $k$ coordinates: Wang et al. (2017) apply Lasso to find the important set of coordinates, whereas Balasubramanian & Ghadimi (2018) simply change the step size to achieve an $O(k(\log(n)/\epsilon)^2)$ iteration complexity. Other papers assume more generally that $f(x) = g(P_A x)$ only depends on a $k$-dimensional subspace given by the range of $P_A$: Djolonga et al. (2013) apply low-rank approximation to find the low-dimensional subspace, while Wang et al. (2013) use random embeddings. Hazan et al. (2017) assume that $f(x)$ is a sparse collection of $k$-degree monomials on the Boolean hypercube and apply sparse recovery to achieve an $O(n^k)$ runtime bound. We will show that in the case $f(x) = g(P_A x)$, our algorithm will inherently pick up any low-dimensional structure in $f(x)$ and achieve a convergence rate that depends on $k \log(n)$. This initial convergence rate survives even if we perturb $f(x) = g(P_A x) + h(x)$, so long as $h(x)$ is sufficiently small.

We will not cover the whole variety of black-box optimization methods, such as Bayesian optimization or genetic algorithms. In general, these methods attempt to solve a broader problem (e.g. multiple optima), have weaker theoretical guarantees, and may require substantial computation at each step: e.g. Bayesian optimization generally has theoretical iteration complexities that grow exponentially in dimension, and CMA-ES lacks provable complexity bounds beyond convex quadratic functions. In addition to the slow runtime and weaker guarantees, Bayesian optimization assumes the success of an inner optimization loop of the acquisition function. This inner optimization is often implemented with many iterations of simpler zeroth-order methods, justifying the need to understand gradientless descent algorithms in their own right.

1.1 OUR CONTRIBUTIONS

In this paper, we present GradientLess Descent (GLD), a class of truly gradient-free algorithms (also known as direct search algorithms) that are parameter-free and provably fast. Our algorithms are based on a simple intuition: for well-conditioned functions, if we start from a point and take a small step in a randomly chosen direction, there is a significant probability that we will reduce the objective function value. We present a novel analysis that relies on facts in high-dimensional geometry and can thus be viewed as a geometric analysis of gradient-free algorithms, recovering the standard convergence rates and step sizes. Specifically, we show that if the step size is on the order of $O(1/\sqrt{n})$, we can guarantee an expected decrease of $1 - \Omega(1/n)$ in the optimality gap, based on geometric properties of the sublevel sets of a smooth and strongly convex function. Our results are invariant under monotone transformations of the objective function; thus our convergence results also hold for a large class of non-convex functions that are a subclass of quasi-convex functions. Specifically, note that monotone transformations of convex functions are not necessarily convex.
However, a monotone transformation of a convex function is always quasi-convex. The maximization of quasi-concave utility functions, which is equivalent to the minimization of quasi-convex functions, is an important topic of study in economics (e.g. Arrow & Enthoven (1961)).

Intuition suggests that the step-size dependence on dimensionality can be improved when $f(x)$ admits a low-dimensional structure. With a careful choice of sampling distribution we can show that if $f(x) = g(P_A x)$, where $P_A$ is a rank-$k$ matrix, then our step size can be on the order of $O(1/\sqrt{k})$, as our optimization behavior is preserved under projections. We call this property affine-invariance and show that the number of function evaluations needed for convergence depends logarithmically on $n$. Unlike most previous algorithms in the high-dimensional setting, no expensive sparse recovery or subspace finding methods are needed. Furthermore, by novel perturbation arguments, we show that our fast convergence rates are robust and hold even under the more realistic assumption that $f(x) = g(P_A x) + h(x)$ with $h(x)$ sufficiently small.

Theorem 1 (Convergence of GLD: Informal Restatement of Theorem 7 and Theorem 14). Let $f(x)$ be any monotone transform of a convex function with condition number $Q$ and $R = \|x_0 - x^*\|$. Let $y$ be a sample from an appropriate distribution centered at $x$. Then, with constant probability,

$f(y) - f(x^*) \le \left(f(x) - f(x^*)\right)\left(1 - \frac{1}{5nQ}\right)$

Therefore, we can find $x_T$ such that $\|x_T - x^*\| \le \epsilon$ after $T = \tilde{O}(nQ \log(R/\epsilon))$ function evaluations. Furthermore, for functions $f(x) = g(P_A x) + h(x)$ with rank-$k$ matrix $P_A$ and sufficiently small $h(x)$, we only require $\tilde{O}(kQ \log(n) \log(R/\epsilon))$ evaluations.

Another advantage of our non-standard geometric analysis is that it allows us to deduce that our rates are optimal with a matching lower bound (up to logarithmic factors), presenting theoretical evidence that gradient-free optimization inherently requires $\Omega(nQ)$ function evaluations to converge. While gradient-estimation algorithms can achieve a better theoretical iteration complexity of $O(n\sqrt{Q})$, they lack the monotone and affine invariance properties. Empirically, we see that invariance properties are important to successful optimization, as validated by experiments on synthetic BBOB and MuJoCo benchmarks that show the competitiveness of GLD against standard optimization procedures.

2 PRELIMINARIES

We first define a few notations for the rest of the paper. Let $X$ be a compact subset of $\mathbb{R}^n$ and let $\|\cdot\|$ denote the Euclidean norm. The diameter of $X$, denoted $\|X\| = \max_{x,x' \in X} \|x - x'\|$, is the maximum distance between elements in $X$. Let $f : X \to \mathbb{R}$ be a real-valued function which attains its minimum at $x^*$. We use $f(X) = \{f(x) : x \in X\}$ to denote the image of $f$ on a subset $X$ of $\mathbb{R}^n$, and $B(c, r) = \{x \in \mathbb{R}^n : \|c - x\| \le r\}$ to denote the ball of radius $r$ centered at $c$.

Definition 2. The level set of $f$ at point $x \in X$ is $L_x(f) = \{y \in X : f(y) = f(x)\}$. The sub-level set of $f$ at point $x \in X$ is $L^{\downarrow}_x(f) = \{y \in X : f(y) \le f(x)\}$. When the function $f$ is clear from the context, we omit it.

Definition 3. We say that $f$ is $\alpha$-strongly convex for $\alpha > 0$ if $f(y) \ge f(x) + \langle \nabla f(x), y - x \rangle + \frac{\alpha}{2}\|y - x\|^2$ for all $x, y \in X$, and $\beta$-smooth for $\beta > 0$ if $f(y) \le f(x) + \langle \nabla f(x), y - x \rangle + \frac{\beta}{2}\|y - x\|^2$ for all $x, y \in X$.

Definition 4.
We say that $g \circ f$ is a monotone transformation of $f$ if $g : f(X) \to \mathbb{R}$ is a monotonically (and strictly) increasing function. Monotone transformations preserve the level sets of a function in the sense that $L_x(f) = L_x(g \circ f)$. Because our algorithms depend only on level set properties, our results generalize to any monotone transformation of a strongly convex and strongly smooth function. This leads to our extended notion of condition number.

Definition 5. A function $f$ has condition number $Q \ge 1$ if it is the minimum ratio $\beta/\alpha$ over all functions $g$ such that $f$ is a monotone transformation of $g$ and $g$ is $\alpha$-strongly convex and $\beta$-smooth.

When we work with low-rank extensions of $f$, we only care about the condition number of $f$ within a rank-$k$ subspace. Indeed, if $f$ only varies along a rank-$k$ subspace, then it has a strong convexity value of 0, making its condition number undefined. If $f$ is $\alpha$-strongly convex and $\beta$-smooth, then its Hessian matrix always has eigenvalues bounded between $\alpha$ and $\beta$. Therefore, we need a notion of a projected condition number. Let $A \in \mathbb{R}^{d \times k}$ be some orthonormal matrix and let $P_A = AA^\top$ be the projection matrix onto the column space of $A$.

Definition 6. For some orthonormal $A \in \mathbb{R}^{d \times k}$ with $d > k$, a function $f$ has condition number restricted to $A$, $Q(A) \ge 1$, if it is the minimum ratio $\beta/\alpha$ over all functions $g$ such that $f$ is a monotone transformation of $g$ and $h(y) = g(Ay)$ is $\alpha$-strongly convex and $\beta$-smooth.
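To ground the intuition from Section 1.1, the following is a minimal Python/NumPy sketch of the gradientless-descent idea: sample a candidate from balls of geometrically decreasing radii around the current iterate and move only if the objective decreases. The radius sweep and all constants are illustrative assumptions, not the paper's exact algorithm; note that only function comparisons are used, so any monotone transform of a convex function behaves identically.

```python
import numpy as np

def gld_step(f, x, r_max, r_min, rng):
    """One GLD iteration: sample one point from each ball B(x, r) for a
    geometric sweep of radii and move to the best sample if it improves f."""
    n = x.shape[0]
    best_x, best_f = x, f(x)
    r = r_max
    while r >= r_min:
        u = rng.standard_normal(n)
        # Uniform sample from the ball B(x, r).
        y = x + r * rng.random() ** (1.0 / n) * u / np.linalg.norm(u)
        fy = f(y)
        if fy < best_f:
            best_x, best_f = y, fy
        r /= 2.0
    return best_x

# Quasi-convex example: a monotone transform of a convex quadratic.
f = lambda x: np.log1p(np.sum(x ** 2))
x = np.full(50, 5.0)
rng = np.random.default_rng(1)
for _ in range(2000):
    x = gld_step(f, x, r_max=4.0, r_min=1e-4, rng=rng)
print(f(x))   # should approach 0
```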
This paper proposes stable GradientLess Descent (GLD) algorithms that do not rely on gradient estimates. Based on the low-rank assumption on P_A, the iteration complexity depends only poly-logarithmically on the dimensionality. The theoretical analysis of the main results is based on a geometric perspective, which is interesting. The experimental results on synthetic and MuJoCo benchmarks validate the effectiveness of the proposed algorithms.
The paper proposes a novel zeroth-order algorithm for high-dimensional optimization. In particular, the algorithm is an instance of direct search algorithms, where no attempt is made to estimate the gradient of the function during the optimization process. The authors study the optimization of monotone transformations of strongly-convex and smooth functions, and they prove complexity bounds as a function of the condition number, the dimensionality, and the desired accuracy. These results are also extended to the case where the function actually depends on a lower-dimensional input. Without any knowledge of the actual subspace of interest, the algorithm is able to adapt to the (lower) dimensionality of the problem. The proposed algorithms are tested on synthetic optimization problems and in a few MuJoCo environments for policy optimization.
Analyzing Privacy Loss in Updates of Natural Language Models
1 INTRODUCTION

Over the last few years, deep learning has made sufficient progress to be integrated into intelligent, user-facing systems, which means that machine learning models are now part of the regular software development lifecycle. As part of this move towards concrete products, models are regularly re-trained to improve performance when new (and more) data becomes available, to handle distributional shift as usage patterns change, and to respect user requests for removal of their data.

In this work, we show that model updates (we use the term "model update" to refer to an update in the parameters of the model, caused for example by a training run on changed data; this is distinct from an update to the model architecture, which changes the number or use of parameters) reveal a surprising amount of information about changes in the training data, in part caused by neural networks' tendency to memorize input data. As a consequence, we can infer fine-grained information about differences in the training data by comparing two trained models, even when the change to the data is as small as 0.0001% of the original dataset. This has severe implications for deploying machine learning models trained on user data, some of them counter-intuitive: for example, honoring a request to remove a user's data from the training corpus can mean that their data becomes exposed by releasing an updated model trained without it. This effect also needs to be considered when using public snapshots of high-capacity models (e.g. BERT (Devlin et al., 2019)) that are then fine-tuned on smaller, private datasets.

We study the privacy implications of language model updates, motivated by their frequent deployment on end-user systems (as opposed to cloud services): for instance, smartphones are routinely shipped with (simple) language models to power predictive keyboards. The privacy issues caused by the memorizing behavior of language models have recently been studied by Carlini et al. (2018), who showed that it is sometimes possible to extract out-of-distribution samples inserted into the training data of a model. In contrast, we focus on in-distribution data, but consider the case of having access to two versions of the model. A similar setting has recently been investigated by Salem et al. (2019a) with a focus on fully-connected and convolutional architectures applied to image classification, whereas we focus on natural language.

We first introduce our setting and methodology in Section 2, defining the notion of a differential score of token sequences with respect to two models. This score reflects the changes in the probabilities of individual tokens in a sequence. We then show how beam search can find token sequences with high differential score and thus recover information about differences in the training data. Our experiments in Section 3 show that our method works in practice on a number of datasets and model architectures, including recurrent neural networks and modern transformer architectures. Specifically, we consider a) a synthetic worst-case scenario where the data used to train two model snapshots differs only in a canary phrase that was inserted multiple times; b) a more realistic scenario where we compare a model trained on Reddit comments with one that was trained on the same data augmented with subject-specific conversations.
We show that an adversary who can query two model snapshots for predictions can recover the canary phrase in the former scenario, and fragments of discourse from conversations in the latter. Moreover, in order to learn information about such model updates, the adversary requires neither information about the data used for training of the models nor knowledge of the model parameters or architecture. Finally, we discuss mitigations such as training with differential privacy in Section 4. While differential privacy grants some level of protection against our attacks, it incurs a substantial decrease in accuracy and a high computational cost.

2 METHODOLOGY

2.1 NOTATION

Let $T$ be a finite set of tokens, $T^*$ be the set of finite token sequences, and $Dist(T)$ denote the set of probability distributions over tokens. A language model $M$ is a function $M : T^* \to Dist(T)$, where $M(t_1 \dots t_{i-1})(t_i)$ denotes the probability that the model assigns to token $t_i \in T$ after reading the sequence $t_1 \dots t_{i-1} \in T^*$. We often write $M_D$ to make explicit that a multiset (i.e., a set that can contain multiple occurrences of each element) $D \subseteq T^*$ was used to train the language model.

2.2 ADVERSARY MODEL

We consider an adversary that has query access to two language models $M_D$, $M_{D'}$ that were trained on datasets $D$, $D'$ respectively (in the following, we use $M$ and $M'$ as shorthand for $M_D$ and $M_{D'}$). The adversary can query the models with any sequence $s \in T^*$ and observe the corresponding outputs $M_D(s), M_{D'}(s) \in Dist(T)$. The goal of the adversary is to infer information about the difference between the datasets $D$, $D'$. This scenario corresponds to the case of language models deployed to client devices, for example in "smart" software keyboards or more advanced applications such as grammar correction.

2.3 DIFFERENTIAL RANK

Our goal is to identify the token sequences whose probability differs most between $M$ and $M'$, as these are most likely to be related to the differences between $D$ and $D'$. To capture this notion formally, we define the differential score $DS$ of token sequences, which is simply the sum of the differences of (contextualized) per-token probabilities. We also define a relative variant $\widetilde{DS}$ based on the relative change in probabilities, which we found to be more robust w.r.t. the "noise" introduced by different random initializations of the models $M$ and $M'$.

Definition 1. Given two language models $M$, $M'$ and a token sequence $t_1 \dots t_n \in T^*$, we define the differential score of a token as the increase in its probability and the relative differential score as the relative increase in its probability. We lift these concepts to token sequences by defining

$DS^{M'}_{M}(t_1 \dots t_n) = \sum_{i=1}^{n} M'(t_1 \dots t_{i-1})(t_i) - M(t_1 \dots t_{i-1})(t_i),$

$\widetilde{DS}^{M'}_{M}(t_1 \dots t_n) = \sum_{i=1}^{n} \frac{M'(t_1 \dots t_{i-1})(t_i) - M(t_1 \dots t_{i-1})(t_i)}{M(t_1 \dots t_{i-1})(t_i)}.$

The differential score of a token sequence is best interpreted relative to that of other token sequences. This motivates ranking sequences according to their differential score.

Definition 2. We define the differential rank $DR(s)$ of $s \in T^*$ as the number of token sequences of length $|s|$ with differential score higher than $s$:

$DR(s) = \left| \left\{ s' \in T^{|s|} \mid DS^{M'}_{M}(s') > DS^{M'}_{M}(s) \right\} \right|$

The lower the rank of $s$, the more $s$ is exposed by a model update.
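As an illustration of Definition 1, here is a minimal sketch in Python; the representation of a model as a callable from a context to a dictionary of next-token probabilities is an assumption made for the example, consistent with the query-only adversary model above.

```python
def differential_score(model_old, model_new, tokens, relative=False):
    """Differential score DS (or relative score, Definition 1) of a token
    sequence. model_old/model_new map a context (tuple of tokens) to a
    dict of next-token probabilities; only query access is assumed."""
    score = 0.0
    for i, token in enumerate(tokens):
        context = tuple(tokens[:i])
        p_old = model_old(context)[token]
        p_new = model_new(context)[token]
        diff = p_new - p_old
        score += diff / p_old if relative else diff
    return score
```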
2.4 APPROXIMATING DIFFERENTIAL RANK

Our goal is to identify the token sequences that are most exposed by a model update, i.e., the sequences with the lowest differential rank (highest differential score). Exact computation of the differential rank for sequences of length $n$ requires exploring a search space of size $|T|^n$. To overcome this exponential blow-up, we propose a heuristic based on beam search. At time step $i$, a beam search of width $k$ maintains a set of $k$ candidate sequences of length $i$. Beam search considers all possible $k|T|$ single-token extensions of these sequences, computes their differential scores, and keeps the $k$ highest-scoring sequences of length $i+1$ among them for the next step. Eventually, the search completes and returns a set $S \subseteq T^n$. We approximate the differential rank $DR(s)$ of a sequence $s$ by its rank among the sequences in the set $S$ computed by beam search, i.e., $\left| \{ s' \in S \mid DS^{M'}_{M}(s') > DS^{M'}_{M}(s) \} \right|$. The beam width $k$ governs a trade-off between computational cost and precision of the result. For a sufficiently large width, $S = T^{|s|}$ and the result is the true rank of $s$. For smaller beam widths, the result is a lower bound on $DR(s)$, as the search may miss sequences with higher differential score than those in $S$. In experiments, we found that shrinking the beam width as the search progresses speeds up the search considerably without compromising the quality of results. Initially, we use a beam width $|T|$, which we halve at each iteration (i.e., we consider $|T|/2$ candidate phrases of length two, $|T|/4$ sequences of length three, ...).

3 EXPERIMENTAL RESULTS

In this section we report on experiments in which we evaluate privacy in language model updates using the methodology described in Section 2. We begin by describing the experimental setup.

3.1 SETUP

For our experiments, we consider three datasets of different size and complexity, matched with standard baseline model architectures whose capacity we adapted to the data size. All of our models are implemented in TensorFlow. Note that the random seeds of the models are not fixed, so repeated training runs of a model on an unchanged dataset will yield (slightly) different results. We will release the source code as well as the analysis tools used in our experimental evaluation at https://double/blind.

Concretely, we use the Penn Treebank (Marcus et al., 1993) (PTB) dataset as a representative of low-data scenarios, as the standard training dataset has only around 900 000 tokens and a vocabulary size of 10 000. As the corresponding model, we use a two-layer recurrent neural network using LSTM cells with 200-dimensional embeddings and hidden states and no additional regularization (this corresponds to the small configuration of Zaremba et al. (2014)). Second, we use a dataset of Reddit comments with 20 million tokens overall, of which we split off 5% as a validation set. We use a vocabulary size of 10 000. As the corresponding model, we rely on a one-layer recurrent neural network using an LSTM cell with 512-dimensional hidden states and 160-dimensional embeddings, using dropout on inputs and outputs with a keep rate of 0.9 as regularizer. These parameters were chosen in line with a neural language model suitable for next-word recommendations on resource-bounded mobile devices. We additionally consider a model based on the Transformer architecture (Vaswani et al., 2017) (more concretely, using the BERT (Devlin et al., 2019) codebase)
with four layers of six attention heads each and a hidden dimension of 192. Finally, we use the WikiText-103 dataset (Merity et al., 2017) with 103 million training tokens as a representative of the big-data regime, using a vocabulary size of 20 000. As the corresponding model, we employ a two-layer RNN with 512-dimensional LSTM cells, a token embedding size of 512, and again dropout on inputs and outputs with a keep rate of 0.9 as regularizer. We combined this large dataset with a relatively low-capacity model (at least by the standards of the state of the art in language modeling) to test whether our analysis results still hold on datasets that clearly require more model capacity than is available.
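Returning to the search procedure of Section 2.4, the following hedged sketch builds on differential_score above and implements the shrinking-beam heuristic; the vocabulary argument and the recomputation of scores from scratch (rather than caching per-token increments) are simplifications for illustration.

```python
def find_exposed_phrases(model_old, model_new, vocab, max_len):
    """Beam search for token sequences with high differential score.
    Starts with beam width |vocab| and halves it at every iteration,
    as described in Section 2.4."""
    beam = [()]                       # start from the empty context
    width = len(vocab)
    for _ in range(max_len):
        candidates = [seq + (tok,) for seq in beam for tok in vocab]
        candidates.sort(
            key=lambda s: differential_score(model_old, model_new, list(s)),
            reverse=True,
        )
        beam = candidates[:max(1, width)]
        width //= 2
    return beam                        # highest-scoring phrases first
```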
This paper looks at privacy concerns regarding the data for a specific model before and after a single update. It discusses the privacy concerns thoroughly and looks at language modeling as a representative task. The authors find that there are plenty of cases, namely when the sequences involve low-frequency words, in which a lot of information leakage occurs.
Analyzing Privacy Loss in Updates of Natural Language Models
1 INTRODUCTION . Over the last few years , deep learning has made sufficient progress to be integrated into intelligent , user-facing systems , which means that machine learning models are now part of the regular software development lifecycle . As part of this move towards concrete products , models are regularly re-trained to improve performance when new ( and more ) data becomes available , to handle distributional shift as usage patterns change , and to respect user requests for removal of their data . In this work , we show that model updates1 reveal a surprising amount of information about changes in the training data , in part , caused by neural network ’ s tendency to memorize input data . As a consequence , we can infer fine-grained information about differences in the training data by comparing two trained models even when the change to the data is as small as 0.0001 % of the original dataset . This has severe implications for deploying machine learning models trained on user data , some of them counter-intuitive : for example , honoring a request to remove a user ’ s data from the training corpus can mean that their data becomes exposed by releasing an updated model trained without it . This effect also needs to be considered when using public snapshots of high-capacity models ( e.g . BERT ( Devlin et al. , 2019 ) ) that are then fine-tuned on smaller , private datasets . We study the privacy implications of language model updates , motivated by their frequent deployment on end-user systems ( as opposed to cloud services ) : for instance , smartphones are routinely shipped with ( simple ) language models to power predictive keyboards . The privacy issues caused by the memorizing behavior of language models have recently been studied by Carlini et al . ( 2018 ) , who showed that it is sometimes possible to extract out-of-distribution samples inserted into the training data of a model . In contrast , we focus on in-distribution data , but consider the case of having access to two versions of the model . A similar setting has recently been investigated by Salem et al . ( 2019a ) with a focus on fully-connected and convolutional architectures applied to image classification , whereas we focus on natural language . We first introduce our setting and methodology in Section 2 , defining the notion of a differential score of token sequences with respect to two models . This score reflects the changes in the probabilities of individual tokens in a sequence . We then show how beam search can find token sequences with high differential score and thus recover information about differences in the training data . Our experiments in Section 3 show that our method works in practice on a number of datasets and model architectures including recurrent neural networks and modern transformer architectures . Specifically , we consider a ) a synthetic worst-case scenario where the data used to train two model snapshots differs only in a canary phrase that was inserted multiple times ; b ) a more realistic scenario where we compare 1We use the term “ model update “ to refer to an update in the parameters of the model , caused for example by a training run on changed data . This is distinct from an update to the model architecture , which changes the number or use of parameters . a model trained on Reddit comments with one that was trained on the same data augmented with subject-specific conversations . 
We show that an adversary who can query two model snapshots for predictions can recover the canary phrase in the former scenario, and fragments of discourse from conversations in the latter. Moreover, in order to learn information about such model updates, the adversary requires neither information about the data used for training of the models nor knowledge of model parameters or architecture. Finally, we discuss mitigations such as training with differential privacy in Section 4. While differential privacy grants some level of protection against our attacks, it incurs a substantial decrease in accuracy and a high computational cost. 2 METHODOLOGY . 2.1 NOTATION . Let T be a finite set of tokens, T^* be the set of finite token sequences, and Dist(T) denote the set of probability distributions over tokens. A language model M is a function M : T^* → Dist(T), where M(t_1 … t_{i−1})(t_i) denotes the probability that the model assigns to token t_i ∈ T after reading the sequence t_1 … t_{i−1} ∈ T^*. We often write M_D to make explicit that a multiset (i.e., a set that can contain multiple occurrences of each element) D ⊆ T^* was used to train the language model. 2.2 ADVERSARY MODEL . We consider an adversary that has query access to two language models M_D, M_{D'} that were trained on datasets D, D' respectively (in the following, we use M and M' as shorthand for M_D and M_{D'}). The adversary can query the models with any sequence s ∈ T^* and observe the corresponding outputs M_D(s), M_{D'}(s) ∈ Dist(T). The goal of the adversary is to infer information about the difference between the datasets D, D'. This scenario corresponds to the case of language models deployed to client devices, for example in "smart" software keyboards or more advanced applications such as grammar correction. 2.3 DIFFERENTIAL RANK . Our goal is to identify the token sequences whose probability differs most between M and M', as these are most likely to be related to the differences between D and D'. To capture this notion formally, we define the differential score DS of token sequences, which is simply the sum of the differences of (contextualized) per-token probabilities. We also define a relative variant, D̃S, based on the relative change in probabilities, which we found to be more robust w.r.t. the "noise" introduced by different random initializations of the models M and M'. Definition 1. Given two language models M, M' and a token sequence t_1 … t_n ∈ T^*, we define the differential score of a token as the increase in its probability and the relative differential score as the relative increase in its probability. We lift these concepts to token sequences by defining
DS^{M'}_{M}(t_1 \ldots t_n) = \sum_{i=1}^{n} \big( M'(t_1 \ldots t_{i-1})(t_i) - M(t_1 \ldots t_{i-1})(t_i) \big),
\widetilde{DS}^{M'}_{M}(t_1 \ldots t_n) = \sum_{i=1}^{n} \frac{M'(t_1 \ldots t_{i-1})(t_i) - M(t_1 \ldots t_{i-1})(t_i)}{M(t_1 \ldots t_{i-1})(t_i)}.
The differential score of a token sequence is best interpreted relative to that of other token sequences. This motivates ranking sequences according to their differential score. Definition 2. We define the differential rank DR(s) of s ∈ T^* as the number of token sequences of length |s| with differential score higher than s:
DR(s) = \big| \{ s' \in T^{|s|} \mid DS^{M'}_{M}(s') > DS^{M'}_{M}(s) \} \big|.
The lower the rank of s, the more s is exposed by a model update.
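To make Definition 1 concrete, here is a minimal sketch of both scores, assuming the models are exposed as Python callables that map a token-sequence prefix to a dictionary of next-token probabilities (the `LanguageModel` interface and all names are our illustrative assumptions, not the authors' code):

```python
from typing import Callable, Dict, Sequence

# Hypothetical interface: a model maps a prefix to next-token probabilities.
LanguageModel = Callable[[Sequence[str]], Dict[str, float]]

def differential_score(m: LanguageModel, m_prime: LanguageModel,
                       tokens: Sequence[str], relative: bool = False) -> float:
    """Sum of per-token probability increases from m to m_prime (Definition 1)."""
    score = 0.0
    for i, token in enumerate(tokens):
        prefix = tokens[:i]
        p_old = m(prefix).get(token, 0.0)
        p_new = m_prime(prefix).get(token, 0.0)
        delta = p_new - p_old
        # The relative variant divides by the old probability, which the paper
        # found more robust to noise from different random initializations.
        score += delta / max(p_old, 1e-12) if relative else delta
    return score
```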
2.4 APPROXIMATING DIFFERENTIAL RANK . Our goal is to identify the token sequences that are most exposed by a model update, i.e., the sequences with the lowest differential rank (highest differential score). Exact computation of the differential rank for sequences of length n requires exploring a search space of size |T|^n. To overcome this exponential blow-up, we propose a heuristic based on beam search. At time step i, a beam search of width k maintains a set of k candidate sequences of length i. Beam search considers all possible k·|T| single-token extensions of these sequences, computes their differential scores, and keeps the k highest-scoring sequences of length i+1 among them for the next step. Eventually, the search completes and returns a set S ⊆ T^n. We approximate the differential rank DR(s) of a sequence s by its rank among the sequences in the set S computed by beam search, i.e., |\{ s' \in S \mid DS^{M'}_{M}(s') > DS^{M'}_{M}(s) \}|. The beam width k governs a trade-off between computational cost and precision of the result. For a sufficiently large width, S = T^{|s|} and the result is the true rank of s. For smaller beam widths, the result is a lower bound on DR(s), as the search may miss sequences with higher differential score than those in S. In experiments, we found that shrinking the beam width as the search progresses speeds up the search considerably without compromising the quality of results. Initially, we use a beam width of |T|, which we halve at each iteration (i.e., we consider |T|/2 candidate phrases of length two, |T|/4 sequences of length three, …).
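A minimal sketch of this shrinking-beam search, reusing the hypothetical `LanguageModel` interface from the previous sketch (again our own simplification, not the released implementation):

```python
import heapq
from typing import List, Sequence, Tuple

def beam_search_exposed(m, m_prime, vocab: Sequence[str],
                        length: int) -> List[Tuple[float, tuple]]:
    """Return candidate sequences of the given length, sorted by differential score.

    Starts with beam width |T| and halves it at each iteration, as described
    in Section 2.4; m and m_prime map a token prefix to next-token probabilities.
    """
    beam = [(0.0, ())]  # (differential score, token sequence)
    width = len(vocab)
    for _ in range(length):
        candidates = []
        for score, seq in beam:
            p_old, p_new = m(seq), m_prime(seq)
            for token in vocab:
                delta = p_new.get(token, 0.0) - p_old.get(token, 0.0)
                candidates.append((score + delta, seq + (token,)))
        beam = heapq.nlargest(width, candidates, key=lambda c: c[0])
        width = max(1, width // 2)  # shrink the beam as the search progresses
    return sorted(beam, key=lambda c: c[0], reverse=True)
```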
3 EXPERIMENTAL RESULTS . In this section we report on experiments in which we evaluate privacy in language model updates using the methodology described in Section 2. We begin by describing the experimental setup. 3.1 SETUP . For our experiments, we consider three datasets of different size and complexity, matched with standard baseline model architectures whose capacity we adapted to the data size. All of our models are implemented in TensorFlow. Note that the random seeds of the models are not fixed, so repeated training runs of a model on an unchanged dataset will yield (slightly) different results. We will release the source code as well as the analysis tools used in our experimental evaluation at https://double/blind. Concretely, we use the Penn Treebank (Marcus et al., 1993) (PTB) dataset as a representative of low-data scenarios, as the standard training dataset has only around 900,000 tokens and a vocabulary size of 10,000. As the corresponding model, we use a two-layer recurrent neural network using LSTM cells with 200-dimensional embeddings and hidden states and no additional regularization (this corresponds to the small configuration of Zaremba et al. (2014)). Second, we use a dataset of Reddit comments with 20 million tokens overall, of which we split off 5% as a validation set. We use a vocabulary size of 10,000. As the corresponding model, we rely on a one-layer recurrent neural network using an LSTM cell with 512-dimensional hidden states and 160-dimensional embeddings, using dropout on inputs and outputs with a keep rate of 0.9 as regularizer. These parameters were chosen in line with a neural language model suitable for next-word recommendations on resource-bounded mobile devices. We additionally consider a model based on the Transformer architecture (Vaswani et al., 2017) (more concretely, using the BERT (Devlin et al., 2019) codebase) with four layers of six attention heads each and a hidden dimension of 192. Finally, we use the Wikitext-103 dataset (Merity et al., 2017) with 103 million training tokens as a representative of a big-data regime, using a vocabulary size of 20,000. As the model, we employ a two-layer RNN with 512-dimensional LSTM cells and a token embedding size of 512, again with dropout on inputs and outputs with a keep rate of 0.9 as regularizer. We combined this large dataset with this relatively low-capacity model (at least by the standards of the state of the art in language modeling) to test whether our analysis results still hold on datasets that clearly require more model capacity than is available.
This paper studies the privacy of widely used neural language models. The authors consider the privacy implications of releasing two model snapshots, before and after an update. The update setting considered in this paper is interesting. However, the contribution is not strong enough, and several experimental settings remain unclear.
SP:9d2476df24b81661dc5ad76b13c8fd5fd1653381
Attraction-Repulsion Actor-Critic for Continuous Control Reinforcement Learning
In reinforcement learning, robotic control tasks are often useful for understanding how agents perform in environments with deceptive rewards, where the agent can easily become trapped in suboptimal solutions. One way to avoid these local optima is to use a population of agents to ensure coverage of the policy space (a form of exploration), yet learning a population with the "best" coverage is still an open problem. In this work, we present a novel approach to population-based RL in continuous control that leverages properties of normalizing flows to perform attractive and repulsive operations between current members of the population and previously observed policies. Empirical results on the MuJoCo suite demonstrate a high performance gain for our algorithm compared to prior work, including Soft Actor-Critic (SAC). 1 INTRODUCTION . Many important reinforcement learning (RL) tasks, such as those in robotics and self-driving cars, are challenging due to large action and state spaces (Lee et al., 2018). In particular, environments with large continuous action spaces are prone to deceptive rewards, i.e., to falling into local optima during learning (Conti et al., 2018). Applying traditional policy optimization algorithms to these domains often leads to locally optimal, yet globally sub-optimal, policies. The agent should then explore the reward landscape more thoroughly in order to avoid falling into these local optima. Not all RL domains that require exploration are suitable for understanding how to train agents that are robust to deceptive rewards. For example, Montezuma's Revenge, a game in the Arcade Learning Environment (Bellemare et al., 2013), has sparse rewards; algorithms that perform best on this task encourage exploration by providing a denser intrinsic reward to the agent (Tang et al., 2017). On the other hand, many robotic control problems, such as those found in MuJoCo (Todorov et al., 2012), provide the agent with a dense reward signal, yet their high-dimensional action spaces induce a multimodal, often deceptive, reward landscape. For example, in the biped environments, coordinating both arms and legs is crucial for performing well even on simple tasks such as forward motion. However, simply learning to maximize the reward can be detrimental across training: agents will tend to run and fall further away from the start point rather than discovering stable and efficient walking motion. In this setting, exploration serves to provide a more reliable learning signal for the agent by covering more different types of actions during learning. One way to maximize action space coverage is the maximum entropy RL framework (Ziebart, 2010), which prevents variance collapse by adding a policy entropy auxiliary objective. One such prominent algorithm, Soft Actor-Critic (SAC, Haarnoja et al. (2018)), has been shown to excel in large continuous action spaces. To further improve on the exploration properties of SAC, one can maintain a population of agents that cover non-identical sections of the policy space. To prevent premature convergence, a diversity-preserving mechanism is typically put in place; balancing the objective and the diversity term becomes key to converging to a global optimum (Hong et al., 2018). This paper studies a particular family of population-based exploration methods, which conduct coordinated local search in the policy space.
Prior work on population-based strategies improves performance on robotic control domains through stochastic perturbation of a single actor's parameters (Pourchot & Sigaud, 2019) or of a set of actors' parameters (Conti et al., 2018; Khadka & Tumer, 2018; Liu et al., 2017). We hypothesize that exploring directly in the policy space will be more effective than perturbing the parameters of the policy, as the latter does not guarantee diversity (i.e., different neural network parameterizations can approximately represent the same function). Given a population of RL agents, we enforce local exploration using an Attraction-Repulsion (AR) mechanism. The latter consists of adding an auxiliary loss that encourages pairwise attraction or repulsion between members of a population, as measured by a divergence term. We make use of the Kullback–Leibler (KL) divergence because of its desirable statistical properties and its ease of computation. However, naively maximizing the KL term between two Gaussian policies can be detrimental (e.g., it drives both means apart). Because of this, we parametrize the policy with a general family of distributions called Normalizing Flows (NFs, Rezende & Mohamed, 2015); this modification allows us to improve upon AR+Gaussian (see Appendix Figure 6). NFs have been shown to improve the expressivity of policies using invertible mappings while maintaining entropy guarantees (Mazoure et al., 2019; Tang & Agrawal, 2018). Nonlinear density estimators have also been used previously for deep RL problems in the contexts of distributional RL (Doan et al., 2018) and reward shaping (Tang et al., 2017). The AR objective blends particularly well with SAC, since computing the KL requires stochastic policies with tractable densities for each agent. 2 PRELIMINARIES . We first formalize the RL setting as a Markov decision process (MDP). A discrete-time, finite-horizon MDP (Bellman, 1957; Puterman, 2014) is described by a state space S, an action space A, a transition function P : S × A × S → R⁺, and a reward function r : S × A → R (both A and S can be either discrete or continuous). On each round t, an agent interacting with this MDP observes the current state s_t ∈ S, selects an action a_t ∈ A, and observes a reward r(s_t, a_t) ∈ R upon transitioning to a new state s_{t+1} ∼ P(s_t, a_t). Let γ ∈ [0, 1] be a discount factor. The goal of an agent evolving in a discounted MDP is to learn a policy π : S × A → [0, 1] such that taking actions a_t ∼ π(·|s_t) maximizes the expected sum of discounted returns,
V^{\pi}(s) = \mathbb{E}_{\pi} \Big[ \sum_{t=0}^{\infty} \gamma^t r(s_t, a_t) \,\Big|\, s_0 = s \Big].
In the following, we use ρ_π to denote the trajectory distribution induced by following policy π. If S or A are vector spaces, action and state vectors are respectively denoted by a and s. 2.1 DISCOVERING NEW SOLUTIONS THROUGH POPULATION-BASED ATTRACTION-REPULSION . Consider evolving a population of M agents, also called individuals, \{π_{θ_m}\}_{m=1}^{M}, each agent corresponding to a policy with its own parameters. In order to discover new solutions, we aim to generate agents that can mimic some target policy while following a path different from those of other policies. Let G denote an archive of policies encountered in previous generations of the population.
A natural way of enforcing π to be different from, or similar to, the policies contained in G is by augmenting the loss of the agent with an Attraction-Repulsion (AR) term:
L_{AR} = - \mathbb{E}_{\pi' \sim G} \big[ \beta_{\pi'} D_{KL}(\pi \,\|\, \pi') \big],   (1)
where π' is an archived policy and β_{π'} is a coefficient weighting the relative importance of the Kullback–Leibler (KL) divergence between π and π', which we will choose to be a function of the average reward (see Sec. 3.2 below). Intuitively, Eq. 1 adds to the agent objective a weighted average distance between the current and the archived policies. For β_{π'} ≥ 0, the agent tends to move away from the archived policy's behavior (i.e., repulsion; see Figure 1a). On the other hand, β_{π'} < 0 encourages the agent π to imitate π' (i.e., attraction). Requirements for AR: in order for agents within a population to be trained using the proposed AR-based loss (Eq. 1), we have the following requirements: (1) their policies should be stochastic, so that the KL-divergence between two policies is well-defined; (2) their policies should have tractable distributions, so that the KL-divergence can be computed easily, either with a closed-form solution or by Monte Carlo estimation. Several RL algorithms enjoy such properties (Haarnoja et al., 2018; Schulman et al., 2015; 2017). In particular, soft actor-critic (SAC, Haarnoja et al., 2018) is a straightforward choice, as it currently outperforms other candidates and is off-policy, and thus maintains a single critic shared among all agents (instead of one critic per agent), which reduces computation costs. 2.2 SOFT ACTOR-CRITIC . SAC (Haarnoja et al., 2018) is an off-policy learning algorithm which finds the information projection of the Boltzmann Q-function onto the set of diagonal Gaussian policies Π:
\pi = \arg\min_{\pi' \in \Pi} D_{KL}\left( \pi'(\cdot|s_t) \,\Big\|\, \frac{\exp\big(\frac{1}{\alpha} Q^{\pi_{old}}(s_t, \cdot)\big)}{Z^{\pi_{old}}(s_t)} \right),
where α ∈ (0, 1) controls the temperature, i.e., the peakedness of the distribution. The policy π, critic Q, and value function V are optimized according to the following loss functions:
L_{\pi, SAC} = \mathbb{E}_{s_t \sim B} \big[ \mathbb{E}_{a_t \sim \pi} [ \alpha \log \pi(a_t|s_t) - Q(s_t, a_t) ] \big]   (2)
L_{Q} = \mathbb{E}_{(s, a, r, s') \sim B} \big[ \{ Q(s, a) - (r + \gamma V^{\pi}_{\nu}(s')) \}^2 \big]   (3)
L_{V} = \mathbb{E}_{s_t \sim B} \Big[ \tfrac{1}{2} \{ V^{\pi}_{\nu}(s_t) - \mathbb{E}_{a_t \sim \pi} [ Q(s_t, a_t) - \alpha \log \pi(a_t|s_t) ] \}^2 \Big],   (4)
where B is the replay buffer. The policy used in SAC as introduced in Haarnoja et al. (2018) is Gaussian, which is both stochastic and tractable, and thus compatible with our AR loss function in Eq. 1. Together with the AR loss in Eq. 1, the final policy loss becomes:
L_{\pi} = L_{\pi, SAC} + L_{AR}   (5)
However, Gaussian policies are arguably of limited expressibility; we can improve on the family of policy distributions without sacrificing the qualities necessary for AR or SAC by using Normalizing Flows (NFs, Rezende & Mohamed, 2015).
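A minimal sketch of the AR term of Eq. 1, assuming PyTorch policy objects with sampling and log-density methods (the method names are hypothetical stand-ins, not the authors' code); the KL is estimated by Monte Carlo, since flow-based policies admit no closed-form KL:

```python
import torch

def attraction_repulsion_loss(policy, archive, betas, states, n_samples=8):
    """Monte Carlo estimate of Eq. 1: L_AR = -E_{pi'~G}[beta_{pi'} KL(pi || pi')]."""
    total = 0.0
    for old_policy, beta in zip(archive, betas):
        # KL(pi || pi') = E_{a ~ pi}[log pi(a|s) - log pi'(a|s)], sampled estimate.
        kl = 0.0
        for _ in range(n_samples):
            actions, log_p = policy.sample_with_log_prob(states)  # hypothetical API
            with torch.no_grad():
                log_p_old = old_policy.log_prob(states, actions)
            kl = kl + (log_p - log_p_old).mean()
        total = total - beta * kl / n_samples
    return total / max(len(archive), 1)  # average over the archive G
```

The final policy loss of Eq. 5 would then simply add this term to the SAC policy loss: `loss = sac_policy_loss + attraction_repulsion_loss(...)`.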
2.3 NORMALIZING FLOWS . NFs (Rezende & Mohamed, 2015) were introduced as a means of transforming simple distributions into more complex ones using learnable and invertible functions. Given a random variable z_0 with density q_0, they define a set of differentiable and invertible functions, \{f_i\}_{i=1}^{N}, which generate a sequence of d-dimensional random variables, \{z_i\}_{i=1}^{N}. Because SAC uses explicit, yet simple, parametric policies, NFs can be used to transform the SAC policy into a richer one (e.g., multimodal) without risk of loss of information. For example, Mazoure et al. (2019) enhanced SAC using a family of radial contractions around a point z_0 ∈ R^d,
f(z) = z + \frac{\beta}{\alpha + \| z - z_0 \|_2} (z - z_0)   (6)
for α ∈ R⁺ and β ∈ R. This results in a rich set of policies comprised of an initial noise sample a_0, a state-noise embedding h_θ(a_0, s_t), and a flow \{f_{\phi_i}\}_{i=1}^{N} of arbitrary length N, parameterized by φ = \{\phi_i\}_{i=1}^{N}. Sampling from the policy π_{φ,θ}(a_t|s_t) can be described by the following set of equations:
a_0 \sim N(0, I);\quad z = h_{\theta}(a_0, s_t);\quad a_t = f_{\phi_N} \circ f_{\phi_{N-1}} \circ \ldots \circ f_{\phi_1}(z),   (7)
where h_θ = a_0 σ I + μ(s_t) depends on the state and the noise variance σ > 0. Different SAC policies can thus be crafted by parameterizing their NF layers.
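A minimal PyTorch sketch of the radial-flow sampling path of Eqs. 6–7 (the module and function names are ours, not the authors' released code):

```python
import torch
import torch.nn as nn

class RadialFlow(nn.Module):
    """One radial contraction f(z) = z + beta / (alpha + ||z - z0||) * (z - z0)."""
    def __init__(self, dim):
        super().__init__()
        self.z0 = nn.Parameter(torch.zeros(dim))
        self.log_alpha = nn.Parameter(torch.zeros(()))  # alpha > 0 via exp
        self.beta = nn.Parameter(torch.zeros(()))

    def forward(self, z):
        diff = z - self.z0
        r = diff.norm(dim=-1, keepdim=True)
        return z + self.beta / (self.log_alpha.exp() + r) * diff

def sample_action(mu_net, flows, state, sigma=0.1):
    """Eq. 7: a0 ~ N(0, I); z = a0 * sigma + mu(s); a = f_N o ... o f_1(z)."""
    a0 = torch.randn_like(mu_net(state))
    z = a0 * sigma + mu_net(state)  # state-noise embedding h_theta(a0, s)
    for f in flows:
        z = f(z)
    return z
```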
The paper proposes an ensemble method for reinforcement learning in which the policy updates are modulated with a loss that encourages diversity among all experienced policies. It is a combination of SAC, normalizing flow policies, and an approach to diversity considered by Hong et al. (2018). The work seems rather incremental and the experiments have some methodological flaws. Specifically, the main results (Fig. 4) are based on a comparison between 4 different codebases, which makes it impossible to draw meaningful conclusions, as pointed out e.g. by [1]. The authors mention that their work is built on the work of Hong et al. (2018), yet the comparisons do not seem to include it as a baseline. I'm also concerned about how exactly environment steps are counted: in Algorithm 1 on line 27, it seems that the fitness used for training is evaluated by interacting with the environment, yet these interactions are not counted towards total_step.
SP:044d99499c4a9cb383f5e39a28fc7ccb700040d1
Attraction-Repulsion Actor-Critic for Continuous Control Reinforcement Learning
RL in environments with deceptive rewards can produce sub-optimal policies. To remedy this, the paper proposes a method for population-based exploration. Multiple actors, each parameterized with policies based on Normalizing Flows (radial contractions), are optimized over iterations using the off-policy SAC algorithm. To encourage diverse exploration as well as high performance, the SAC policy gradient is supplemented with the gradient of an "attraction" or "repulsion" term, defined using the KL-divergence of the current policy to another policy from an online archive. When applying the KL-gradient, the authors find it crucial to only update the flow layers, and not the base Gaussian policy.
SP:044d99499c4a9cb383f5e39a28fc7ccb700040d1
From Variational to Deterministic Autoencoders
1 INTRODUCTION . Generative models lie at the core of machine learning. By capturing the mechanisms behind the data generation process, one can reason about data probabilistically, access and traverse the low-dimensional manifold the data is assumed to live on, and ultimately generate new data. It is therefore not surprising that generative models have gained momentum in applications such as computer vision (Sohn et al., 2015; Brock et al., 2019), NLP (Bowman et al., 2016; Severyn et al., 2017), and chemistry (Kusner et al., 2017; Jin et al., 2018; Gómez-Bombarelli et al., 2018). Variational Autoencoders (VAEs) (Kingma & Welling, 2014; Rezende et al., 2014) cast learning representations for high-dimensional distributions as a variational inference problem. Learning a VAE amounts to optimizing an objective that balances the quality of samples autoencoded through a stochastic encoder–decoder pair against encouraging the latent space to follow a fixed prior distribution. Since their introduction, VAEs have become one of the frameworks of choice among the different generative models. VAEs promise theoretically well-founded and more stable training than Generative Adversarial Networks (GANs) (Goodfellow et al., 2014) and more efficient sampling mechanisms than autoregressive models (Larochelle & Murray, 2011; Germain et al., 2015). However, the VAE framework is still far from delivering the promised generative mechanism, as there are several practical and theoretical challenges yet to be solved. (An implementation is available at: https://github.com/ParthaEth/Regularized_autoencoders-RAE-) A major weakness of VAEs is the tendency to strike an unsatisfying compromise between sample quality and reconstruction quality. In practice, this has been attributed to overly simplistic prior distributions (Tomczak & Welling, 2018; Dai & Wipf, 2019) or, alternatively, to the inherent over-regularization induced by the KL divergence term in the VAE objective (Tolstikhin et al., 2017). Most importantly, the VAE objective itself poses several challenges, as it admits trivial solutions that decouple the latent space from the input (Chen et al., 2017; Zhao et al., 2017), leading to the posterior collapse phenomenon in conjunction with powerful decoders (van den Oord et al., 2017). Furthermore, due to its variational formulation, training a VAE requires approximating expectations through sampling at the cost of increased variance in gradients (Burda et al., 2015; Tucker et al., 2017), making initialization, validation, and annealing of hyperparameters essential in practice (Bowman et al., 2016; Higgins et al., 2017; Bauer & Mnih, 2019). Lastly, even after a satisfactory convergence of the objective, the learned aggregated posterior distribution rarely matches the assumed latent prior in practice (Kingma et al., 2016; Bauer & Mnih, 2019; Dai & Wipf, 2019), ultimately hurting the quality of generated samples. All in all, much of the attention around VAEs is still directed towards "fixing" the aforementioned drawbacks associated with them. In this work, we take a different route: we question whether the variational framework adopted by VAEs is necessary for generative modeling and, in particular, for obtaining a smooth latent space.
We propose to adopt a simpler, deterministic version of VAEs that scales better, is simpler to optimize, and, most importantly, still produces a meaningful latent space and equally good or better samples than VAEs or stronger alternatives, e.g., Wasserstein Autoencoders (WAEs) (Tolstikhin et al., 2017). We do so by observing that, under commonly used distributional assumptions, training a stochastic encoder–decoder pair in VAEs does not differ from training a deterministic architecture where noise is added to the decoder's input. We investigate how to substitute this noise injection mechanism with other regularization schemes in the proposed deterministic Regularized Autoencoders (RAEs), and we thoroughly analyze how this affects performance. Finally, we equip RAEs with a generative mechanism via a simple ex-post density estimation step on the learned latent space. In summary, our contributions are as follows: i) we introduce the RAE framework for generative modeling as a drop-in replacement for many common VAE architectures; ii) we propose an ex-post density estimation scheme which greatly improves sample quality for VAEs, WAEs, and RAEs without the need to retrain the models; iii) we conduct a rigorous empirical evaluation to compare RAEs with VAEs and several baselines on standard image datasets and on more challenging structured domains such as molecule generation (Kusner et al., 2017; Gómez-Bombarelli et al., 2018). 2 VARIATIONAL AUTOENCODERS . For a general discussion, we consider a collection of high-dimensional i.i.d. samples X = \{x_i\}_{i=1}^{N} drawn from the true data distribution p_{data}(x) over a random variable X taking values in the input space. The aim of generative modeling is to learn from X a mechanism to draw new samples x_{new} ∼ p_{data}. Variational Autoencoders provide a powerful latent variable framework to infer such a mechanism. The generative process of the VAE is defined as
z_{new} \sim p(Z), \quad x_{new} \sim p_{\theta}(X \mid Z = z_{new}),   (1)
where p(Z) is a fixed prior distribution over a low-dimensional latent space Z. A stochastic decoder
D_{\theta}(z) = x \sim p_{\theta}(x \mid z) = p(X \mid g_{\theta}(z))   (2)
links the latent space to the input space through the likelihood distribution p_θ, where g_θ is an expressive non-linear function parameterized by θ (with slight abuse of notation, we use lowercase letters for both random variables and their realizations, e.g., p_θ(x | z) instead of p(X | Z = z), when it is clear how to discriminate between the two). As a result, a VAE estimates p_{data}(x) as the infinite mixture model p_θ(x) = ∫ p_θ(x | z) p(z) dz. At the same time, the input space is mapped to the latent space via a stochastic encoder
E_{\phi}(x) = z \sim q_{\phi}(z \mid x) = q(Z \mid f_{\phi}(x)),   (3)
where q_φ(z | x) is the posterior distribution given by a second function f_φ parameterized by φ. Computing the marginal log-likelihood log p_θ(x) is generally intractable. One therefore follows a variational approach, maximizing the evidence lower bound (ELBO) for a sample x:
\log p_{\theta}(x) \geq ELBO(\phi, \theta, x) = \mathbb{E}_{z \sim q_{\phi}(z|x)} \log p_{\theta}(x \mid z) - KL(q_{\phi}(z \mid x) \,\|\, p(z)).   (4)
Maximizing Eq. 4 over the data X w.r.t. the model parameters φ, θ corresponds to minimizing the loss
\arg\min_{\phi, \theta} \mathbb{E}_{x \sim p_{data}} L_{ELBO} = \mathbb{E}_{x \sim p_{data}} [ L_{REC} + L_{KL} ],   (5)
where L_REC and L_KL are defined for a sample x as follows:
L_{REC} = -\mathbb{E}_{z \sim q_{\phi}(z|x)} \log p_{\theta}(x \mid z), \quad L_{KL} = KL(q_{\phi}(z \mid x) \,\|\, p(z)).   (6)
Intuitively, the reconstruction loss L_REC takes into account the quality of autoencoded samples x through D_θ(E_φ(x)), while the KL-divergence term L_KL encourages q_φ(z | x) to match the prior p(z) for each z, which acts as a regularizer during training (Hoffman & Johnson, 2016). 2.1 PRACTICE AND SHORTCOMINGS OF VAES . To fit a VAE to data through Eq. 5, one has to specify the parametric forms for p(z), q_φ(z | x), p_θ(x | z), and hence the deterministic mappings f_φ and g_θ. In practice, the choice of the above distributions is guided by trading off computational complexity against model expressiveness. In the most commonly adopted formulation of the VAE, q_φ(z | x) and p_θ(x | z) are assumed to be Gaussian:
E_{\phi}(x) \sim N(Z \mid \mu_{\phi}(x), \mathrm{diag}(\sigma_{\phi}(x))), \quad D_{\theta}(E_{\phi}(x)) \sim N(X \mid \mu_{\theta}(z), \mathrm{diag}(\sigma_{\theta}(z))),   (7)
with means μ_φ, μ_θ and covariance parameters σ_φ, σ_θ given by f_φ and g_θ. In practice, the covariance of the decoder is set to the identity matrix for all z, i.e., σ_θ(z) = 1 (Dai & Wipf, 2019). The expectation of L_REC in Eq. 6 must be approximated via k Monte Carlo point estimates. It is expected that the quality of the Monte Carlo estimate, and hence convergence during learning and sample quality, increases for larger k (Burda et al., 2015). However, only a 1-sample approximation is generally carried out (Kingma & Welling, 2014), since memory and time requirements are prohibitive for large k. With the 1-sample approximation, L_REC can be computed as the mean squared error between input samples and their mean reconstructions μ_θ by a decoder that is deterministic in practice:
L_{REC} = \| x - \mu_{\theta}(E_{\phi}(x)) \|_2^2.   (8)
Gradients w.r.t. the encoder parameters φ are computed through the expectation of L_REC in Eq. 6 via the reparametrization trick (Kingma & Welling, 2014), where the stochasticity of E_φ is relegated to an auxiliary random variable ε which does not depend on φ:
E_{\phi}(x) = \mu_{\phi}(x) + \sigma_{\phi}(x) \odot \epsilon, \quad \epsilon \sim N(0, I),   (9)
where ⊙ denotes the Hadamard product. An additional simplifying assumption involves fixing the prior p(z) to be a d-dimensional isotropic Gaussian N(Z | 0, I). For this choice, the KL-divergence for a sample x is given in closed form: 2 L_{KL} = \|\mu_{\phi}(x)\|_2^2 - d + \sum_{i=1}^{d} \big( \sigma_{\phi}(x)_i - \log \sigma_{\phi}(x)_i \big).
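A minimal PyTorch sketch of the 1-sample losses of Eqs. 8–9 and the closed-form KL (the encoder/decoder modules and names are hypothetical stand-ins, not the paper's code):

```python
import torch

def vae_losses(mu, log_var, x, decoder):
    """1-sample ELBO terms for a Gaussian encoder N(mu, diag(exp(log_var))).

    `decoder` is any module mapping latents z to mean reconstructions mu_theta(z).
    """
    # Reparametrization trick (Eq. 9): z = mu + sigma * eps, eps ~ N(0, I)
    eps = torch.randn_like(mu)
    z = mu + torch.exp(0.5 * log_var) * eps
    # 1-sample reconstruction loss (Eq. 8): squared error to the mean decoding
    l_rec = ((x - decoder(z)) ** 2).sum(dim=-1)
    # Closed-form KL to the isotropic Gaussian prior:
    # 2*L_KL = ||mu||^2 - d + sum_i (sigma_i - log sigma_i), sigma_i the variances
    var = log_var.exp()
    l_kl = 0.5 * (mu.pow(2).sum(-1) - mu.size(-1) + (var - log_var).sum(-1))
    return l_rec.mean(), l_kl.mean()
```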
While the above assumptions make VAEs easy to implement, the stochasticity in the encoder and decoder is still problematic in practice (Makhzani et al., 2016; Tolstikhin et al., 2017; Dai & Wipf, 2019). In particular, one has to carefully balance the trade-off between the L_KL term and L_REC during optimization (Dai & Wipf, 2019; Bauer & Mnih, 2019). A too-large weight on the L_KL term can dominate L_ELBO, having the effect of over-regularization. As this would smooth the latent space, it can directly affect sample quality in a negative way. Heuristics to avoid this include manually fine-tuning or gradually annealing the importance of L_KL during training (Bowman et al., 2016; Bauer & Mnih, 2019). We also observe this trade-off in a practical experiment in Appendix A. Even after employing the full array of approximations and "tricks" to reach convergence of Eq. 5 for a satisfactory set of parameters, there is no guarantee that the learned latent space is distributed according to the assumed prior distribution. In other words, the aggregated posterior distribution q_φ(z) = E_{x ∼ p_{data}} q_φ(z | x) has been shown not to conform well to p(z) after training (Tolstikhin et al., 2017; Bauer & Mnih, 2019; Dai & Wipf, 2019). This critical issue severely hinders the generative mechanism of VAEs (cf. Eq. 1), since latent codes sampled from p(z) (instead of q_φ(z)) might lead to regions of the latent space that were previously unseen to D_θ during training. This results in generating out-of-distribution samples. We refer the reader to Appendix H for a visual demonstration of this phenomenon on the latent space of VAEs. We analyze solutions to this problem in Section 4.
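One such solution, the ex-post density estimation step introduced above, can be sketched minimally as follows (the 10-component GMM, module interfaces, and names are our assumptions, not the paper's exact recipe):

```python
import torch
from sklearn.mixture import GaussianMixture

def expost_sample(encoder, decoder, train_x, n_samples=64, n_components=10):
    """Fit a density estimator on the learned latent codes, then decode samples.

    Sampling from a distribution fitted to the aggregated posterior, instead of
    the fixed prior p(z), avoids latent regions the decoder never saw in training.
    """
    with torch.no_grad():
        z_train = encoder(train_x).cpu().numpy()
    gmm = GaussianMixture(n_components=n_components).fit(z_train)
    z_new, _ = gmm.sample(n_samples)
    with torch.no_grad():
        return decoder(torch.as_tensor(z_new, dtype=torch.float32))
```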
This paper proposes an extension to deterministic autoencoders. Motivated by VAEs, the authors propose RAEs, which replace the noise injection in the encoders of VAEs with an explicit regularization term on the latent representations. As a result, the model becomes a deterministic autoencoder with an L_2 regularization on the latent representation z. To make the model generalize well, the authors also add a decoder regularization term L_REG. In addition, because the encoder in RAE is deterministic, the authors propose several ex-post density estimation techniques for generating samples.
SP:e4f5ca770474ba98dc7643522ea6435f0586c292
From Variational to Deterministic Autoencoders
The paper studies (the more conventional) deterministic auto-encoders, as they are easier to train than VAEs. To maintain the model's capability of approximating the data distribution and of synthesizing new unseen samples, the paper both looks at imposing additional regularization terms towards a smooth decoder and proposes to sample from a latent distribution induced from the empirical embeddings (similar to an aggregated posterior in VAEs). Experiments mostly contrast VAEs with the proposed RAEs in terms of the quality of the generated samples.
SP:e4f5ca770474ba98dc7643522ea6435f0586c292
Predictive Coding for Boosting Deep Reinforcement Learning with Sparse Rewards
1 INTRODUCTION . Recent progress in deep reinforcement learning (DRL) has enabled robots to learn and execute complex tasks, ranging from game playing (Jaderberg et al., 2018; OpenAI, 2019) and robotic manipulation (Andrychowicz et al., 2017; Haarnoja et al., 2018) to navigation (Zhang et al., 2017). However, in many scenarios learning depends heavily on meaningful and frequent feedback from the environment for the agent to learn and correct behaviors. As a result, reinforcement learning (RL) problems with sparse rewards still remain a difficult challenge (Riedmiller et al., 2018; Agarwal et al., 2019). In a sparse reward setting, the agent typically explores without receiving any reward, until it enters a small subset of the environment space (the "goal"). Due to the lack of frequent feedback from the environment, learning in sparse reward problems is typically hard, and relies heavily on the agent entering the "goal" during exploration. A possible way to tackle this is through reward shaping (Devlin & Kudenko, 2012; Zou et al., 2019; Gao & Toni, 2015), where manually designed rewards are added to the environment to guide the agent towards finding the "goal"; however, this approach often requires domain knowledge of the environment, and may bias learning if the shaped rewards are not robust (Ng et al., 1999). RL problems often benefit from representation learning (Bengio et al., 2013), which studies the transformation of raw observations of an environment (sensors, images, coordinates, etc.) into a more meaningful form, such that the agent can more easily extract information useful for learning. Intuitively, raw states contain redundant or irrelevant information about the environment, which the agent must take time to learn to distinguish and remove; representation learning directly tackles this problem by either eliminating redundant dimensions (Kingma & Welling, 2013; van den Oord et al., 2017) or emphasizing more useful elements of the state (Nachum et al., 2018a). Much of the prior work on representation learning focuses on generative approaches to model the environment, but some recent work also studies optimizations that learn important features (Ghosh et al., 2018). In this paper, we tackle the challenge of DRL in solving sparse reward tasks: we apply representation learning to provide the agent meaningful rewards without the need for domain knowledge. In particular, we propose to use predictive coding in an unsupervised fashion to extract features that maximize the mutual information (MI) between consecutive states in a state trajectory. These predictive features have the potential to simplify the structure of an environment's state space: they are optimized to both summarize the past and predict the future, capturing the most important elements of the environment dynamics. We show this method is useful for model-free learning from either raw states or images, and can be applied on top of any general deep reinforcement learning algorithm, such as PPO (Schulman et al., 2017). Although MI has traditionally been difficult to compute, recent advances have suggested optimizing a tractable lower bound on the quantity (Hjelm et al., 2018; Belghazi et al., 2018; Oord et al., 2018).
2018), to extract features that maximize MI between consecutive states in trajectories collected during exploration (note that the approach is not restricted to a specific predictive coding scheme such as CPC). Such features are then used for simple reward shaping in representation space to provide the agent better feedback in sparse-reward problems. We demonstrate the validity of our method through extensive numerical simulations in a wide range of control environments such as maze navigation, robot locomotion, and robotic arm manipulation (Figure 4). In particular, we show that using these predictive features, we can provide reward signals as effective for learning as hand-shaped rewards, which encode domain and task knowledge. This paper is structured as follows: we provide preliminary information in Section 2 and discuss relevant work in Section 3; we then explain and illustrate the proposed method in Section 4; lastly, we present experimental results in Section 5 and conclude by discussing the results and pointing out future work in Section 6. 2 PRELIMINARIES. Reinforcement Learning and Reward Shaping: This paper assumes a finite-horizon Markov Decision Process (MDP) (Puterman, 1994), defined by a tuple (S, A, P, r, γ, T). Here, S ⊆ R^d denotes the state space, A ⊆ R^m the action space, P : S × A × S → R_+ the state transition distribution, r : S × A → R the reward function, γ ∈ [0, 1] the discount factor, and T the horizon. At each step t, the action a_t ∈ A is sampled from a policy distribution π_θ(a_t | s_t), where s_t ∈ S and θ is the policy parameter. After transiting into the next state by sampling from p(s_{t+1} | a_t, s_t), where p ∈ P, the agent receives a scalar reward r(s_t, a_t). The agent continues performing actions until it enters a terminal state or t reaches the horizon, by which point the agent has completed one episode. We let τ denote the sequence of states the agent enters in one episode. With this definition, the goal of RL is to learn a policy π_{θ*}(a_t | s_t) that maximizes the expected discounted reward $\mathbb{E}_{\pi,P}[R(\tau_{0:T-1})] = \mathbb{E}_{\pi,P}\left[\sum_{t=0}^{T-1} \gamma^t r(s_t, a_t)\right]$, where the expectation is taken over the possible trajectories τ and starting states s_0. In this paper, we assume model-free learning, meaning the agent does not have access to P. Reward shaping essentially replaces the original MDP with a new one whose reward function is r'(s_t, a_t). In this paper, reward shaping is done to train a policy π_{r'} that maximizes the expected discounted reward in the original MDP, i.e., $\mathbb{E}_{\pi_{r'},P}[R(\tau_{0:T-1})]$. Mutual Information and Predictive Coding: Mutual information measures the amount of information obtained about one random variable after observing another random variable (Cover & Thomas, 2012). Formally, given two random variables X and Y with joint distribution p(x, y) and marginal densities p(x) and p(y), their MI is defined as the KL-divergence between the joint density and the product of the marginals: $MI(X; Y) = D_{KL}(p(x, y) \,\|\, p(x)p(y)) = \mathbb{E}_{p(x,y)}\left[\log \frac{p(x,y)}{p(x)p(y)}\right]$. (1) Predictive coding in this paper aims to maximize the MI between consecutive states in the same state trajectory. As MI is difficult to compute, we adopt the method of optimizing a lower bound, InfoNCE (Oord et al.,
2018), which uses the current context c_t to predict a future state s_{t+k}: $MI(s_{t+k}; c_t) \geq \mathbb{E}_S\left[\log \frac{f(z_{t+k}, c_t)}{f(z_{t+k}, c_t) + \sum_{s_j \in S} f(z_j, c_t)}\right]$ (2) Here, f(x, y) is optimized through cross entropy to model a density ratio: $f(x, y) \propto \frac{p(x \mid y)}{p(x)}$. z_{t+k} is the embedding of state x_{t+k} produced by the encoder, and c_t is obtained by summarizing the embeddings of the previous n states in a segment of a trajectory, z_{t-n+1:t}, through a gated recurrent unit (Cho et al., 2014). Intuitively, the context c_t attends to the evolution of states in order to summarize the past and predict the future; thus, it forces the encoder to extract only the essential dynamical elements of the environment, the elements that encapsulate state evolution. 3 RELEVANT WORK. Our paper uses the method of Contrastive Predictive Coding (CPC) (Oord et al., 2018), which includes experiments in the domain of RL. In the CPC paper, the InfoNCE loss is applied to the LSTM component (Hochreiter & Schmidhuber, 1997) of an A2C architecture (Mnih et al., 2016; Espeholt et al., 2018). The LSTM maps every state observation to an embedding, which is then directly used for learning. This differs from our approach, where we train on pre-collected trajectories to obtain embeddings and use these embeddings only to provide rewards to the agent, which still learns on the raw states. Our approach has two main advantages: 1) preprocessing states allows us to collect exploration-focused trajectories and obtain embeddings that are suitable for multi-tasking; 2) using embeddings to provide rewards is more resistant to noise in the embeddings than using them as training features, since in the former case we care more about the accumulation of rewards across multiple states, where the noise is diluted. Applying representation learning to RL has been studied in many prior works (Nachum et al., 2018a; Ghosh et al., 2018; Oord et al., 2018; Caselles-Dupré et al., 2018). In a recent paper on actionable representations (Ghosh et al., 2018), representation learning is also applied to providing the agent useful reward signals. In that work, states are treated as goals, and embeddings are optimized so that the distance between two states reflects the difference between the policies required to reach them. This is fundamentally different from our approach, which aims to extract features with predictive qualities. Furthermore, computing actionable representations requires trained goal-conditioned policies as part of the optimization, which is a strict requirement, while this paper aims to produce useful representations without needing access to trained policies. Lastly, VQ-VAE (van den Oord et al., 2017) is a generative approach that provides a principled way of extracting low-dimensional features. In contrast to the VAE (Kingma & Welling, 2013), it outputs a discrete codebook, and the prior distribution is learned rather than static. VQ-VAE could be useful for removing redundant information from raw states, which may speed up learning; however, since the goal of VQ-VAE is reconstruction, it neither emphasizes features that are particularly useful for learning nor attempts to capture the environment dynamics across long segments of states.
Our use of predictive coding is thus a better fit for reinforcement learning, as we emphasize features that help understand the evolution of states rather than reconstruct each individual state.
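To make the InfoNCE objective of Eq. 2 concrete, the following is a minimal sketch in PyTorch; the layer sizes, class name, and the use of in-batch negatives are illustrative assumptions, not the authors' actual architecture. Minimizing the cross entropy maximizes the lower bound, since the softmax over the batch plays the role of the positive-plus-negatives denominator.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal InfoNCE sketch (Eq. 2), assuming PyTorch; sizes are illustrative.
class PredictiveCoder(nn.Module):
    def __init__(self, state_dim, emb_dim=64, max_k=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(), nn.Linear(128, emb_dim))
        self.gru = nn.GRU(emb_dim, emb_dim, batch_first=True)
        # One linear map per prediction horizon k models f(z_{t+k}, c_t).
        self.W = nn.ModuleList(
            [nn.Linear(emb_dim, emb_dim, bias=False) for _ in range(max_k)])

    def infonce_loss(self, states, t, k):
        # states: (batch, T, state_dim); predict step t+k from the context at t.
        z = self.encoder(states)                   # (B, T, emb)
        out, _ = self.gru(z[:, : t + 1])           # summarize z_{0:t}
        c_t = out[:, -1]                           # context vector (B, emb)
        pred = self.W[k - 1](c_t)                  # predicted embedding of s_{t+k}
        logits = pred @ z[:, t + k].T              # (B, B) scores f(z_j, c_t)
        # Diagonal entries are the positives; the other rows of the batch
        # serve as the negative samples in the denominator of Eq. 2.
        labels = torch.arange(states.size(0), device=states.device)
        return F.cross_entropy(logits, labels)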
This paper proposes using the features learned through Contrastive Predictive Coding as a means for reward shaping. Specifically, the authors propose to cluster the embeddings and use the clusters to provide feedback to the agent by applying a positive reward when the agent enters the goal cluster. In more complex domains, they add a negative distance term between the embeddings of the current state and the goal state. Finally, they provide empirical evidence of their algorithm working in toy domains (such as Four Rooms and U-maze) as well as in a set of control environments including AntMaze and Pendulum.
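Based on the review's description, a rough sketch of such cluster-based shaping could look as follows; the clustering method, cluster count, bonus magnitude, and distance weight are all hypothetical choices, not details confirmed by the paper.

import numpy as np
from sklearn.cluster import KMeans

# Sketch of the reward shaping described in the review: cluster the learned
# embeddings, reward entering the goal's cluster, and (in harder domains)
# penalize embedding distance to the goal. All hyperparameters are assumed.
def make_shaped_reward(embeddings, goal_emb, n_clusters=20, dist_weight=0.1):
    kmeans = KMeans(n_clusters=n_clusters, n_init=10).fit(embeddings)
    goal_cluster = kmeans.predict(goal_emb[None, :])[0]

    def shaped_reward(state_emb):
        in_goal = kmeans.predict(state_emb[None, :])[0] == goal_cluster
        bonus = 1.0 if in_goal else 0.0
        # Negative embedding distance to the goal, per the review's account.
        return bonus - dist_weight * np.linalg.norm(state_emb - goal_emb)

    return shaped_reward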
SP:7cd001a35175d8565c046093dcf070ba7fa988d6
Predictive Coding for Boosting Deep Reinforcement Learning with Sparse Rewards
The paper proposes a reward-shaping method that aims to tackle sparse-reward tasks. The paper first trains a representation using contrastive predictive coding and then uses the learned representation to provide feedback to the control agent. The main difference from the previous work (i.e., CPC) is that the paper uses the learned representation for reward shaping rather than for learning on top of the representation itself. This is an interesting research topic.
SP:7cd001a35175d8565c046093dcf070ba7fa988d6
Skew-Explore: Learn faster in continuous spaces with sparse rewards
1 INTRODUCTION. Reinforcement Learning (RL) is based on performing exploratory actions in a trial-and-error manner and reinforcing those actions that result in superior reward outcomes. Exploration plays an important role in solving a given sequential decision-making problem. An RL agent cannot improve its behaviour without receiving rewards that exceed its expectation, and this happens only as a consequence of properly exploring the environment. In this paper, we propose a method to train a policy that efficiently explores a continuous state space. Our method is particularly well suited to sequential decision-making tasks with sparse terminal rewards, i.e., rewards received at the end of a successful interaction with the environment. We propose to directly maximize the entropy of the history states by exploiting the mutual information between the history states and a number of reference states. To achieve this, we introduce a novel reward function which, given the references, shapes the distribution of the history states. This reward function, combined with goal-proposing learning frameworks, maximizes the entropy of the history states. We demonstrate that directly maximizing the state entropy, compared to indirectly maximizing the mutual information (Warde-Farley et al., 2018; Pong et al., 2019), improves both the exploration of the state space and the convergence speed at solving tasks with sparse terminal rewards. Maximizing the mutual information between the visited states and the goal states, I(S; G), results in a natural exploration of the environment while learning to reach different goal states (Warde-Farley et al., 2018; Pong et al., 2019). The mutual information can be written as I(S; G) = h(G) − h(G|S); therefore, maximizing the mutual information is equivalent to maximizing the entropy of the goal state while reducing the conditional entropy (conditioned on the goal state). The first term encourages the agent to choose its own goal states as diverse as possible, thereby improving exploration, and the second term forces the agent to reach the different goals it has specified for itself, i.e., training a goal-conditioned policy π(·|s, g). Instead of maximizing the mutual information, we propose to maximize the entropy of the visited states directly, i.e., maximizing h(S) = h(Z) + h(S|Z) − h(Z|S), where Z is a random variable that represents the reference points of promising areas for exploration. Therefore, in our formulation we have an extra term, h(S|Z), which encourages maximizing the entropy of the state conditioned on the reference points. This extra term, implemented by the proposed reward function, helps the agent explore better in the vicinity of the references. We call our method Skew-Explore since, similarly to Skew-Fit introduced by Pong et al. (2019), it skews the distribution of the references toward the less visited states, but instead of directly reaching the goals, it explores their surrounding areas. We experimentally demonstrate that the new reward function enables an agent to explore the state space more efficiently, covering larger areas in less time compared to earlier methods. Furthermore, we demonstrate that our RL agent is capable of solving long-term sequential decision-making problems with sparse rewards faster.
We apply the method to three simulated tasks, including finding a trajectory of a YuMi robot end-effector that opens the door of a box, presses a button inside the box, and closes the door. In this case, the sparse reward is given only when the button has been pressed and the door closed, i.e., at the end of about one minute of continuous interaction with the environment. To validate the appropriateness of the trajectory found in simulation, we deployed it on a real YuMi robot, as shown in Figure 1. The main contributions of this paper can be summarized as (1) introducing a novel reward function which increases the entropy of the history states much faster compared to prior work, and (2) experimentally demonstrating the superiority of the proposed algorithm on three different sparse-reward sequential decision-making problems. 2 RELATED WORK. Prior works have studied different algorithms for addressing the exploration problem. In this section, we summarize related work in the domain where rewards from the environment are sparse or absent. Intrinsic Reward: One way to encourage exploration is to define an intrinsically motivated reward, including methods that assimilate the definition of curiosity in psychology (Oudeyer et al., 2007; Pathak et al., 2017). These methods have found success in domains like video games (Ostrovski et al., 2017; Burda et al., 2018). In these approaches, the "novelty", "curiosity", or "surprise" of a state is computed as an intrinsic reward using mechanisms such as state-visitation counts and prediction error (Schmidhuber, 1991; Stadie et al., 2015; Achiam & Sastry, 2017; Pathak et al., 2017). By considering this information, the agent is encouraged to search for areas that are less visited or have complex dynamics. However, as pointed out by Ecoffet et al. (2019), an agent driven by intrinsic reward may suffer from detaching from the frontiers of high-intrinsic-reward areas. Due to catastrophic forgetting, it may not be able to go back to previous areas that have not yet been fully explored (Kirkpatrick et al., 2017; Ellefsen et al., 2015). Our method is able to keep tracking the novelty frontier and trains a policy to explore different areas of the frontier. Diverse Skill/Option Discovery: Methods that aim to learn a set of behaviours that are distinct from each other allow the agent to interact with the environment without rewards for a particular task. Gregor et al. (2016) introduced an option discovery technique based on maximizing the mutual information between the options and the final states of the trajectories. Eysenbach et al. (2018), Florensa et al. (2017a), and Savinov et al. (2018) proposed to learn a fixed set of skills by maximizing the mutual information through an internal objective computed using a discriminator. Achiam et al. (2018) extended the prior works by considering whole trajectories and introduced a curriculum learning approach that gradually increases the number of skills to be learned. In these works, exploration is encouraged implicitly through learning diverse skills. However, it is difficult to control the direction of exploration. In our method, we maintain a proposing module which tracks the global information of the states visited so far and keeps proposing reference points that guide the agent to the more promising areas for exploration.
Self-Goal Proposing: Self-goal proposing methods are often combined with a goal-conditioned policy (Kaelbling, 1993; Andrychowicz et al., 2017), where a goal (or task) generation model is trained jointly with a goal-reaching policy. The agent receives rewards for completing the internal tasks, which makes it possible to explore the state space without any supervision from the environment. Sukhbaatar et al. (2017) described a scheme with two agents: the first proposes tasks by performing a sequence of actions, and the other repeats the actions in reverse order. Held et al. (2018) introduced a method that automatically labels and proposes goals at the appropriate level of difficulty using adversarial training. Similar works are proposed by Colas et al. (2018), Veeriah et al. (2018), and Florensa et al. (2017b), where goals are selected based on learning progress. Warde-Farley et al. (2018) trained a goal-conditioned policy by maximizing the mutual information between the goal states and the achieved states; the goals are selected from the agent's recent experience using several strategies. Later, Pong et al. (2019) applied a similar idea using mutual information: they maximize the entropy of a goal-sampling distribution. The focus of these methods is on learning a policy that can reach diverse goals. Although, by gradually increasing the scale of the goal-proposing network, the agent may eventually cover the entire state space, the exploration itself is not efficient. In our work, we adopt the same idea of maximizing the entropy of the goal-sampling distribution as Pong et al. (2019). However, instead of using a goal-conditioned policy, we introduce a reference point-conditioned policy which greatly increases the efficiency of exploration. 3 SKEW-EXPLORE: SEARCHING FOR THE SPARSE REWARD. We discuss the policy learning problem in continuous state and action spaces, which we model as an infinite-horizon Markov decision process (MDP). The MDP is fully characterized by a tuple (S, A, p_a(s, s′), R′_a(s, s′)), where the state space S and the action space A are subsets of R^n, and the unknown transition probability p : S × A × S → [0, ∞) indicates the probability density function of the next state s′ given the current state s ∈ S and the action a ∈ A. For each transition, the associated environment E emits an extrinsic reward according to a function R′ : S × A → R. The objective of the agent is to maximize the discounted return $R = \sum_{t_s=0}^{\infty} \gamma^{t_s} r_{t_s}$, where γ is a discount factor and r_{t_s} is the reward received at each step t_s. In this study, we consider an agent interacting in an environment E with sparse reward. The sparse reward r is modelled as a truncated Gaussian function with a narrow range. From previous interactions, the agent holds an interaction set I_t, which contains transition triples (s_j, a_j, s_{j+1}), ∀j ∈ {1, ..., T−1}. We also extract the states s_j from I_t to form a history state set S_t, which contains all states visited by the agent until iteration t. The objective of our method is to find an arbitrary external goal in a continuous state space and converge to a policy that maximizes R as fast as possible. This involves two processes: 1) find the external reward through efficient exploration; 2) converge to a policy that maximizes R once the external reward is found.
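As an illustration of the sparse reward just described, a truncated Gaussian around the goal state could be written as below; the goal location, width, and truncation radius are assumed values for the sketch, not the paper's settings.

import numpy as np

# Illustrative sketch of a sparse reward modelled as a truncated Gaussian:
# nonzero only inside a narrow range around the (unknown to the agent) goal.
def sparse_reward(s, goal, sigma=0.05, radius=0.1):
    d = np.linalg.norm(s - goal)
    return float(np.exp(-0.5 * (d / sigma) ** 2)) if d < radius else 0.0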
We can use the entropy of the history state set as a neutral objective to encourage exploration, since an agent that maximizes this objective should have visited all valid states uniformly. To describe this mathematically, we define a random variable S to represent the history states that the agent has visited. The distribution of S is estimated from the history state set S_t. Our goal is to encourage exploration by maximizing the entropy h(S) of the history states. However, using the entropy directly as the intrinsic reward may suffer from problems similar to other intrinsically motivated methods (Schmidhuber, 1991; Stadie et al., 2015; Achiam & Sastry, 2017; Pathak et al., 2017): as the reward of the same state changes over time, the agent risks detaching from the frontiers of high-intrinsic-reward areas. We introduce the concept of a novelty frontier reference point, which can be sampled from a distribution that represents the novelty frontier (Ecoffet et al., 2019). The novelty frontier defined in our work represents the areas near the states with lower density under the distribution p(s). The frontier reference points are sampled after the distribution of the novelty frontier is updated. We define a random variable Z to represent all the history frontier reference points, with probability density p(z) estimated from a set Z_t that contains all novelty frontier reference points until iteration t. The conditional probability p(s|z) defines the behaviour of the agent with respect to each reference point. In this work, we model this behaviour using a state distribution function K_z(s − z) parameterized by the displacement between the state and the reference point. The function K_z needs to be chosen carefully, as it should satisfy our expectation of the policy behaviour and also provide an informative reward signal to train the policy. Mathematically, we can rewrite p(s) as $p(s) = \int p(s \mid z)\, p(z)\, dz = \int K_z(s - z)\, p(z)\, dz$. Generally, K_z(·) can be different for different z. However, to reduce the complexity of learning, we constrain K_z(·) to be the same for every z, meaning K_z(s − z) = K(s − z). The definition of K(·) satisfies the definition of a kernel function. Using K(s − z), p(s) can then be represented as $p(s) = \int K(s - z)\, p(z)\, dz = (K * p)(s)$. By the law of convolution of probability distributions, we obtain S = Z + N, where N is a random variable characterized by the density function K(·). With this setup, we are able to analyze our method's performance using information theory. By considering the entropy's relationship with mutual information, h(S) = h(S|Z) + I(S; Z), we obtain the final decomposition of our objective under the novelty frontier reference point-conditioned policy framework: h(S) = h(Z) + h(S|Z) − h(Z|S). (1) Eq. 1 indicates that, in order to maximize h(S), we can individually maximize/minimize each term while keeping the other terms fixed. In the following sections, we explain the optimization process in detail. 3.1 MAXIMIZING h(Z): OBTAINING AN EXPANDING SET OF NOVELTY FRONTIER REFERENCE POINTS. As introduced above, h(Z) is the entropy estimated from the novelty frontier reference point set Z_t. To increase h(Z), we need to add a new reference point to Z_t such that the entropy estimated from Z_{t+1} is larger than the entropy estimated from Z_t.
In our method, the frontier reference points are sampled from the novelty frontier distribution, which represents the areas visited least often according to the current history states. Pong et al. (2019) proposed a method to skew the distribution of the history states using importance sampling, such that states with lower density are proposed more often. In our work, we use a similar approach to estimate the novelty frontier distribution. There are three steps in our process. In the first step, we estimate p(s) from S_t using a density estimator, e.g., Kernel Density Estimation (KDE). In the second step, we sample Q states {s_0, ..., s_Q} from p(s) and compute a normalized weight for each state using Eq. 2: $w_i = \frac{1}{Y_\alpha}\, p(s_i)^\alpha,\ \ \alpha \in (-\infty, 0),\qquad Y_\alpha = \sum_{n=1}^{N} p(s_n)^\alpha$, (2) where Y_α is a normalizing constant. A state with lower p(s) has a higher weight and vice versa. Finally, we utilize a generative model training scheme T_g(·, ·) (e.g., weighted KDE), together with the sampled states and weights, to obtain a skewed distribution p_skewed(s) = T_g({s_0, ..., s_n}, {w_0, ..., w_n}) that represents the novelty frontier distribution. If Q is big enough, by choosing α appropriately we are able to expand our frontiers after each iteration. As a consequence, the distribution estimated from Z_t becomes more and more uniform and its range larger and larger, like the annual rings of a tree. The entropy of a continuous uniform distribution U(p, q) is ln(q − p), and if the distribution has a larger range, its entropy is larger as well. Fig. 2 illustrates the estimated frontier distribution skewed from p(s). 3.2 MAXIMIZING h(S|Z) − h(Z|S): INCREASING THE EXPLORATION RANGE AROUND REFERENCE POINTS. The conditional entropies h(S|Z) and h(Z|S) are highly correlated, so maximizing/minimizing them individually is difficult. Therefore, in this section we consider maximizing h(S|Z) − h(Z|S) as a whole. Using the relation S = Z + N, we rewrite the expression as h(S|Z) − h(Z|S) = h(Z + N|Z) − h(Z|Z + N), which can be further simplified (see Appendix D) to h(S|Z) − h(Z|S) ≥ h(N) − h(Z). This implies that there is a lower bound on the expression h(S|Z) − h(Z|S). For a fixed h(Z), we can maximize the lower bound h(N) − h(Z) by increasing h(N). h(N) is related to the shape and variance of the exploration distribution near the reference point. In our method, we model N as a Gaussian distribution with zero mean. Ideally, we would like as large a variance as possible. However, increasing the variance also increases learning difficulty, as we need longer trajectories to evaluate performance and more samples to update the network. Therefore, we use the variance to control the trade-off between exploration efficiency and learning efficiency.
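A minimal sketch of the three-step frontier estimation of Section 3.1 is given below, assuming KDE as both the density estimator and the generative model T_g; the bandwidth, sample count, and α are illustrative, and the weighted KDE is approximated here by resampling in proportion to the weights of Eq. 2.

import numpy as np
from sklearn.neighbors import KernelDensity

# Sketch of the skewed frontier distribution; all hyperparameters assumed.
def skewed_frontier(history_states, n_samples=500, alpha=-1.0, bw=0.2):
    kde = KernelDensity(bandwidth=bw).fit(history_states)  # step 1: fit p(s)
    samples = kde.sample(n_samples)                        # step 2: sample Q states
    p = np.exp(kde.score_samples(samples))                 # densities p(s_i)
    w = p ** alpha                                         # Eq. 2: w_i ∝ p(s_i)^α
    w /= w.sum()
    # Step 3: a weighted "KDE" via resampling by weight; low-density states
    # are drawn more often, skewing mass toward the novelty frontier.
    idx = np.random.choice(n_samples, size=n_samples, p=w)
    return KernelDensity(bandwidth=bw).fit(samples[idx])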
This paper studies the problem of exploration in reinforcement learning. The key idea is to learn a goal-conditioned agent and explore by selecting goals at the frontier of previously visited states. This frontier is estimated using an extension of prior work (Pong et al., 2019). The method is evaluated on two continuous control environments (2D navigation, manipulation), where it seems to outperform baselines.
SP:1e4d48aca131f5ff12775ba51dd1176397038d59
Skew-Explore: Learn faster in continuous spaces with sparse rewards
This paper proposes a new exploration algorithm based on a new way of generating intrinsic rewards. Specifically, the authors propose to maintain a "novelty frontier" consisting of states that have low likelihood under a density model trained on their replay buffer. The authors propose to sample from the novelty frontier using a scheme similar to a prior method called Skew-Fit, but replace the VAE with a kernel-based density model. To construct an exploration reward, the authors estimate the KL divergence between the resulting policy state distribution and the desired state distribution, where the desired state distribution is a Gaussian centered around a point sampled from the novelty frontier.
SP:1e4d48aca131f5ff12775ba51dd1176397038d59
Improving Generalization in Meta Reinforcement Learning using Learned Objectives
1 INTRODUCTION. The process of evolution has equipped humans with incredibly general learning algorithms. They enable us to solve a wide range of problems, even in the absence of a large number of related prior experiences. The algorithms that give rise to these capabilities are the result of distilling the collective experiences of many learners throughout the course of natural evolution. By essentially learning from learning experiences in this way, the resulting knowledge can be compactly encoded in the genetic code of an individual to give rise to the general learning capabilities that we observe today. In contrast, Reinforcement Learning (RL) in artificial agents rarely proceeds in this way. The learning rules that are used to train agents are the result of years of human engineering and design (e.g., Williams (1992); Wierstra et al. (2008); Mnih et al. (2013); Lillicrap et al. (2016); Schulman et al. (2015a)). Correspondingly, artificial agents are inherently limited by the ability of the designer to incorporate the right inductive biases in order to learn from previous experiences. Several works have proposed an alternative framework based on meta reinforcement learning (Schmidhuber, 1994; Wang et al., 2016; Duan et al., 2016; Finn et al., 2017; Houthooft et al., 2018; Clune, 2019). Meta-RL distinguishes between learning to act in the environment (the reinforcement learning problem) and learning to learn (the meta-learning problem). Hence, learning itself is now a learning problem, which in principle allows one to leverage prior learning experiences to meta-learn general learning rules that surpass human-engineered alternatives. However, while prior work found that learning rules could be meta-learned that generalize to slightly different environments or goals (Finn et al., 2017; Plappert et al., 2018; Houthooft et al., 2018), generalization to entirely different environments remains an open problem. In this paper we present MetaGenRL (code is available at http://louiskirsch.com/code/metagenrl), a novel meta reinforcement learning algorithm that meta-learns learning rules that generalize to entirely different environments. MetaGenRL is inspired by the process of natural evolution, as it distills the experiences of many agents into the parameters of an objective function that decides how future individuals will learn. Similarly to Evolved Policy Gradients (EPG; Houthooft et al. (2018)), it meta-learns low-complexity neural objective functions that can be used to train complex agents with many parameters. However, unlike EPG, it is able to meta-learn using second-order gradients, which offers several advantages, as we will demonstrate. We evaluate MetaGenRL on a variety of continuous control tasks and compare to RL2 (Wang et al., 2016; Duan et al., 2016) and EPG, in addition to several human-engineered learning algorithms. Compared to RL2, we find that MetaGenRL does not overfit and is able to train randomly initialized agents using meta-learned learning rules on entirely different environments. Compared to EPG, we find that MetaGenRL is more sample efficient and outperforms it significantly under a fixed budget of environment interactions. The results of an ablation study and additional analysis provide further insight into the benefits of our approach. 2 PRELIMINARIES.
Notation. We consider the standard MDP reinforcement learning setting defined by a tuple $e = (S, A, P, \rho_0, r, \gamma, T)$ consisting of states $S$, actions $A$, the transition probability distribution $P : S \times A \times S \to \mathbb{R}^+$, an initial state distribution $\rho_0 : S \to \mathbb{R}^+$, the reward function $r : S \times A \to [-R_{\max}, R_{\max}]$, a discount factor $\gamma$, and the episode length $T$. The objective for the probabilistic policy $\pi_\phi : S \times A \to \mathbb{R}^+$ parameterized by $\phi$ is to maximize the expected discounted return:

$\mathbb{E}_\tau \left[ \sum_{t=0}^{T-1} \gamma^t r_t \right]$, where $s_0 \sim \rho_0(s_0)$, $a_t \sim \pi_\phi(a_t \mid s_t)$, $s_{t+1} \sim P(s_{t+1} \mid s_t, a_t)$, $r_t = r(s_t, a_t)$, (1)

with $\tau = (s_0, a_0, r_0, s_1, \ldots, s_{T-1}, a_{T-1}, r_{T-1})$. Human-Engineered Gradient Estimators. A popular gradient-based approach to maximizing Equation 1 is REINFORCE (Williams, 1992). It directly differentiates Equation 1 with respect to $\phi$ using the likelihood ratio trick to derive gradient estimates of the form:

$\nabla_\phi \mathbb{E}_\tau [ L_{\mathrm{REINF}}(\tau, \pi_\phi) ] := \mathbb{E}_\tau \left[ \nabla_\phi \sum_{t=0}^{T-1} \log \pi_\phi(a_t \mid s_t) \cdot \sum_{t'=t}^{T-1} \gamma^{t'-t} r_{t'} \right]$. (2)

Although this basic estimator is rarely used in practice, it has become a building block for an entire class of policy-gradient algorithms of this form. For example, a popular extension from Schulman et al. (2015b) combines REINFORCE with a Generalized Advantage Estimate (GAE) to yield the following policy gradient estimator:

$\nabla_\phi \mathbb{E}_\tau [ L_{\mathrm{GAE}}(\tau, \pi_\phi, V) ] := \mathbb{E}_\tau \left[ \nabla_\phi \sum_{t=0}^{T-1} \log \pi_\phi(a_t \mid s_t) \cdot A(\tau, V, t) \right]$, (3)

where $A(\tau, V, t)$ is the GAE and $V : S \to \mathbb{R}$ is a value function estimate. Several other recent extensions include TRPO (Schulman et al., 2015a), which discourages bad policy updates using trust regions and iterative off-policy updates, and PPO (Schulman et al., 2017), which offers similar benefits using only first-order approximations. Parametrized Objective Functions. In this work we note that many of these human-engineered policy gradient estimators can be viewed as specific implementations of a general objective function $L$ that is differentiated with respect to the policy parameters:

$\nabla_\phi \mathbb{E}_\tau [ L(\tau, \pi_\phi, V) ]$. (4)

Hence, it becomes natural to consider a generic parametrization of $L$ that, for various choices of parameters $\alpha$, recovers some of these estimators. In this paper, we will consider neural objective functions where $L_\alpha$ is implemented by a neural network. Our goal is then to optimize the parameters $\alpha$ of this neural network in order to give rise to a new learning algorithm that best maximizes Equation 1 on an entire class of (different) environments. 3 META-LEARNING NEURAL OBJECTIVES. In this work we propose MetaGenRL, a novel meta reinforcement learning algorithm that meta-learns neural objective functions of the form $L_\alpha(\tau, \pi_\phi, V)$. MetaGenRL makes use of value functions and second-order gradients, which makes it more sample efficient compared to prior work (Duan et al., 2016; Wang et al., 2016; Houthooft et al., 2018). Moreover, as we will demonstrate, MetaGenRL meta-learns objective functions that generalize to vastly different environments. Our key insight is that a differentiable critic $Q_\theta : S \times A \to \mathbb{R}$ can be used to measure the effect of locally changing the objective function parameters $\alpha$ based on the quality of the corresponding policy gradients. This enables a population of agents to use and improve a single parameterized objective function $L_\alpha$ through interacting with a set of (potentially different) environments.
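To make the objective-function view concrete, the following hedged PyTorch sketch writes the REINFORCE and GAE estimators as generic objectives that are then differentiated with respect to the policy parameters; the tensor shapes (one episode, precomputed log-probabilities and advantages) are assumptions for illustration only.

```python
import torch

def reinforce_objective(log_probs, rewards, gamma=0.99):
    """L_REINF: sum_t log pi(a_t|s_t) * sum_{t'>=t} gamma^{t'-t} r_{t'}.
    `log_probs` and `rewards` are 1-D tensors over one episode."""
    T = rewards.shape[0]
    returns = torch.zeros(T)
    running = 0.0
    for t in reversed(range(T)):         # discounted reward-to-go
        running = rewards[t] + gamma * running
        returns[t] = running
    return -(log_probs * returns).sum()  # negated: optimizers minimize

def gae_objective(log_probs, advantages):
    """L_GAE: sum_t log pi(a_t|s_t) * A(tau, V, t), advantages precomputed."""
    return -(log_probs * advantages.detach()).sum()

# Differentiating either objective w.r.t. the policy parameters phi
# (loss.backward()) recovers the corresponding policy-gradient estimator.
```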
During evaluation (meta-test time), the meta-learned objective function can then be used to train a randomly initialized RL agent in a new environment. 3.1 FROM DDPG TO GRADIENT-BASED META-LEARNING OF NEURAL OBJECTIVES. We will formally introduce MetaGenRL as an extension of the DDPG actor-critic framework (Silver et al., 2014; Lillicrap et al., 2016). In DDPG, a parameterized critic of the form $Q_\theta : S \times A \to \mathbb{R}$ transforms the non-differentiable RL reward maximization problem into a myopic value maximization problem for any $s_t \in S$. This is done by alternating between optimization of the critic $Q_\theta$ and the (here deterministic) policy $\pi_\phi$. The critic is trained to minimize the TD-error by following

$\nabla_\theta \sum_{(s_t, a_t, r_t, s_{t+1})} (Q_\theta(s_t, a_t) - y_t)^2$, where $y_t = r_t + \gamma \cdot Q_\theta(s_{t+1}, \pi_\phi(s_{t+1}))$, (5)

and the dependence of $y_t$ on the parameter vector $\theta$ is ignored. The policy $\pi_\phi$ is improved to increase the expected return from arbitrary states by following the gradient $\nabla_\phi \sum_{s_t} Q_\theta(s_t, \pi_\phi(s_t))$. Both gradients can be computed entirely off-policy by sampling trajectories from a replay buffer. MetaGenRL builds on this idea of differentiating the critic $Q_\theta$ with respect to the policy parameters. It incorporates a parameterized objective function $L_\alpha$ that is used to improve the policy (i.e., by following the gradient $\nabla_\phi L_\alpha$), which adds one extra level of indirection: the critic $Q_\theta$ improves $L_\alpha$, while $L_\alpha$ improves the policy $\pi_\phi$. By first differentiating with respect to the objective function parameters $\alpha$, and then with respect to the policy parameters $\phi$, the critic can be used to measure the effect of updating $\pi_\phi$ using $L_\alpha$ on the estimated return:

$\nabla_\alpha Q_\theta(s_t, \pi_{\phi'}(s_t))$, where $\phi' = \phi - \nabla_\phi L_\alpha(\tau, x(\phi), V)$. (6)

(In the case of a probabilistic policy $\pi_\phi(a_t \mid s_t)$, this becomes an expectation under $\pi_\phi$ and a reparameterizable form is required (Williams, 1988; Kingma & Welling, 2014; Rezende et al., 2014); here we focus on learning deterministic target policies.) This constitutes a type of second-order gradient $\nabla_\alpha \nabla_\phi$ that can be used to meta-train $L_\alpha$ to provide better updates to the policy parameters in the future. In practice we will use batching to optimize Equation 6 over multiple trajectories $\tau$. Similarly to the policy-gradient estimators from Section 2, the objective function $L_\alpha(\tau, x(\phi), V)$ receives as inputs an episode trajectory $\tau = (s_{0:T-1}, a_{0:T-1}, r_{0:T-1})$, the value function estimates $V$, and an auxiliary input $x(\phi)$ (previously $\pi_\phi$) that can be differentiated with respect to the policy parameters. The latter is critical to be able to differentiate with respect to $\phi$, and in the simplest case it consists of the action as predicted by the policy. While Equation 6 is used for meta-learning $L_\alpha$, the objective function $L_\alpha$ itself is used for policy learning by following $\nabla_\phi L_\alpha(\tau, x(\phi), V)$. See Figure 1 for an overview.

Algorithm 1 MetaGenRL: Meta-Training
  Require: $p(e)$, a distribution of environments
  $P \Leftarrow \{(e_1 \sim p(e), \phi_1, \theta_1, B_1 \leftarrow \emptyset), \ldots\}$  ▷ randomly initialize population of agents
  Randomly initialize objective function $L_\alpha$
  while $L_\alpha$ has not converged do
    for $(e, \phi, \theta, B) \in P$ do  ▷ for each agent $i$ in parallel
      if extend replay buffer $B$ then extend $B$ using $\pi_\phi$ in $e$
      Sample trajectories from $B$
      Update critic $Q_\theta$ using the TD-error
      Update policy by following $\nabla_\phi L_\alpha$
      Compute objective function gradient $\Delta_i$ for agent $i$ according to Equation 6
    Sum gradients $\sum_i \Delta_i$ to update $L_\alpha$
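A hedged sketch of this second-order update, assuming PyTorch modules for the policy, critic, and neural objective; `functional_policy` stands in for a stateless call API (in the spirit of torch.func.functional_call) and, like the other names here, is an assumption rather than the authors' code.

```python
import torch

def meta_update(pi, q, L_alpha, meta_opt, tau, V, states, inner_lr=1e-3):
    """One meta-gradient step on the objective parameters alpha (Eq. 6)."""
    phi = list(pi.parameters())
    # Inner step: phi' = phi - grad_phi L_alpha(tau, x(phi), V).
    # create_graph=True keeps phi' differentiable w.r.t. alpha.
    inner_loss = L_alpha(tau, pi, V)
    grads = torch.autograd.grad(inner_loss, phi, create_graph=True)
    phi_prime = [p - inner_lr * g for p, g in zip(phi, grads)]

    # Outer step: increase Q_theta(s, pi_{phi'}(s)) by changing alpha only;
    # meta_opt is assumed to hold the parameters of L_alpha.
    actions = functional_policy(pi, phi_prime, states)  # assumed stateless API
    meta_loss = -q(states, actions).mean()
    meta_opt.zero_grad()
    meta_loss.backward()  # second-order gradient flows through phi' to alpha
    meta_opt.step()
```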
MetaGenRL consists of two phases: during meta-training, we alternate between critic updates, objective function updates, and policy updates to meta-learn an objective function $L_\alpha$, as described in Algorithm 1. During meta-testing (Algorithm 2), we take the learned objective function $L_\alpha$ and keep it fixed while training a randomly initialized policy in a new environment to assess its performance. We note that the inputs to $L_\alpha$ are sampled from a replay buffer rather than solely using on-policy data. If $L_\alpha$ were to represent a REINFORCE-type objective, this would mean that differentiating $L_\alpha$ yields biased policy gradient estimates. In our experiments we will find that the gradients from $L_\alpha$ work much better than those of a biased off-policy REINFORCE algorithm and of an importance-sampled unbiased REINFORCE algorithm, while also improving over the popular on-policy REINFORCE and PPO algorithms.
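For concreteness, a minimal sketch of the meta-test phase under the same assumptions as above; `rollout` and `sample_batch` are hypothetical helpers for collecting episodes and sampling from the replay buffer.

```python
import torch

def meta_test(env, make_policy, L_alpha, V, num_iters=10_000, lr=1e-3):
    """Train a fresh agent with the frozen, meta-learned objective L_alpha."""
    pi = make_policy()                    # randomly initialized policy
    opt = torch.optim.Adam(pi.parameters(), lr=lr)
    buffer = []
    for _ in range(num_iters):
        buffer.append(rollout(env, pi))   # hypothetical (s, a, r) collector
        tau = sample_batch(buffer)        # hypothetical replay sampler
        loss = L_alpha(tau, pi, V)        # alpha is fixed at meta-test time
        opt.zero_grad()
        loss.backward()                   # only the policy parameters update
        opt.step()
    return pi
```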
This paper presents a novel meta reinforcement learning algorithm capable of meta-generalizing to unseen tasks. It makes use of a learned objective function in combination with DDPG-style updates. Results are presented for different combinations of meta-training and meta-testing on the Lunar Lander, Half-Cheetah, and Hopper environments, with a focus on meta-generalization to vastly different environments.
The paper proposes a meta reinforcement learning algorithm called MetaGenRL, which meta-learns learning rules that generalize to different environments. The paper starts from the important observation that the learning rules used to train agents in reinforcement learning are the result of human engineering and design; instead, it demonstrates how to use second-order gradients to learn the learning rules themselves. Learning learning rules has been proposed before in general, and this paper is another attempt to further generalize what can be captured by them. The idea is verified on three MuJoCo domains, where the neural objective function is learned on one or two domains and then deployed on a new, unseen domain. The experiments show that the learned neural objective can generalize to new environments that differ from the meta-training environments.
On summarized validation curves and generalization
The validation curve is widely used for model selection and hyper-parameter search, with the curve usually summarized over all the training tasks. However, this summarization tends to lose the intricacies of the per-task curves, and it cannot reflect whether all the tasks are at their validation optimum even if the summarized curve is. In this work, we explore this loss of information, how it affects the model at testing, and how to detect it using interval plots. We propose two techniques as a proof of concept of the potential gain in test performance when per-task validation curves are accounted for. Our experiments on three large datasets show up to a 2.5% increase (averaged over multiple trials) in the test accuracy rate when model selection uses the per-task validation maximums instead of the summarized validation maximum. This potential increase is not the result of any modification to the model but rather of the point of training from which the weights were selected. This presents an exciting direction for new training and model selection techniques that rely on more than just averaged metrics. 1 INTRODUCTION. A validation set, separate from the test set, is the de facto standard for training deep learning models through early stopping. This non-convergent approach (Finnoff et al., 1993) identifies the best model in multi-task/label settings based on an expected error across all tasks. Calculating metrics on the validation set can estimate the model's generalization capability at every stage of training, and monitoring the summarized validation curve over time aids the detection of overfitting. It is common to see the use of validation metrics as a way to stop training and/or load the best model for testing, as opposed to training a model for N epochs and then testing. While current works have always cautioned about the representativeness of the validation data being used, the curves themselves haven't been addressed much. In particular, there hasn't been much attention on the summarized nature of the curves and their ability to represent the generalization of the constituent tasks. Tasks can vary in difficulty and even have a dependence on each other (Graves, 2016; Alain & Bengio, 2016). An example by Lee et al. (2016) is to suppose some task a is to predict whether a visual instance 'has wheels' or not, and task b is to predict if a given visual object 'is fast'; not only is one easier, but there is also a dependence between them. So there is a possibility that easier tasks reach their best validation metric before the rest and may start overfitting if training were to be continued. This isn't reflected very clearly with the use of a validation metric that is averaged over all tasks. As a larger number of underfit tasks would skew the average, the overall optimal validation point gets shifted to a later time-step (epoch), when the model could be worse at the easier tasks. Vice versa, the optimal epoch gets shifted earlier due to a larger, easier subset that is overfit by the time the harder tasks reach their individual optimal epochs. We term this mismatch between the overall and task optimal epochs a 'temporal discrepancy'. In this work, we explore and try to mitigate this discrepancy between tasks. We show in this paper that early stopping based only on the expected error over tasks leaves us blind to the performance being sacrificed per task.
The work is organized in the following manner: in §2, we explore existing work that deals with methods for incorporating task difficulty (which could be causing this discrepancy) into training. The rest of the sections, along with our contributions, can be summarized as:
1. We present a method to easily visualize and detect the discrepancy through interval plots in §3.
2. We formulate techniques that quantify this discrepancy by also considering the per-task validation metrics in model selection in §4.
3. We explore the presence of the temporal discrepancy on three image datasets and test the aforementioned techniques to assess the change in performance in §5.
4. To the best of our knowledge, there has not been a study like this into the potential of per-task validation metrics to select an ensemble of models.
2 RELATED WORK. Training multiple related tasks together creates a shared representation that can generalize better on individual tasks. The rising prominence of multi-task learning can be attributed to Caruana (1997). It has been acknowledged that some tasks are easier to learn than others, and plenty of works have tried to solve this issue through approaches that slow down the training of easier tasks. In other words, tasks are assigned a priority in the learning phase based on their difficulty, determined through some metric. This assignment of priority implicitly tries to solve the temporal discrepancy without formally addressing its presence. Task prioritization can take the form of gradient magnitudes, parameter count, or update frequencies (Guo et al., 2018). We can group existing solutions into task prioritization as a hyperparameter or task prioritization during training (aka self-paced learning). The post-training brute force and clustering methods we propose do not fit into these categories, as we believe they have not been done before. Instead of adjusting training or retraining, these methods operate on a model which has already been trained. Task prioritization as a hyperparameter is a way to handle per-task overfitting that is almost the subconscious approach for most practitioners. This would include data augmentation and over/under-sampling. An example case is in Kokkinos (2017), where they use manually tuned task weights in order to improve performance. Task prioritization during training covers approaches where tasks dynamically change priority or are regularized in some way. For example, Guo et al. (2018) take an approach that changes task weights during training based on multiple metrics such as error, perceived difficulty, and learnable parameters. The idea is that some tasks need to have a high weight at the start and a low weight later in training. In a similar direction, GradNorm (Chen et al., 2018) aims to balance task weights by normalizing the gradients across tasks. Using relationships between tasks during training is another direction. Ruder (2017) discussed negative transfer, where sharing information with unrelated tasks might actually hurt performance. Work by Lee et al. (2016) incorporated a directed graph of relationships between tasks in order to enforce sharing between related tasks when reweighting tasks. Task clustering has been performed outside of neural networks by Evgeniou et al. (2005); Evgeniou & Pontil (2004), where they regularize per-task SVMs so that the parameters of related tasks are similar.
It would be natural to use some of these methods as a baseline for our work. However, we think it would not be an equitable comparison as:
• These baseline methods are applied during training, whereas ours is a post-training analysis.
• The main aspect of our analysis is only the validation metric, whereas these baselines consider a variety of different aspects of training.
• The focus of our work is on how the weights change with time, keeping all else constant, and how these changes affect the validation and test performance. The aforementioned methods modify the gradients w.r.t. several factors during training, which adds more degrees of freedom and is difficult to compare.
Regardless of task difficulty, training multiple tasks jointly with a neural network can lead to catastrophic forgetting: it refers to how a network can lose information that it had learned for a particular task as it learns another task (McCloskey & Cohen, 1989). Multiple works have explored and tried to mitigate this phenomenon (Ratcliff, 1990; Robins, 1995; Goodrich & Arel, 2014; Kirkpatrick et al., 2017; Kemker et al., 2018; Lee et al., 2017), and it still remains an open area of research. It is highly likely that catastrophic forgetting could be causing any such temporal discrepancy; exploring the relationship between the two is a very interesting research direction and is left for future work. 3 STUDYING TEMPORAL DISCREPANCY BETWEEN TASKS. Firstly, we define what a task is, to disambiguate from its general usage in the multi-task learning literature. A 'task' is predicting a single output unit out of many, regardless of whether the training paradigm is multi-class, multi-label, or other. Tasks can be very fine-grained, such as predicting the class of an image, or much higher-level, such as image classification, segmentation, etc. While our work uses the term in the former context, our motivation and findings can be applied in the latter context (which is the broader and more common context in multi-task learning) as well. In the next two subsections, we define the term temporal discrepancy and display an example of it on CIFAR100. Then, we introduce a simple method of visualizing it on datasets with a large number of tasks that would make it difficult to analyze the per-task curves together. 3.1 TEMPORAL DISCREPANCY. A temporal discrepancy in the validation performance refers to the phenomenon where the model isn't optimal for all of its tasks simultaneously. This occurs when the difference between the overall optimal epoch determined by the summarized validation metric and the epoch at which a task achieves its best validation metric is higher than some threshold, i.e., $|t_s - t_i| > \delta$, where $t_s$ is the optimal epoch of the summarized validation curve and $t_i$ is the optimal epoch for task $i$. Figure 1 displays an example of this discrepancy on CIFAR100 (only five curves plotted for clarity). It is most evident for the labels Sea and Lamp, which undergo drops of 7.5% and 5.7% respectively in their validation accuracy from their peak epoch to $t_s$. Similarly, Snake also degrades until $t_s$ but strangely starts improving after. Conversely, Rose and Streetcar are underfit at $t_s$ as they continue to improve after it. The most noteworthy observation is that the averaged validation curve (in dotted black) completely plateaus after the 150th epoch.
There is significant variation occurring in the per-label curves, but the averaged curve is unable to represent these dynamics of the training. Selecting an optimal model off the averaged curve can be quite misleading, as it represents the entire [151, 300] interval as optimal despite the labels' validation accuracies fluctuating significantly in this interval. The test performance of individual labels can differ wildly depending on which epoch is used for loading the weights for testing and/or deployment. 3.2 INTERVAL PLOTS. It is easy to examine the per-label curves in Figure 1, as only 5% of the labels have been plotted. But when the number of tasks is high and all of them need to be plotted together to get a clearer global picture, decomposing the summarized validation curve can get very messy. Quasi-optimal validation interval plots, or interval plots for short, are a way of assessing the optimal temporality of the per-task validation performance relative to $t_s$. It is a simple visualization method that aids in determining when and/or for how long the tasks are within the acceptable limits of their best validation performance, and also which and/or how many tasks aren't within these limits near the overall optimal epoch $t_s$. Creating an interval plot involves finding a 'quasi-optimal' region for each task, i.e., a consecutive temporal interval in which a validation metric of the task fluctuates near its maximum within a set tolerance. The task validation curves are first smoothed out to reduce noise, and the time-step (epoch) at which the task achieved its optimal validation metric is determined. Then, the number of epochs before and after this task-optimal epoch in which the task metric is greater than a threshold is calculated. This duration of epochs is the interval for the task. Given a vector of validation metrics $A_i$ for a task $i$, its interval $\tau_i$ is given by

$\tau_i = [t_i - m, \ldots, t_i - 1, t_i, \ldots, t_i + n]$ such that $a_{i,j} \geq a_{i,t_i} - \epsilon \;\, \forall j \in \tau_i$, where $t_i = \arg\max A_i$, $a_{i,j} \in A_i$, and $\epsilon$ is the set tolerance.

Figure 2 plots the decomposed curves and the equivalent intervals for CIFAR100. The overall optimal epoch $t_s$ doesn't fall in the intervals of almost half the labels; these labels aren't at their potentially best validation performance at the early stopping point. Some intervals are notably small in duration, meaning those labels have a very sharp peak. This could imply that the validation performance is randomly high at that epoch, and it would be more suitable to shift the quasi-optimal region of these labels to a longer and/or later interval, which doesn't necessarily contain $t_i$, as long as the validation accuracy stays within the tolerance in that interval.
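A small sketch of this computation, assuming per-task validation accuracies stored in an array of shape (num_tasks, num_epochs); the smoothing window, tolerance eps, and threshold delta are illustrative defaults, not the paper's settings.

```python
import numpy as np

def smooth(curve, k=5):
    """Simple moving-average smoothing of a validation curve."""
    return np.convolve(curve, np.ones(k) / k, mode="same")

def quasi_optimal_interval(curve, eps=0.01):
    """Largest consecutive interval around t_i with a[j] >= a[t_i] - eps."""
    a = smooth(curve)
    t_i = int(np.argmax(a))
    lo, hi = t_i, t_i
    while lo > 0 and a[lo - 1] >= a[t_i] - eps:
        lo -= 1
    while hi < len(a) - 1 and a[hi + 1] >= a[t_i] - eps:
        hi += 1
    return lo, t_i, hi

def temporal_discrepancies(val_curves, eps=0.01, delta=10):
    """Flag tasks whose optimal epoch t_i is more than delta from t_s."""
    t_s = int(np.argmax(smooth(val_curves.mean(axis=0))))
    report = []
    for i, curve in enumerate(val_curves):
        lo, t_i, hi = quasi_optimal_interval(curve, eps)
        report.append((i, t_i, (lo, hi), abs(t_s - t_i) > delta))
    return t_s, report
```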
The paper examines the common practice of performing model selection by choosing the model that maximizes validation accuracy. In a setting where there are multiple tasks, the average validation error hides performance on individual tasks, which may be relevant. The paper casts multi-class image classification as a multi-task problem, where identifying each different class is a different task.
The model validation curve typically aggregates accuracies over all labels. This paper investigates the fine-grained per-label validation curves, showing that the optimal epoch varies by label. The paper proposes a visualization method to detect whether there is a disparity between the per-label curves and the summarized validation curve. It also proposes two methods to incorporate per-label metrics into model evaluation and selection. The experiments use three datasets: CIFAR-100, Tiny ImageNet, and PadChest.
A Non-asymptotic comparison of SVRG and SGD: tradeoffs between compute and speed
1 INTRODUCTION. Many large-scale machine learning problems, especially in deep learning, are formulated as minimizing the sum of loss functions over millions of training examples (Krizhevsky et al., 2012; Devlin et al., 2018). Computing the exact gradient over the entire training set is intractable for these problems. Instead of using full-batch gradients, the variants of stochastic gradient descent (SGD) (Robbins & Monro, 1951; Zhang, 2004; Bottou, 2010; Sutskever et al., 2013; Duchi et al., 2011; Kingma & Ba, 2014) evaluate noisy gradient estimates from small mini-batches of randomly sampled training points at each iteration. The mini-batch size is often independent of the training set size, which allows SGD to immediately adapt the model parameters before going through the entire training set. Despite its simplicity, SGD works very well, even on non-convex, non-smooth deep learning problems (He et al., 2016; Vaswani et al., 2017). However, the optimization performance of the stochastic algorithm near local optima is significantly limited by the mini-batch sampling noise, controlled by the learning rate and the mini-batch size. The sampling variance and the slow convergence of SGD have been studied extensively in the past (Chen et al., 2016; Li et al., 2017; Toulis & Airoldi, 2017). To ensure convergence, machine learning practitioners have to either increase the mini-batch size or decrease the learning rate toward the end of training (Smith et al., 2017; Ge et al., 2019). Recently, several clever variance reduction methods (Roux et al., 2012; Defazio et al., 2014; Wang et al., 2013; Johnson & Zhang, 2013) were proposed to alleviate the noisy gradient problem by using control variates to achieve unbiased, lower-variance gradient estimators. In particular, the variants of Stochastic Variance Reduced Gradient (SVRG) (Johnson & Zhang, 2013), k-SVRG (Raj & Stich, 2018), L-SVRG (Kovalev et al., 2019) and Free-SVRG (Sebbouh et al., 2019) construct control variates from previous stale snapshot model parameters. These methods enjoy superior asymptotic performance in convex optimization compared to standard SGD. The control-variate techniques are shown to improve the convergence rate of SGD from sub-linear to linear. These variance reduction methods can also be combined with momentum (Allen-Zhu, 2017) and preconditioning methods (Moritz et al., 2016) to obtain faster convergence. Despite their strong theoretical guarantees, SVRG-like algorithms have seen limited success in training deep learning models (Defazio & Bottou, 2018). Traditional results from stochastic optimization focus on asymptotic analysis, but in practice, most deep neural networks are only trained for hundreds of epochs due to the high computational cost. To address the gap between the asymptotic benefit of SVRG and the practical computational budget of training deep learning models, we provide a non-asymptotic study of SVRG algorithms under a noisy least squares regression model. Although optimizing least squares regression is a basic problem, it has been shown to characterize the learning dynamics of many realistic deep learning models (Zhang et al., 2019; Lee et al., 2019). Recent works suggest that neural network learning behaves very differently in the underparameterized regime vs. the overparameterized regime (Ma et al., 2018; Vaswani et al.
, 2019), characterized by whether the learnt model can achieve zero expected loss. We account for both training regimes in the analysis by assuming a linear target function and noisy labels. In the presence of label noise, the loss is lower bounded by the label variance. In the absence of the noise, the linear predictor can fit each training example perfectly. We summarize the main contributions as follows:
• We show the exact expected loss of SVRG and SGD along an optimization trajectory as a function of iterations and computational cost.
• Our non-asymptotic analysis provides an insightful comparison of SGD and SVRG by considering their computational cost and learning rate schedule. We discuss the trade-offs between the total computational cost, i.e., the total number of back-propagations performed, and convergence performance.
• We consider two different training regimes, with and without label noise. Under noisy labels, the analysis suggests SGD only outperforms SVRG under a mild total computational cost. However, SGD always exhibits faster convergence compared to SVRG when there is no label noise.
• Numerical experiments validate our theoretical predictions on both MNIST and CIFAR10 using various neural network architectures. In particular, we found that the comparison of the convergence speed of SGD to that of SVRG in underparameterized neural networks closely matches our noisy least squares model prediction, whereas the effect of overparameterization is captured by the regression model without label noise.
1.1 RELATED WORKS. Stochastic variance reduction methods consider minimizing a finite sum of a collection of functions using SGD. In case we use SGD to minimize these objective functions, the stochasticity comes from the randomness in sampling a function at each optimization step. Due to the induced noise, SGD can only converge using decaying step sizes, with a sub-linear convergence rate. Methods such as SAG (Roux et al., 2012), SVRG (Johnson & Zhang, 2013), and SAGA (Defazio et al., 2014) are able to recover the linear convergence rate of full-batch gradient descent with asymptotic cost comparable to SGD. SAG and SAGA achieve this improvement at the substantial cost of storing the most recent gradient of each individual function. In contrast, SVRG spends extra computation at snapshot intervals by evaluating the full-batch gradient. Theoretical results such as Gazagnadou et al. (2019) show that under certain smoothness conditions, we can use larger step sizes with stochastic variance reduction methods than is allowed for SGD and hence achieve even faster convergence. In situations where we know the smoothness constant of the functions, there are results on the optimal mini-batch size and the optimal step size given the inner loop size (Sebbouh et al., 2019). Applying variance reduction methods in deep learning has been studied recently (Defazio & Bottou, 2018). The authors conjectured that the ineffectiveness is caused by various elements commonly used in deep learning, such as data augmentation, batch normalization and dropout. Such elements can potentially decrease the smoothness and make the stored gradients become stale quickly. The proposed solution is to either remove these elements or update the gradients more frequently than is practical. Dynamics of SGD and quadratic models. Our main analysis tool is very closely related to recent work studying the dynamics of gradient-based stochastic methods. Wu et al.
(2018) derived the dynamics of stochastic gradient descent with momentum on a noisy quadratic model (Schaul et al., 2013), showing the problem of short-horizon bias. In Zhang et al. (2019), the authors showed that the same noisy quadratic model captures many of the essential characteristics of realistic neural network training. Their noisy quadratic model successfully predicts the effectiveness of momentum, preconditioning and learning rate choices in training ResNets and Transformers. However, these previous quadratic models assume a constant variance in the gradient that is independent of the current parameters and the loss function. This makes them inadequate for analyzing stochastic variance reduction methods, as SVRG can trivially achieve zero variance under constant gradient noise. Instead, we adopted a noisy least-squares regression formulation by considering both the mini-batch sampling noise and the label noise. There are also recent works that derived the risk of SGD for least-squares regression models using the bias-variance decomposition of the risk (Belkin et al., 2018; Hastie et al., 2019). We use a similar decomposition in our analysis. In contrast to the asymptotic analysis in these works, we compare SGD to SVRG along the optimization trajectory for any finite-time horizon under a limited computation cost, not just at the convergence points of those algorithms. Underparameterization vs. overparameterization. Many of the state-of-the-art deep learning models are overparameterized deep neural networks with more parameters than the number of training examples. Even though these models are able to overfit the data, when trained using SGD, they generalize well (Zhang et al., 2017). As suggested in recent work, the underparameterized and overparameterized regimes have different behaviours (Ma et al., 2018; Vaswani et al., 2019; Schmidt & Roux, 2013). Given infinite width and a proper weight initialization, the learning dynamics of a neural network can be well-approximated by a linear model via the neural tangent kernel (NTK) (Jacot et al., 2018; Chizat & Bach, 2018). In the NTK regime, neural networks are known to achieve global convergence by memorizing every training example. On the other hand, previous convergence results for SVRG have been obtained in stochastic convex optimization problems similar to those of an underparameterized model (Roux et al., 2012; Johnson & Zhang, 2013). Our proposed noisy least-squares regression analysis captures both the underparameterized and overparameterized behavior by considering the presence or absence of the label noise. 2 PRELIMINARY. 2.1 NOTATIONS. We will primarily focus on comparing the minibatch version of two methods, SGD and SVRG (Johnson & Zhang, 2013). Denote $L_i$ as the loss on the $i$-th data point. The SGD update is written as

$\theta^{(t+1)} = \theta^{(t)} - \alpha^{(t)} \hat{g}^{(t)}$, (1)

where $\hat{g}^{(t)} = \frac{1}{b} \sum_i^b \nabla_{\theta^{(t)}} L_i$ is the minibatch gradient, $t$ is the training iteration, and $\alpha^{(t)}$ is the learning rate. The SVRG algorithm is an inner-outer loop algorithm proposed to reduce the variance of the gradient caused by the minibatch sampling. In the outer loop, every $T$ steps, we evaluate a large-batch gradient $\bar{g} = \frac{1}{N} \sum_i^N \nabla_{\theta^{(mT)}} L_i$, where $N \gg b$ and $m$ is the outer loop index, and we store the parameters $\theta^{(mT)}$.
In the inner loop, the update rule of the parameters is given by
$$\theta^{(mT+t+1)} = \theta^{(mT+t)} - \alpha^{(t)} \left( \hat{g}^{(mT+t)} - \tilde{g}^{(mT+t)} + \bar{g} \right), \qquad (2)$$
where $\hat{g}^{(mT+t)} = \frac{1}{b} \sum_{i=1}^{b} \nabla_{\theta^{(mT+t)}} L_i$ is the current gradient of the mini-batch and $\tilde{g}^{(mT+t)} = \frac{1}{b} \sum_{i=1}^{b} \nabla_{\theta^{(mT)}} L_i$ is the old gradient, evaluated on the same mini-batch at the snapshot parameters. Note that in our analysis, the reference point is chosen to be the last iterate of the previous outer loop, $\theta^{(mT)}$, as recommended for practical implementations in the original SVRG paper (Johnson & Zhang, 2013). 2.2 THE NOISY LEAST SQUARES REGRESSION MODEL. We now define the noisy least squares regression model (Schaul et al., 2013; Wu et al., 2018). In this setting, the input data is $d$-dimensional, and the output label is generated by a linear teacher model with additive noise:
$$(x_i, \epsilon_i) \sim P_x \times P_\epsilon; \qquad y_i = x_i^\top \theta^* + \epsilon_i,$$
where $\mathbb{E}[x_i] = \mu \in \mathbb{R}^d$, $\mathrm{Cov}(x_i) = \Sigma$, $\mathbb{E}[\epsilon_i] = 0$, and $\mathrm{Var}(\epsilon_i) = \sigma_y^2$. We assume WLOG that $\theta^* = 0$. We also assume the data covariance matrix $\Sigma$ is diagonal. This assumption is adopted in many previous analyses, and it is also practical, as whitening is often applied to pre-process the training data. We would like to train a student model $\theta$ that minimizes the squared loss over the data distribution:
$$\min_\theta L(\theta) := \mathbb{E}\left[ \tfrac{1}{2} (x_i^\top \theta - y_i)^2 \right]. \qquad (3)$$
At each iteration, the optimizer can query an arbitrary number of data points $\{x_i, y_i\}_i$ sampled from the data distribution. The SGD method uses $b$ data points to form a minibatch gradient:
$$\hat{g}^{(t)} = \frac{1}{b} \sum_{i=1}^{b} \left( x_i x_i^\top \theta^{(t)} - x_i \epsilon_i \right) = X_b X_b^\top \theta^{(t)} - \frac{1}{\sqrt{b}} X_b \epsilon_b, \qquad (4)$$
where $X_b = \frac{1}{\sqrt{b}} [x_1, x_2, \cdots, x_b] \in \mathbb{R}^{d \times b}$ and the noise vector is $\epsilon_b = [\epsilon_1, \epsilon_2, \cdots, \epsilon_b]^\top \in \mathbb{R}^b$. SVRG, on the other hand, queries $N$ data points every $T$ steps to form a large-batch gradient $\bar{g} = X_N X_N^\top \theta^{(mT)} - \frac{1}{\sqrt{N}} X_N \epsilon_N$, where $X_N$ and $\epsilon_N$ are defined similarly. At each inner loop step, it further queries another $b$ data points to form the update in Eq. (2). Lastly, note that the expected loss can be written as a function of the second moment of the iterate:
$$L(\theta^{(t)}) = \tfrac{1}{2} \mathbb{E}\left[ (x_i^\top \theta^{(t)} - \epsilon_i)^2 \right] = \tfrac{1}{2} \left( \mathrm{tr}\big(\Sigma\, \mathbb{E}[\theta^{(t)} \theta^{(t)\top}]\big) + \sigma_y^2 \right).$$
Hence, for the following analysis we mainly focus on deriving the dynamics of the second moment $\mathbb{E}[\theta^{(t)} \theta^{(t)\top}]$, denoted as $A(\theta^{(t)})$. When $\Sigma$ is diagonal, the loss further reduces to $\tfrac{1}{2} \mathrm{diag}(\Sigma)^\top \mathrm{diag}(\mathbb{E}[\theta^{(t)} \theta^{(t)\top}]) + \tfrac{1}{2} \sigma_y^2$. We denote $\mathrm{diag}(\mathbb{E}[\theta^{(t)} \theta^{(t)\top}])$ by $m(\theta^{(t)})$.
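To make the two updates concrete, the following is a minimal simulation sketch of SGD and SVRG on this noisy least squares model. It assumes zero-mean Gaussian inputs, a constant learning rate, and fresh samples at every query; the constants and the helper names (`sample`, `grad`) are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, b, N, T, sigma_y, alpha = 10, 8, 512, 50, 0.1, 0.05
Sigma = np.diag(rng.uniform(0.5, 2.0, size=d))   # diagonal data covariance
theta_star = np.zeros(d)                          # WLOG the teacher is zero

def sample(n):
    """Draw n fresh (x_i, y_i) pairs from the teacher model."""
    X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)  # rows are x_i
    y = X @ theta_star + sigma_y * rng.standard_normal(n)    # label noise
    return X, y

def grad(theta, X, y):
    """Minibatch gradient (1/n) sum_i x_i (x_i^T theta - y_i)."""
    return X.T @ (X @ theta - y) / len(y)

theta_sgd = rng.standard_normal(d)
theta_svrg = theta_sgd.copy()
for m in range(100):                              # outer loops
    snapshot = theta_svrg.copy()                  # theta^{(mT)}
    g_bar = grad(snapshot, *sample(N))            # large-batch gradient at the snapshot
    for t in range(T):                            # inner loops
        theta_sgd -= alpha * grad(theta_sgd, *sample(b))
        Xb, yb = sample(b)                        # one minibatch, shared by both terms
        g_hat = grad(theta_svrg, Xb, yb)
        g_tilde = grad(snapshot, Xb, yb)
        theta_svrg -= alpha * (g_hat - g_tilde + g_bar)   # Eq. (2)

def expected_loss(theta):
    """L(theta) = 0.5 * (theta^T Sigma theta + sigma_y^2) for a fixed iterate."""
    return 0.5 * (theta @ Sigma @ theta + sigma_y ** 2)

print(expected_loss(theta_sgd), expected_loss(theta_svrg))
```

Varying `sigma_y` between zero and a positive value reproduces the two training regimes discussed above: with label noise the gradient estimates never become exact, while without it the predictor can fit every sample.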
This paper compares SGD and SVRG (as a representative variance-reduced method) to explore tradeoffs. Although the computational complexity vs overall convergence performance tradeoff is well-known at this point, an interesting new perspective is the comparison in regimes of interpolation (where SGD gradient variance will diminish on its own) and label noise (which propagates more seriously in SGD vs SVRG). The analysis is done on a simple linear model with regression, with some experiments on simulations, MNIST, and CIFAR.
SP:67c44f33dff59e4d218f753fdbc6296da62cdf62
A Non-asymptotic comparison of SVRG and SGD: tradeoffs between compute and speed
This paper examines the tradeoffs between applying SVRG and SGD for training neural networks by providing an analysis of noisy least squares regression problems as well as experiments on simple MLPs and CNNs on MNIST and CIFAR-10. The theory analyzes a linear model where both the input $x$ and label noise $\epsilon$ follow Gaussian distributions. Under these assumptions, the paper shows that SVRG is able to converge to a smaller neighborhood at a slower rate than SGD, which converges faster to a larger neighborhood. This analysis coincides with the experimental behavior applied to neural networks, where one observes when training underparameterized models that SGD significantly outperforms SVRG initially, but SVRG is able to attain a lower loss value asymptotically. In the overparameterized regime, SGD is demonstrated to always outperform SVRG experimentally, which is argued to coincide with the case where there is no label noise in the theory.
SP:67c44f33dff59e4d218f753fdbc6296da62cdf62
SMiRL: Surprise Minimizing RL in Entropic Environments
1 INTRODUCTION. The general struggle for existence of animate beings is not a struggle for raw materials, nor for energy, but a struggle for negative entropy. (Ludwig Boltzmann, 1886) All living organisms carve out environmental niches within which they can maintain relative predictability amidst the ever-increasing entropy around them (Boltzmann, 1886; Schrödinger, 1944; Schneider & Kay, 1994; Friston, 2009). Humans, for example, go to great lengths to shield themselves from surprise: we band together in millions to build cities with homes, supplying water, food, gas, and electricity to control the deterioration of our bodies and living spaces amidst heat and cold, wind and storm. The need to discover and maintain such surprise-free equilibria has driven great resourcefulness and skill in organisms across very diverse natural habitats. Motivated by this, we ask: could the motive of preserving order amidst chaos guide the automatic acquisition of useful behaviors in artificial agents? Our method therefore addresses the unsupervised reinforcement learning problem: how might an agent in an environment acquire complex behaviors and skills with no external supervision? This central problem in artificial intelligence has evoked several candidate solutions, largely focusing on novelty-seeking behaviors (Schmidhuber, 1991; Lehman & Stanley, 2011; Still & Precup, 2012; Bellemare et al., 2016; Houthooft et al., 2016; Pathak et al., 2017). In simulated worlds, such as video games, novelty-seeking intrinsic motivation can lead to interesting and meaningful behavior. However, we argue that these sterile environments are fundamentally lacking compared to the real world. In the real world, natural forces and other agents offer bountiful novelty. The second law of thermodynamics stipulates ever-increasing entropy, and therefore perpetual novelty, without even requiring any agent intervention. Instead, the challenge in natural environments is homeostasis: discovering behaviors that enable agents to maintain an equilibrium, for example to preserve their bodies and their homes, and to avoid predators and hunger. Even novelty-seeking behaviors may emerge naturally as a means to maintain homeostasis: an agent that is curious and forages for food in unlikely places might better satisfy its hunger. We formalize allostasis as an objective for reinforcement learning based on surprise minimization (SMiRL). In highly entropic and dynamic environments with undesirable forms of novelty, minimizing surprise (i.e., minimizing novelty) causes agents to naturally seek a stable equilibrium. Natural environments with winds, earthquakes, adversaries, and other disruptions already offer a steady stream of novel stimuli, and an agent that minimizes surprise in these environments will act and explore in order to find the means to maintain a stable equilibrium in the face of these disturbances. SMiRL is simple to describe and implement: it works by maintaining a density $p(s)$ over visited states and training a policy to act such that future states have high likelihood under $p(s)$. This interaction scheme is shown in Figure 1 (right). Across many different environments, with varied disruptive forces, and in agents with diverse embodiments and action spaces, we show that this simple approach induces useful equilibrium-seeking behaviors.
We show that SMiRL agents can solve Tetris, avoid fireballs in Doom, and enable a simulated humanoid to balance and locomote, without any explicit task reward. More pragmatically, we show that SMiRL can be used together with a task reward to accelerate standard reinforcement learning in dynamic environments, and can provide a simple mechanism for imitation learning. SMiRL holds promise for a new kind of unsupervised RL method that produces behaviors that are closely tied to the prevailing disruptive forces, adversaries, and other sources of entropy in the environment. Videos of our results are available at https://sites.google.com/view/surpriseminimization 2 SURPRISE MINIMIZING AGENTS. We propose surprise minimization as a means to operationalize the idea of learning useful behaviors by seeking to preserve order amidst chaos. In complex natural environments with disruptive forces that tend to naturally increase entropy, which we refer to as entropic environments, minimizing surprise over an agent's lifetime requires taking action to reach stable states, and often requires acting continually to maintain homeostasis and avoid surprise. The long-term effects of actions on the agent's surprise can be complex and somewhat counterintuitive, especially when we consider that actions not only change the state that the agent is in, but also its beliefs about which states are more likely. The combination of these two processes induces the agent not only to seek states where $p(s)$ is large, but also to visit states so as to alter $p(s)$, in order to receive larger rewards in the future. This "meta"-level reasoning can result in behaviors where the agent might actually visit new states in order to make them more familiar. An example of this is shown in Figure 1, where, in order to avoid the disruptions from the changing weather, an agent needs to build a shelter or home to protect itself and decrease its observable surprise. The SMiRL formulation relies on disruptive forces in the environment to avoid collapse to degenerate solutions, such as staying in a single state $s_0$. Fortunately, natural environments typically offer no shortage of such disruption. 2.1 SURPRISE MINIMIZATION PROBLEM STATEMENT. To instantiate SMiRL, we design a reinforcement learning agent with a reward proportional to how familiar its current state is, based on the history of states it has experienced during its "life," which corresponds to a single episode. Formally, we assume a fully observed controlled Markov process (CMP), though extensions to partially observed settings can also be developed. We use $s_t$ to denote the state at time $t$, $a_t$ to denote the agent's action, $\rho(s_0)$ to denote the initial state distribution, and $T(s_{t+1}|s_t, a_t)$ to denote the transition dynamics. The agent has access to a dataset $D_t = \{s_1, \dots, s_t\}$ of all states experienced so far. By fitting a generative model $p_{\theta_t}(s)$ with parameters $\theta_t$ to this dataset, the agent obtains an estimator that can be used to evaluate the negative surprise reward, given by
$$r_t(s) = \log p_{\theta_t}(s). \qquad (1)$$
We denote the fitting process as $\theta_t = U(D_t)$. The goal of a SMiRL agent is to maximize the sum $\sum_t \log p_{\theta_t}(s_{t+1})$. Since the agent's actions affect the future $D_t$ and thus the future $\theta_t$'s, the optimal policy does not simply visit states that have a high $p_{\theta_t}(s)$ now, but rather those states that will change $p_{\theta_t}(s)$ such that it provides high likelihood to the states that it sees in the future.
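As a concrete illustration of the fitting process $\theta_t = U(D_t)$ and the reward of Eq. (1), below is a minimal sketch of one of the simple density-model choices discussed later (a product of independent normals over state dimensions); the class name and the variance floor are our own illustrative choices, not from the paper.

```python
import numpy as np

class IndependentGaussianDensity:
    """p_theta(s) as a product of independent normals over state dimensions.
    Refitting to the episode's state history implements theta_t = U(D_t)."""
    def fit(self, states):
        S = np.stack(states)                 # (t+1, state_dim)
        self.mu = S.mean(axis=0)
        self.var = S.var(axis=0) + 1e-4      # floor: a single state has zero variance
        return self

    def log_prob(self, s):
        """Familiarity reward r_t(s) = log p_theta_t(s) of Eq. (1)."""
        return -0.5 * np.sum(np.log(2.0 * np.pi * self.var)
                             + (s - self.mu) ** 2 / self.var)
```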
2.2 TRAINING SMIRL AGENTS.
Algorithm 1: Training a SMiRL agent with RL
1: Initialize policy parameters φ
2: Initialize RL algorithm RL
3: for each episode = 1, 2, . . . do
4:   s_0 ∼ ρ(s_0)  ▷ initial state distribution
5:   D_0 ← {s_0}  ▷ reset state history
6:   for each t = 0, 1, . . . , T do
7:     θ_t ← U(D_t)  ▷ fit density model
8:     a_t ∼ π_φ(a_t | s_t, θ_t, t)  ▷ run policy
9:     s_{t+1} ∼ T(s_{t+1} | s_t, a_t)  ▷ transition dynamics
10:    r_t ← log p_{θ_t}(s_{t+1})  ▷ familiarity reward
11:    D_{t+1} ← D_t ∪ {s_{t+1}}  ▷ update state history
12:  end for each
13:  φ ← RL(φ, s_[0:T], θ_[0:T], |D|_[0:T], a_[0:T], r_[0:T])
14: end for each
We now present a practical reinforcement learning algorithm for surprise minimization. Recall that a critical component of SMiRL is reasoning about the effect of actions on future states that will be added to $D$, and their effect on future density estimates, e.g., understanding that visiting a state that is currently unfamiliar and staying there will make that state familiar, and therefore lead to higher rewards in the long run. This means that the agent must reason not only about the unknown MDP dynamics, but also about the dynamics of the density model $p_\theta(s)$ trained on $D$. In our algorithm, we accomplish this via an episodic training procedure, where the agent is trained over many episodes and $D$ is reset at the beginning of each episode to simulate a new lifetime. Through this procedure, SMiRL learns the parameters $\phi$ of the agent's policy $\pi_\phi$ for a fixed horizon. To learn this, the policy must be conditioned on some sufficient statistic of $D_t$, since the reward $r_t$ is a function of $D_t$. Having trained parameterized generative models $p_{\theta_t}$ as above on all states seen so far, we condition $\pi$ on $\theta_t$ and $|D_t|$. This implies the assumption that $\theta_t$ and $|D_t|$ represent the sufficient statistics necessary to summarize the contents of the dataset for the policy, and contain all information required to reason about how $p_\theta$ will evolve in the future. Of course, we could also use any other summary statistic, or even read in the entirety of $D_t$ using a recurrent model. In the next section, we also describe a modification that allows us to utilize a deep density model without conditioning $\pi$ on a high-dimensional parameter vector. Algorithm 1 provides the pseudocode; a Python rendering of its episode loop follows below. SMiRL can be used with any reinforcement learning algorithm, which we denote RL in the pseudocode. As is standard in reinforcement learning, we alternate between sampling episodes from the policy (lines 6-12) and updating the policy parameters (line 13). The details of the updates are left to the specific RL algorithm, which may be on- or off-policy. During each episode, $D_0$ is initialized with the first state and, as shown in line 11, grows as each state visited by the agent is added to the dataset. The parameters $\theta_t$ of the density model are fit to $D_t$ at each timestep, both to be passed to the policy and to define the reward function. At the end of the episode, $D_T$ is discarded and a new $D_0$ is initialized. 2.3 STATE DENSITY ESTIMATION WITH LEARNED REPRESENTATIONS. While SMiRL may in principle be used with any choice of model class for the generative model $p_\theta(s)$, this choice must be made carefully in practice. As we show in our experiments, relatively simple distribution classes, such as products of independent marginals, suffice to run SMiRL in simple environments with low-dimensional state spaces.
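The episode loop of Algorithm 1 (lines 4-12) can be rendered in a few lines of Python. This is a sketch under our own simplifying assumptions: `env` is a stand-in for the CMP with `reset`/`step` methods that return states only, `policy` is any function of $(s_t, \theta_t, |D_t|)$, and the density class from the sketch above supplies $U(D_t)$; the collected trajectory would then be handed to whatever RL algorithm updates $\phi$ (line 13).

```python
def run_smirl_episode(env, policy, T):
    """One SMiRL 'lifetime': lines 4-12 of Algorithm 1."""
    s = env.reset()                        # s_0 ~ rho(s_0)
    D = [s]                                # D_0 <- {s_0}
    density = IndependentGaussianDensity()
    trajectory = []
    for t in range(T):
        theta_t = density.fit(D)           # theta_t <- U(D_t), line 7
        a = policy(s, theta_t, len(D))     # a_t ~ pi_phi(. | s_t, theta_t, t), line 8
        s_next = env.step(a)               # s_{t+1} ~ T(. | s_t, a_t), line 9
        r = theta_t.log_prob(s_next)       # familiarity reward, line 10
        trajectory.append((s, a, r))
        D.append(s_next)                   # D_{t+1} <- D_t U {s_{t+1}}, line 11
        s = s_next
    return trajectory                      # D_T is discarded at episode end
```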
However, it may be desirable in more complex environments to use more sophisticated density estimators, especially when learning directly from high-dimensional observations such as images. In particular, we propose to use variational autoencoders (VAEs) (Kingma & Welling, 2014) to learn a non-linear compressed state representation and facilitate estimation of $p_\theta(s)$ for SMiRL. A VAE is trained using the standard loss to reconstruct states $s$ after encoding them into a low-dimensional normal distribution $q_\omega(z|s)$ through the encoder $q$ with parameters $\omega$. A decoder $p_\psi(s|z)$ with parameters $\psi$ computes $s$ from the encoder output $z$. During this training process, a KL divergence loss between the prior $p(z)$ and $q_\omega(z|s)$ is used to keep this distribution near the standard normal distribution. We now describe a VAE-based approach for estimating the SMiRL surprise reward. In our implementation, the VAE is trained online, with VAE updates interleaved with RL updates. Training a VAE requires more data than the simpler density models that can easily be fit to data from individual episodes. We propose to overcome this by not resetting the VAE parameters between training episodes: instead, we train the VAE across episodes. Rather than passing all VAE parameters to the SMiRL policy, we track a separate episode-specific distribution $p_{\theta_t}(z)$, distinct from the VAE prior, over the course of each episode. $p_{\theta_t}(z)$ replaces $p_{\theta_t}(s)$ in the SMiRL algorithm and is fit only to that episode's state history. We represent $p_{\theta_t}(z)$ as a vector of independent normal distributions, and fit it to the VAE encoder outputs. This replaces the density estimate in line 10 of Algorithm 1. Specifically, the corresponding update $U(D_t)$ is performed as follows:
$$z_0, \dots, z_t = \mathbb{E}[q_\omega(z|s)] \text{ for } s \in D_t, \qquad \mu = \frac{\sum_{j=0}^{t} z_j}{t+1}, \qquad \sigma = \frac{\sum_{j=0}^{t} (\mu - z_j)^2}{t+1}, \qquad \theta_t = [\mu, \sigma].$$
Training the VAE online, over all previously seen data, deviates from the recipe in the previous section, where the density model was only updated within an episode. However, this does provide a much richer state density model, and the within-episode updates that estimate $p_{\theta_t}(z)$ still provide our method with meaningful surprise-minimizing behavior. As we show in our experiments, this can improve the performance of SMiRL in practice.
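For the VAE variant, the episode-specific update $U(D_t)$ above reduces to tracking the mean and (biased) variance of the encoder means seen so far. A minimal sketch, assuming the encoder means $z_j = \mathbb{E}[q_\omega(z|s_j)]$ have already been computed; the function names are our own:

```python
import numpy as np

def fit_latent_density(z_history):
    """U(D_t) for the VAE variant: fit a vector of independent normals to
    the encoder means z_0, ..., z_t of the states seen this episode."""
    Z = np.stack(z_history)                 # (t+1, latent_dim)
    mu = Z.mean(axis=0)
    sigma = ((Z - mu) ** 2).mean(axis=0)    # biased variance, as in the update
    return mu, sigma

def latent_log_prob(z, mu, sigma, eps=1e-6):
    """Reward log p_theta_t(z), replacing line 10 of Algorithm 1."""
    var = sigma + eps                       # guard against early zero variance
    return -0.5 * np.sum(np.log(2.0 * np.pi * var) + (z - mu) ** 2 / var)
```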
This paper proposes a novel form of surprise-minimizing intrinsic reward signal that leads to interesting behavior in the absence of an external reward signal. The proposed approach encourages an agent to visit states with high probability / density under a parametric marginal state distribution that is learned as the agent interacts with its environment. The method (dubbed SMiRL) is evaluated in visual and proprioceptive high-dimensional "entropic" benchmarks (that progress without the agent doing anything in order to prevent trivial solutions such as standing and never moving), and compared against two surprise-maximizing intrinsic motivation methods (ICM and RND) as well as to a reward-maximizing oracle. The experiments demonstrate that SMiRL can lead to more sensible behavior compared to ICM and RND in the chosen environments, and eventually recover the performance of a purely reward-maximizing agent. Also, SMiRL can be used for imitation learning by pre-training the parametric state distribution with data from a teacher. Finally, SMiRL shows the potential of speeding up reinforcement learning by using intrinsic motivation as an additional reward signal added to the external task-defining reward.
SP:6022b52e1e160bd034df1a7c71c6ca163bcf4dc0
SMiRL: Surprise Minimizing RL in Entropic Environments
This paper proposes Surprise Minimizing RL (SMiRL), a conceptual framework for training a reinforcement learning agent to seek out states with high likelihood under a density model trained on visited states. They qualitatively and quantitatively explore various aspects of the behaviour of these agents and argue that they exhibit a variety of favourable properties. They also compare their surprise minimizing algorithm with a variety of novelty-seeking algorithms (which can be considered somewhat the opposite) and show that in certain cases surprise minimization can result in more desirable behaviour. Finally, they show that using surprise minimization as an auxiliary reward can speed learning in certain settings.
SP:6022b52e1e160bd034df1a7c71c6ca163bcf4dc0
Projected Canonical Decomposition for Knowledge Base Completion
1 INTRODUCTION. The problems of representation learning and link prediction in multi-relational data can be formulated as a binary tensor completion problem, where the tensor is obtained by stacking the adjacency matrices of every relation between entities. This tensor can then be interpreted as a "knowledge base", and contains triples (subject, predicate, object) representing facts about the world. Link prediction in knowledge bases aims at automatically discovering missing facts (Bordes et al., 2011; Nickel et al., 2011; Bordes et al., 2013; Nickel et al., 2016a; Nguyen, 2017). State-of-the-art methods use the canonical polyadic (CP) decomposition of tensors (Hitchcock, 1927) or variants of it (Trouillon et al., 2016; Kazemi & Poole, 2018; Lacroix et al., 2018). While initially motivated by low-rank assumptions on the underlying ground-truth tensor, the best performances are obtained by setting the rank as high as permitted by computational constraints, using tensor norms for regularization (Lacroix et al., 2018). However, for large-scale data where computational or memory constraints require ranks to be low (Lerer et al., 2019), performances drop drastically. The Tucker decomposition is another multilinear model which allows richer interactions between entity and predicate vectors. A special case of the Tucker decomposition is RESCAL (Nickel et al., 2011), in which the relations are represented by matrices and the entity factors are shared between subjects and objects. However, an evaluation of this model in Nickel et al. (2016b) shows that RESCAL lags behind other methods on several benchmarks of interest. Recent works have obtained more competitive results with similar models (Balažević et al., 2019b; Wang et al., 2019), using different regularizers or deep learning heuristics such as dropout and label smoothing. Despite these recent efforts, learning Tucker decompositions remains mostly unresolved. Wang et al. (2019) do not achieve state-of-the-art results on standard benchmarks, and we show (see Figure 3) that the performances reported by Balažević et al. (2019b) are actually matched by ComplEx (Trouillon et al., 2016; Lacroix et al., 2018) optimized with Adam, which has fewer hyperparameters. In this work, we overcome some of the difficulties associated with learning a Tucker model for knowledge base completion. Balažević et al. (2019b) use deep-learning mechanisms such as batch normalization (Ioffe & Szegedy, 2015), dropout (Srivastava et al., 2014) or learning-rate annealing to address both regularization and optimization issues. Our approach is different: we factorize the core tensor of the Tucker decomposition with CP to obtain a formulation which is closer to CP, and to better understand which difficulties appear. This yields a simple approach, which has a single regularization hyperparameter to tune for a fixed model specification. The main novelty of our approach is a more careful application of adaptive gradient techniques. State-of-the-art methods for tensor completion use optimization algorithms with adaptive diagonal rescaling, such as Adagrad (Duchi et al., 2011) or Adam (Kingma & Ba, 2014). Through control experiments in which our model is equivalent to CP up to a fixed rotation of the embeddings, we show that one of the difficulties in training Tucker-style decompositions can be attributed to the lack of invariance of the diagonal rescaling to rotations.
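The lack of rotation invariance is easy to exhibit numerically. In the sketch below, plain gradient descent would produce exactly the same iterates for a quadratic loss and its rotated reparameterization, while Adagrad's per-coordinate rescaling does not; the specific quadratic and constants are our own illustrative choices, not the paper's control experiment.

```python
import numpy as np

def adagrad(grad_fn, theta0, lr=0.5, steps=200, eps=1e-8):
    theta, acc = theta0.copy(), np.zeros_like(theta0)
    for _ in range(steps):
        g = grad_fn(theta)
        acc += g * g                              # per-coordinate accumulator
        theta -= lr * g / (np.sqrt(acc) + eps)    # diagonal rescaling
    return theta

H = np.diag([100.0, 1.0])                          # loss 0.5 * theta^T H theta
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((2, 2)))   # a fixed rotation

theta0 = np.array([1.0, 1.0])
plain = adagrad(lambda t: H @ t, theta0)
# Rotated parameterization psi = Q^T theta, with loss 0.5 * psi^T (Q^T H Q) psi.
rotated = Q @ adagrad(lambda p: (Q.T @ H @ Q) @ p, Q.T @ theta0)
print(plain, rotated)   # the two differ: the diagonal rescaling
                        # is not invariant to a rotation of the coordinates
```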
Focusing on Adagrad, we propose a different update rule that is equivalent to implicitly applying Adagrad to a CP model with a projection of the embeddings to a lower-dimensional subspace. Combining the Tucker formulation and the implicit Adagrad update, we obtain performances that match state-of-the-art methods on the standard benchmarks and achieve significantly better results for small embedding sizes on several datasets. Compared to the best current algorithm for Tucker decomposition, that of Balažević et al. (2019b), our approach has fewer hyperparameters, and we effectively report better performances than the implementation of ComplEx of Lacroix et al. (2018) in the regime of small embedding dimensions. We discuss the related work in the next section. In Section 3, we present a variant of the Tucker decomposition which allows us to interpolate between Tucker and CP. The extreme case of this variant, which is equivalent to CP up to a fixed rotation of the embeddings, serves as a control model to highlight the deficiency of the diagonal rescaling of Adagrad for Tucker-style decompositions in experiments reported in Section 4. We present the modified version of Adagrad in Section 5 and present experimental results on standard benchmarks of knowledge base completion in Section 7. 2 LINK PREDICTION IN KNOWLEDGE BASES. Notation. Tensors and matrices are denoted by uppercase letters. For a matrix $U$, $u_i$ is the vector corresponding to the $i$-th row of $U$. The tensor product is written $\otimes$ and the Hadamard product (i.e., the elementwise product) is written $\odot$. 2.1 LEARNING SETUP. A knowledge base consists of a set $S$ of triples (subject, predicate, object) that represent (true) known facts. The goal of link prediction is to recover facts that are true but not in the database. The data is represented as a tensor $\tilde{X} \in \{0,1\}^{N \times L \times N}$, for $N$ the number of entities and $L$ the number of predicates. Given a training set of triples, the goal is to provide a ranking of entities for queries of the type (subject, predicate, ?) and (?, predicate, object). Following Lacroix et al. (2018), we use the cross-entropy as a surrogate of the ranking loss. As proposed by Lacroix et al. (2018) and Kazemi & Poole (2018), we include reciprocal predicates: for each predicate $P$ in the original dataset, and given an item $o$, each query of the form $(?, P, o)$ is reformulated as a query $(o, P^{-1}, ?)$, where $o$ is now the subject of $P^{-1}$. This doubles the effective number of predicates but reduces the problem to queries of the type (subject, predicate, ?) only. For a given triple $(i, j, k) \in S$, the training loss function for a tensor $X$ is then
$$\ell_{i,j,k}(X) = -X_{i,j,k} + \log\Big( \sum_{k' \neq k} \exp(X_{i,j,k'}) \Big). \qquad (1)$$
For a tensor decomposition model $X(\theta)$ parameterized by $\theta$, the parameters $\hat{\theta}$ are found by minimizing the regularized empirical risk with regularizer $\Lambda$:
$$\hat{\theta} = \operatorname{argmin}_{\theta} L(\theta) = \operatorname{argmin}_{\theta} \frac{1}{|S|} \sum_{(i,j,k) \in S} \ell_{i,j,k}(X(\theta)) + \nu \Lambda(\theta). \qquad (2)$$
This work studies specific models for $X(\theta)$, inspired by the CP and Tucker decompositions. We discuss the related work on tensor decompositions and link prediction in knowledge bases below. 2.2 RELATED WORK. 2.2.1 CANONICAL DECOMPOSITION AND ITS VARIANTS. The canonical polyadic (CP) decomposition of a tensor $X$ is defined entrywise by
$$\forall i, j, k, \quad X_{i,j,k} = \langle u_i, v_j, w_k \rangle := \sum_{r=1}^{d} u_{ir} v_{jr} w_{kr}.$$
The smallest value of $d$ for which this decomposition exists is the rank of $X$.
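A minimal PyTorch sketch of this setup, with the CP score and the loss over queries (i, j, ?): we use the standard full-softmax cross-entropy, which, unlike Eq. (1), keeps $k' = k$ in the log-sum (a common implementation choice), and the dimensions are illustrative.

```python
import torch
import torch.nn.functional as F

N, L, d = 1000, 20, 50                               # entities, predicates, rank
U = torch.nn.Parameter(0.1 * torch.randn(N, d))      # subject embeddings
V = torch.nn.Parameter(0.1 * torch.randn(2 * L, d))  # predicates and reciprocals
W = torch.nn.Parameter(0.1 * torch.randn(N, d))      # object embeddings

def cp_scores(i, j):
    """Row of scores X_{i,j,.} = <u_i, v_j, w_k> for every candidate object k."""
    return (U[i] * V[j]) @ W.t()                     # (batch, N)

def loss(i, j, k):
    """Cross-entropy surrogate of the ranking loss, cf. Eq. (1)."""
    return F.cross_entropy(cp_scores(i, j), k)

# A training triple (s, P, o) contributes the query (s, P, ?) and, through
# the reciprocal predicate P^{-1} (stored at index P + L), the query (o, P^{-1}, ?).
s, p, o = torch.tensor([3]), torch.tensor([5]), torch.tensor([7])
total = loss(s, p, o) + loss(o, p + L, s)
total.backward()
```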
Each element $X_{i,j,k}$ of a CP decomposition is thus represented as a multilinear product of the three embeddings in $\mathbb{R}^d$ associated respectively to the $i$-th subject, the $j$-th predicate and the $k$-th object. CP currently achieves near state-of-the-art performances on standard benchmarks of knowledge base completion (Kazemi & Poole, 2018; Lacroix et al., 2018). Nonetheless, the best reported results are with the ComplEx model (Trouillon et al., 2016), which learns complex-valued embeddings and sets the embeddings of the objects to be the complex conjugates of the embeddings of the subjects, i.e., $w_k = \bar{u}_k$. Prior to ComplEx, DistMult was proposed (Yang et al., 2014) as a variant of CP with $w_k = u_k$. While this model obtained good performances (Kadlec et al., 2017), it can only model symmetric relations and does not perform as well as ComplEx. CP-based models are optimized with vanilla Adam or Adagrad and a single regularization parameter (Trouillon et al., 2016; Kadlec et al., 2017; Lacroix et al., 2018) and do not require additional heuristics for training. 2.2.2 TUCKER DECOMPOSITION AND ITS VARIANTS. Given a tensor $X$ of size $N \times L \times N$, the Tucker decomposition of $X$ is defined entrywise by
$$\forall i, j, k, \quad X_{i,j,k} = \langle u_i \otimes v_j \otimes w_k, C \rangle := \sum_{r_1=1}^{d_1} \sum_{r_2=1}^{d_2} \sum_{r_3=1}^{d_3} C_{r_1, r_2, r_3} u_{i r_1} v_{j r_2} w_{k r_3}.$$
The triple $(d_1, d_2, d_3)$ contains the rank parameters of the decomposition. We also use a multilinear product notation $X = [\![C; U, V, W]\!]$, where $U, V, W$ are the matrices whose rows are respectively the $u_i$, $v_j$, $w_k$, and $C$ is the three-dimensional $d_1 \times d_2 \times d_3$ core tensor. Note that the CP decomposition is a Tucker decomposition in which $d_1 = d_2 = d_3 = d$ and $C$ is the identity, which we write $[\![U, V, W]\!]$. With a non-trivial core tensor, the Tucker decomposition is thus more flexible than CP for a fixed embedding size. In knowledge base applications, we typically have $d \leq L \ll N$, so the vast majority of the model parameters are in the embedding matrices of the entities, $U$ and $W$. When constraints on the number of model parameters arise (e.g., memory constraints), Tucker models appear as natural candidates to increase the expressivity of the decomposition compared to CP with limited impact on the total number of parameters. While many variants of the Tucker decomposition have been proposed in the literature on tensor factorization (see e.g., Kolda & Bader, 2009), the first approach based on Tucker for link prediction in knowledge bases is RESCAL (Nickel et al., 2011). RESCAL uses a special form of Tucker decomposition in which the object and subject embeddings are shared, i.e., $U = W$, and it does not compress the relation matrices. In the multilinear product notation above, a RESCAL model is thus written as $X = [\![C; U, I, U]\!]$. Despite some success on a few smaller datasets, RESCAL's performances drop on larger datasets (Nickel et al., 2016b). This decrease in performances has been attributed either to improper regularization (Nickel et al., 2011) or to optimization issues (Xue et al., 2018). Balažević et al. (2019b) revisit the Tucker decomposition in the context of large-scale knowledge bases and resolve some of the optimization and regularization issues using learning rate annealing, batch normalization and dropout.
It comes at the price of more hyperparameters to tune for each dataset (label smoothing, three different dropouts and a learning rate decay), and, as we discuss in our experiments, the results they report are not better than those of ComplEx for the same number of parameters. Two methods were previously proposed to interpolate between the expressivity of RESCAL and that of CP. Xue et al. (2018) expand the HolE model (Nickel et al., 2016b) (and thus the ComplEx model (Hayashi & Shimbo, 2017)), based on the cross-correlation of embeddings, to close the gap in expressivity with the Tucker decomposition for a fixed embedding size. Jenatton et al. (2012) express the relation matrices in RESCAL as low-rank combinations of a family of matrices. We describe the link between these approaches and ours in Appendix 9.4. None of these approaches, however, studied the effect of their formulation on optimization, and they reported results inferior to ours.
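For comparison with the CP sketch above, below is a sketch of the Tucker score implemented with einsum contractions, together with the identity core that recovers CP; shapes follow the definition above and are otherwise illustrative.

```python
import torch

d1 = d2 = d3 = 50
C = torch.nn.Parameter(0.1 * torch.randn(d1, d2, d3))   # core tensor

def tucker_scores(u, v, W, core):
    """X_{i,j,k} = <u_i (x) v_j (x) w_k, C> for a batch of (subject, predicate)
    pairs against all N objects; u: (B, d1), v: (B, d2), W: (N, d3)."""
    Cv = torch.einsum('abc,nb->nac', core, v)   # contract the predicate mode
    Cuv = torch.einsum('nac,na->nc', Cv, u)     # contract the subject mode
    return Cuv @ W.t()                          # (B, N)

# CP is the special case where the core is the identity tensor:
I = torch.zeros(d1, d2, d3)
idx = torch.arange(d1)
I[idx, idx, idx] = 1.0   # then tucker_scores(u, v, W, I)[n, k] = sum_r u_r v_r w_kr
```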
In this paper, a tensor decomposition method is studied for link prediction problems. The model is based on Tucker decomposition but the core tensor is decomposed as CP decomposition so that it can be seen as an interpolation between Tucker and CP. The performance is evaluated with several NLP data sets (e.g., subject-verb-object triplets).
SP:8bdeb36997d6699e48511d9abac87df8c14bd087
Projected Canonical Decomposition for Knowledge Base Completion
1 INTRODUCTION . The problems of representation learning and link prediction in multi-relational data can be formulated as a binary tensor completion problem , where the tensor is obtained by stacking the adjacency matrices of every relations between entities . This tensor can then be intrepreted as a `` knowledge base '' , and contains triples ( subject , predicate , object ) representing facts about the world . Link prediction in knowledge bases aims at automatically discovering missing facts ( Bordes et al. , 2011 ; Nickel et al. , 2011 ; Bordes et al. , 2013 ; Nickel et al. , 2016a ; Nguyen , 2017 ) . State of the art methods use the canonical polyadic ( CP ) decomposition of tensors ( Hitchcock , 1927 ) or variants of it ( Trouillon et al. , 2016 ; Kazemi & Poole , 2018 ; Lacroix et al. , 2018 ) . While initially motivated by low-rank assumptions on the underlying ground-truth tensor , the best performances are obtained by setting the rank as high as permitted by computational constraints , using tensor norms for regularization ( Lacroix et al. , 2018 ) . However , for large scale data where computational or memory constraints require ranks to be low ( Lerer et al. , 2019 ) , performances drop drastically . Tucker decomposition is another multilinear model which allows richer interactions between entities and predicate vectors . A special case of Tucker decomposition is RESCAL ( Nickel et al. , 2011 ) , in which the relations are represented by matrices and entities factors are shared for subjects and objects . However , an evaluation of this model in Nickel et al . ( 2016b ) shows that RESCAL lags behind other methods on several benchmarks of interest . Recent work have obtained more competitive results with similar models ( Balažević et al. , 2019b ; Wang et al. , 2019 ) , using different regularizers or deep learning heuristics such as dropout and label smoothing . Despite these recent efforts , learning Tucker decompositions remains mostly unresolved . Wang et al . ( 2019 ) does not achieve state of the art results on standard benchmarks , and we show ( see Figure 3 ) that the performances reported by Balažević et al . ( 2019b ) are actually matched by ComplEx ( Trouillon et al. , 2016 ; Lacroix et al. , 2018 ) optimized with Adam , which has less hyperparameters . In this work , we overcome some of the difficulties associated with learning a Tucker model for knowledge base completion . Balažević et al . ( 2019b ) use deep-learning mechanisms such as batch normalization ( Ioffe & Szegedy , 2015 ) , dropout ( Srivastava et al. , 2014 ) or learning-rate annealing to address both regularization and optimization issues . Our approach is different : We factorize the core tensor of the Tucker decomposition with CP to obtain a formulation which is closer to CP and better understand what difficulties appear . This yields a simple approach , which has a single regularization hyperparameter to tune for a fixed model specification . The main novelty of our approach is a more careful application of adaptive gradient techniques . State-of-the-art methods for tensor completion use optimization algorithms with adaptive diagonal rescaling such as Adagrad ( Duchi et al. , 2011 ) or Adam ( Kingma & Ba , 2014 ) . Through control experiments in which our model is equivalent to CP up to a fixed rotation of the embeddings , we show that one of the difficulties in training Tucker-style decompositions can be attributed to the lack of invariance to rotation of the diagonal rescaling . 
Focusing on Adagrad, we propose a different update rule that is equivalent to implicitly applying Adagrad to a CP model with a projection of the embeddings onto a lower-dimensional subspace. Combining the Tucker formulation and the implicit Adagrad update, we obtain performances that match state-of-the-art methods on the standard benchmarks and achieve significantly better results for small embedding sizes on several datasets. Compared to the best current algorithm for Tucker decomposition of Balažević et al. (2019b), our approach has fewer hyperparameters, and we report better performances than the implementation of ComplEx of Lacroix et al. (2018) in the regime of small embedding dimensions. We discuss the related work in the next section. In Section 3, we present a variant of the Tucker decomposition which interpolates between Tucker and CP. The extreme case of this variant, which is equivalent to CP up to a fixed rotation of the embeddings, serves as a control model to highlight the deficiency of the diagonal rescaling of Adagrad for Tucker-style decompositions in experiments reported in Section 4. We present the modified version of Adagrad in Section 5 and report experimental results on standard benchmarks of knowledge base completion in Section 7. 2 LINK PREDICTION IN KNOWLEDGE BASES. Notation. Tensors and matrices are denoted by uppercase letters. For a matrix U, u_i is the vector corresponding to the i-th row of U. The tensor product is written ⊗ and the Hadamard product (i.e., elementwise product) is written ⊙. 2.1 LEARNING SETUP. A knowledge base consists of a set S of triples (subject, predicate, object) that represent (true) known facts. The goal of link prediction is to recover facts that are true but not in the database. The data is represented as a tensor X̃ ∈ {0, 1}^{N×L×N}, where N is the number of entities and L the number of predicates. Given a training set of triples, the goal is to provide a ranking of entities for queries of the type (subject, predicate, ?) and (?, predicate, object). Following Lacroix et al. (2018), we use the cross-entropy as a surrogate of the ranking loss. As proposed by Lacroix et al. (2018) and Kazemi & Poole (2018), we include reciprocal predicates: for each predicate P in the original dataset, and given an item o, each query of the form (?, P, o) is reformulated as a query (o, P^{-1}, ?), where o is now the subject of P^{-1}. This doubles the effective number of predicates but reduces the problem to queries of the type (subject, predicate, ?) only. For a given triple (i, j, k) ∈ S, the training loss function for a tensor X is then

ℓ_{i,j,k}(X) = −X_{i,j,k} + log( Σ_{k′ ≠ k} exp(X_{i,j,k′}) ).   (1)

For a tensor decomposition model X(θ) parameterized by θ, the parameters θ̂ are found by minimizing the regularized empirical risk with regularizer Λ:

θ̂ = argmin_θ L(θ), where L(θ) = (1/|S|) Σ_{(i,j,k) ∈ S} ℓ_{i,j,k}(X(θ)) + ν Λ(θ).   (2)

This work studies specific models for X(θ), inspired by the CP and Tucker decompositions. We discuss the related work on tensor decompositions and link prediction in knowledge bases below. 2.2 RELATED WORK. 2.2.1 CANONICAL DECOMPOSITION AND ITS VARIANTS. The canonical polyadic (CP) decomposition of a tensor X is defined entrywise by

∀ i, j, k:  X_{i,j,k} = ⟨u_i, v_j, w_k⟩ := Σ_{r=1}^{d} u_{i,r} v_{j,r} w_{k,r}.

The smallest value of d for which this decomposition exists is the rank of X.
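As a concrete reference for this setup, here is a minimal PyTorch sketch of CP scoring trained with the loss of equation (1); the sizes, the rank, and the choice of the weighted nuclear 3-norm of Lacroix et al. (2018) for Λ are illustrative assumptions rather than the paper's exact configuration.

```python
import torch

# Minimal sketch of CP scoring and the training loss of equation (1).
N, L, d = 1000, 20, 64                         # entities, predicates, rank (illustrative)
U = torch.randn(N, d, requires_grad=True)      # subject embeddings u_i
V = torch.randn(2 * L, d, requires_grad=True)  # predicates and their reciprocals
W = torch.randn(N, d, requires_grad=True)      # object embeddings w_k

def cp_loss(i, j, k, nu=1e-2):
    """i, j, k: LongTensors of subject, predicate and object indices."""
    logits = (U[i] * V[j]) @ W.t()             # X_{i,j,k'} for every candidate object k'
    pos = logits.gather(1, k[:, None]).squeeze(1)
    # log-sum-exp over k' != k, as in equation (1)
    lse = torch.logsumexp(logits.scatter(1, k[:, None], float('-inf')), dim=1)
    fit = (-pos + lse).mean()
    # Nuclear 3-norm surrogate of Lacroix et al. (2018) as the regularizer.
    reg = nu * (U[i].abs().pow(3).sum() + V[j].abs().pow(3).sum()
                + W[k].abs().pow(3).sum()) / i.numel()
    return fit + reg

loss = cp_loss(torch.tensor([0, 1]), torch.tensor([3, 4]), torch.tensor([5, 6]))
loss.backward()  # gradients flow to U, V, W; plug into Adagrad or Adam
```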
Each element X_{i,j,k} is thus represented as a multilinear product of the three embeddings in R^d associated respectively with the i-th subject, the j-th predicate and the k-th object. CP currently achieves near state-of-the-art performances on standard benchmarks of knowledge base completion (Kazemi & Poole, 2018; Lacroix et al., 2018). Nonetheless, the best reported results are with the ComplEx model (Trouillon et al., 2016), which learns complex-valued embeddings and sets the embeddings of the objects to be the complex conjugates of the embeddings of the subjects, i.e., w_k = ū_k. Prior to ComplEx, DistMult was proposed (Yang et al., 2014) as a variant of CP with w_k = u_k. While this model obtained good performances (Kadlec et al., 2017), it can only model symmetric relations and does not perform as well as ComplEx. CP-based models are optimized with vanilla Adam or Adagrad and a single regularization parameter (Trouillon et al., 2016; Kadlec et al., 2017; Lacroix et al., 2018) and do not require additional heuristics for training. 2.2.2 TUCKER DECOMPOSITION AND ITS VARIANTS. Given a tensor X of size N×L×N, the Tucker decomposition of X is defined entrywise by

∀ i, j, k:  X_{i,j,k} = ⟨u_i ⊗ v_j ⊗ w_k, C⟩ := Σ_{r1=1}^{d1} Σ_{r2=1}^{d2} Σ_{r3=1}^{d3} C_{r1,r2,r3} u_{i,r1} v_{j,r2} w_{k,r3}.

The triple (d1, d2, d3) contains the rank parameters of the decomposition. We also use the multilinear product notation X = [[C; U, V, W]], where U, V, W are the matrices whose rows are respectively u_i, v_j, w_k, and C is the three-dimensional d1 × d2 × d3 core tensor. Note that the CP decomposition is a Tucker decomposition in which d1 = d2 = d3 = d and C is the identity, which we write [[U, V, W]]. With a non-trivial core tensor, Tucker decomposition is thus more flexible than CP for a fixed embedding size. In knowledge base applications, we typically have d ≤ L ≪ N, so the vast majority of the model parameters are in the embedding matrices of the entities, U and W. When constraints on the number of model parameters arise (e.g., memory constraints), Tucker models appear as natural candidates to increase the expressivity of the decomposition compared to CP with limited impact on the total number of parameters. While many variants of the Tucker decomposition have been proposed in the literature on tensor factorization (see e.g., Kolda & Bader, 2009), the first approach based on Tucker for link prediction in knowledge bases is RESCAL (Nickel et al., 2011). RESCAL uses a special form of Tucker decomposition in which the object and subject embeddings are shared, i.e., U = W, and it does not compress the relation matrices. In the multilinear product notation above, a RESCAL model is thus written as X = [[C; U, I, U]]. Despite some success on a few smaller datasets, RESCAL's performance drops on larger datasets (Nickel et al., 2016b). This decrease in performance has been attributed either to improper regularization (Nickel et al., 2011) or to optimization issues (Xue et al., 2018). Balažević et al. (2019b) revisit Tucker decomposition in the context of large-scale knowledge bases and resolve some of the optimization and regularization issues using learning-rate annealing, batch normalization and dropout.
This comes at the price of more hyperparameters to tune for each dataset (label smoothing, three different dropout rates and a learning-rate decay), and as we discuss in our experiments, the results they report are not better than those of ComplEx for the same number of parameters. Two methods were previously proposed to interpolate between the expressivity of RESCAL and CP. Xue et al. (2018) expand the HolE model (Nickel et al., 2016b) (and thus the ComplEx model (Hayashi & Shimbo, 2017)) based on the cross-correlation of embeddings, to close the gap in expressivity with the Tucker decomposition for a fixed embedding size. Jenatton et al. (2012) express the relation matrices in RESCAL as low-rank combinations of a family of matrices. We describe the link between these approaches and ours in Appendix 9.4. None of these approaches, however, studied the effect of their formulation on optimization, and they reported results inferior to ours.
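To make the decompositions above concrete, the sketch below scores a batch of (subject, predicate) queries against all objects under a Tucker model X = [[C; U, V, W]]; the shapes are illustrative assumptions. Setting d1 = d2 = d3 with an identity core recovers CP, while sharing U and W with an uncompressed relation mode corresponds to RESCAL.

```python
import torch

# Sketch of Tucker scoring X = [[C; U, V, W]] for a batch of (subject,
# predicate) queries against all candidate objects. Ranks are illustrative.
N, L = 1000, 20
d1, d2, d3 = 64, 32, 64
U = torch.randn(N, d1)        # subject embeddings
V = torch.randn(2 * L, d2)    # predicates and reciprocals
W = torch.randn(N, d3)        # object embeddings
C = torch.randn(d1, d2, d3)   # core tensor

def tucker_scores(i, j):
    """Returns (batch, N) with entries sum_{r1,r2,r3} C[r1,r2,r3] U[i,r1] V[j,r2] W[k,r3]."""
    core = torch.einsum('br,bs,rst->bt', U[i], V[j], C)  # contract subject and predicate
    return core @ W.t()                                  # inner product with every object

scores = tucker_scores(torch.tensor([0, 1]), torch.tensor([2, 3]))  # shape (2, N)
```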
The paper introduces a novel tensor decomposition reminiscent of the canonical decomposition (CP) with low-rank factors, based on the observation that the core tensor in the Tucker decomposition can itself be decomposed, resulting in a model interpolating between CP and Tucker. The authors argue that a straightforward application of AdaGrad to this decomposition is inadequate, and propose the Ada^{imp} algorithm, which enforces rotation invariance of the gradient update. The new decomposition is applied to the ComplEx model (yielding PComplEx) and demonstrates better performance than the baseline.
SP:8bdeb36997d6699e48511d9abac87df8c14bd087
Robust And Interpretable Blind Image Denoising Via Bias-Free Convolutional Neural Networks
1 INTRODUCTION AND CONTRIBUTIONS. The problem of denoising consists of recovering a signal from measurements corrupted by noise, and is a canonical application of statistical estimation that has been studied since the 1950s. Achieving high-quality denoising results requires (at least implicitly) quantifying and exploiting the differences between signals and noise. In the case of photographic images, the denoising problem is both an important application and a useful test-bed for our understanding of natural images. In the past decade, convolutional neural networks (LeCun et al., 2015) have achieved state-of-the-art results in image denoising (Zhang et al., 2017; Chen & Pock, 2017). Despite their success, these solutions are mysterious: we lack both intuition and formal understanding of the mechanisms they implement. Network architectures and functional units are often borrowed from the image-recognition literature, and it is unclear which of these aspects contributes to, or limits, the denoising performance. The goal of this work is to advance our understanding of deep-learning models for denoising. Our contributions are twofold: first, we study the generalization capabilities of deep-learning models across different noise levels; second, we provide novel tools for analyzing the mechanisms implemented by neural networks to denoise natural images. An important advantage of deep-learning techniques over traditional methodology is that a single neural network can be trained to perform denoising at a wide range of noise levels. Currently, this is achieved by simulating the whole range of noise levels during training (Zhang et al., 2017). Here, we show that this is not necessary. Neural networks can be made to generalize automatically across noise levels through a simple modification of the architecture: removing all additive constants. We find this holds for a variety of network architectures proposed in the previous literature. We provide extensive empirical evidence that the main state-of-the-art denoising architectures systematically overfit to the noise levels in the training set, and that this is due to the presence of a net bias. Suppressing this bias makes it possible to attain state-of-the-art performance while training over a very limited range of noise levels. The data-driven mechanisms implemented by deep neural networks to perform denoising are almost completely unknown. It is unclear what priors are being learned by the models, and how they are affected by the choice of architecture and training strategies. Here, we provide novel linear-algebraic tools to visualize and interpret these strategies through a local analysis of the Jacobian of the denoising map. The analysis reveals locally adaptive properties of the learned models, akin to existing nonlinear filtering algorithms. In addition, we show that the deep networks implicitly perform a projection onto an adaptively-selected low-dimensional subspace capturing features of natural images. 2 RELATED WORK. The classical solution to the denoising problem is the Wiener filter (Wiener, 1950), which assumes a translation-invariant Gaussian signal model. The main limitation of Wiener filtering is that it over-smoothes, eliminating fine-scale details and textures. Modern filtering approaches address this issue by adapting the filters to the local structure of the noisy image (e.g., Tomasi & Manduchi (1998); Milanfar (2012)).
Here we show that neural networks implement such strategies implicitly, learning them directly from the data. In the 1990s, powerful denoising techniques were developed based on multi-scale ("wavelet") transforms. These transforms map natural images to a domain where they have sparser representations. This makes it possible to perform denoising by applying nonlinear thresholding operations in order to discard components that are small relative to the noise level (Donoho & Johnstone, 1995; Simoncelli & Adelson, 1996; Chang et al., 2000). From a linear-algebraic perspective, these algorithms operate by projecting the noisy input onto a lower-dimensional subspace that contains plausible signal content. The projection eliminates the orthogonal complement of the subspace, which mostly contains noise. This general methodology laid the foundations for the state-of-the-art models of the 2000s (e.g., Dabov et al., 2006), some of which added a data-driven perspective, learning sparsifying transforms (Elad & Aharon, 2006) and nonlinear shrinkage functions (Hel-Or & Shaked, 2008; Raphan & Simoncelli, 2008) directly from natural images. Here, we show that deep-learning models learn similar priors in the form of local linear subspaces capturing image features. In the past decade, purely data-driven models based on convolutional neural networks (LeCun et al., 2015) have come to dominate all previous methods in terms of performance. These models consist of cascades of convolutional filters and rectifying nonlinearities, which are capable of representing a diverse and powerful set of functions. Training such architectures to minimize mean squared error over large databases of noisy natural-image patches achieves current state-of-the-art results (Zhang et al., 2017; Huang et al., 2017; Ronneberger et al., 2015; Zhang et al., 2018a).
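As a pointer back to the classical shrinkage estimators mentioned above, here is a minimal sketch of soft thresholding of transform coefficients; the threshold value is an illustrative assumption (Donoho & Johnstone-style rules tie it to the noise level).

```python
import numpy as np

def soft_threshold(coeffs, tau):
    """Wavelet-style shrinkage: discard coefficients below tau (mostly noise)
    and shrink the rest toward zero."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - tau, 0.0)

# Small (noise-dominated) coefficients are zeroed, large ones survive shrunk.
c = np.array([0.1, -0.3, 2.0, -5.0])
print(soft_threshold(c, tau=0.5))  # [ 0. , -0. ,  1.5, -4.5]
```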
For a fixed noisy input image y ∈ R^N with N pixels, the function f: R^N → R^N computed by a denoising neural network may be written

f(y) = W_L R( W_{L−1} … R( W_1 y + b_1 ) + … + b_{L−1} ) + b_L = A_y y + b_y,   (1)

where A_y ∈ R^{N×N} is the Jacobian of f(·) evaluated at input y, and b_y ∈ R^N represents the net bias. The subscripts on A_y and b_y serve as a reminder that both depend on the ReLU activation patterns, which in turn depend on the input vector y. Based on equation 1 we can perform a first-order decomposition of the error, or residual, of the neural network for a specific input: y − f(y) = (I − A_y) y − b_y. Figure 1 shows the magnitude of the residual and of the constant term, which is equal to the net bias b_y, for a range of noise levels. Over the training range, the net bias is small, implying that the linear term is responsible for most of the denoising (see Figures 9 and 10 for a visualization of both components). However, when the network is evaluated at noise levels outside of the training range, the norm of the bias increases dramatically, and the residual is significantly smaller than the noise, suggesting a form of overfitting. Indeed, network performance generalizes very poorly to noise levels outside the training range. This is illustrated for an example image in Figure 2, and demonstrated through extensive experiments in Section 5. 4 PROPOSED METHODOLOGY: BIAS-FREE NETWORKS. Section 3 shows that CNNs overfit to the noise levels present in the training set, and that this is associated with wild fluctuations of the net bias b_y. This suggests that the overfitting might be ameliorated by removing additive (bias) terms from every stage of the network, resulting in a bias-free CNN (BF-CNN). Note that bias terms are also removed from the batch normalization used during training. This simple change in the architecture has an interesting consequence. If the CNN has ReLU activations, the denoising map is locally homogeneous, and consequently invariant to scaling: rescaling the input by a constant value simply rescales the output by the same amount, just as it would for a linear system. Lemma 1. Let f_BF: R^N → R^N be a feedforward neural network with ReLU activation functions and no additive constant terms in any layer. For any input y ∈ R^N and any nonnegative constant α,

f_BF(αy) = α f_BF(y).   (2)

Proof. We can write the action of a bias-free neural network with L layers in terms of the weight matrix W_i, 1 ≤ i ≤ L, of each layer and a rectifying operator R, which sets to zero any negative entries of its input. Multiplying by a nonnegative constant does not change the signs of the entries of a vector, so for any z of the right dimension and any α ≥ 0, R(αz) = αR(z), which implies

f_BF(αy) = W_L R( W_{L−1} ⋯ R( W_1 αy ) ) = α W_L R( W_{L−1} ⋯ R( W_1 y ) ) = α f_BF(y).   (3)

Note that networks with a nonzero net bias are not scaling-invariant, because scaling the input may change the activation pattern of the ReLUs. Scaling invariance is intuitively desirable for a denoising method operating on natural images; a rescaled image is still an image. Note that Lemma 1 holds for networks with skip connections where the feature maps are concatenated or added, because both of these operations are linear.
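Lemma 1 is easy to check numerically; the snippet below does so for a small bias-free ReLU network (a toy stand-in for a BF-CNN, not the authors' architecture).

```python
import torch
import torch.nn as nn

# Numerical check of Lemma 1: with no additive constants, a ReLU network is
# positively homogeneous, f(alpha * y) = alpha * f(y).
torch.manual_seed(0)
net = nn.Sequential(
    nn.Linear(16, 32, bias=False), nn.ReLU(),
    nn.Linear(32, 32, bias=False), nn.ReLU(),
    nn.Linear(32, 16, bias=False),
)
y, alpha = torch.randn(4, 16), 2.7
print(torch.allclose(net(alpha * y), alpha * net(y), atol=1e-5))  # True

# With the default biases enabled, the same check fails:
net_b = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 16))
print(torch.allclose(net_b(alpha * y), alpha * net_b(y), atol=1e-5))  # False
```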
In the following sections we demonstrate that removing all additive terms in CNN architectures has two important consequences: (1) the networks gain the ability to generalize to noise levels not encountered during training (as illustrated by Figure 2, the improvement is striking), and (2) the denoising mechanism can be analyzed locally via linear-algebraic tools that reveal intriguing ties to more traditional denoising methodology such as nonlinear filtering and sparsity-based techniques. 5 BIAS-FREE NETWORKS GENERALIZE ACROSS NOISE LEVELS. In order to evaluate the effect of removing the net bias in denoising CNNs, we compare several state-of-the-art architectures to their bias-free counterparts, which are exactly the same except for the absence of any additive constants within the networks (note that this includes the batch-normalization additive parameter). These architectures include popular features of existing neural-network techniques in image processing: recurrence, multiscale filters, and skip connections. More specifically, we examine the following models (see Section A for additional details):
• DnCNN (Zhang et al., 2017): a feedforward CNN with 20 convolutional layers, each consisting of 3 × 3 filters, 64 channels, batch normalization (Ioffe & Szegedy, 2015), a ReLU nonlinearity, and a skip connection from the initial layer to the final layer.
• Recurrent CNN: a recurrent architecture inspired by Zhang et al. (2018a) whose basic module is a CNN with 5 layers, 3 × 3 filters and 64 channels in the intermediate layers. The order of the recurrence is 4.
• UNet (Ronneberger et al., 2015): a multiscale architecture with 9 convolutional layers and skip connections between the different scales.
• Simplified DenseNet: a CNN with skip connections inspired by the DenseNet architecture (Huang et al., 2017; Zhang et al., 2018b).
We train each network to denoise images corrupted by i.i.d. Gaussian noise over a range of standard deviations (the training range of the network). We then evaluate the network at noise levels that are both within and beyond the training range. Our experiments are carried out on 180 × 180 natural images from the Berkeley Segmentation Dataset (Martin et al., 2001) to be consistent with previous results (Schmidt & Roth, 2014; Chen & Pock, 2017; Zhang et al., 2017). Additional details about the dataset and training procedure are provided in Section B. Figures 3, 11 and 12 show our results. For a wide range of different training ranges, and for all architectures, we observe the same phenomenon: the performance of CNNs is good over the training range, but degrades dramatically at new noise levels; in stark contrast, the corresponding BF-CNNs provide strong denoising performance at noise levels outside the training range. This holds for both PSNR and the more perceptually meaningful Structural Similarity Index (Wang et al., 2004) (see Figure 12). Figure 2 shows an example image, demonstrating visually the striking difference in generalization performance between a CNN and its corresponding BF-CNN. Our results provide strong evidence that removing net bias in CNN architectures results in effective generalization to noise levels outside the training range.
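The bias-free variants are obtained mechanically; the sketch below shows the idea for a DnCNN-style stack (a simplified sketch that omits the batch normalization and skip connection of the full model; in the paper's BF-CNNs the batch-normalization additive parameter is removed as well).

```python
import torch.nn as nn

# Simplified sketch of a bias-free DnCNN-style denoiser: every convolution is
# created with bias=False, so the network contains no additive constants.
def make_bf_dncnn(depth=20, channels=64, image_channels=1):
    layers = [nn.Conv2d(image_channels, channels, 3, padding=1, bias=False),
              nn.ReLU(inplace=True)]
    for _ in range(depth - 2):
        layers += [nn.Conv2d(channels, channels, 3, padding=1, bias=False),
                   nn.ReLU(inplace=True)]
    layers.append(nn.Conv2d(channels, image_channels, 3, padding=1, bias=False))
    return nn.Sequential(*layers)
```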
This paper proposes to remove all bias terms in denoising networks to avoid overfitting when the test noise level differs from those seen in training. Through its analysis, the paper concludes that the dimensions of the subspaces of image features adapt to the noise level. An interesting result is that the MSE is proportional to sigma instead of sigma^2 when using bias-free networks, which provides some theoretical evidence of the advantage of using BF-CNN.
SP:62a75399aa97a61432385cf1dffabb674741a18a
Robust And Interpretable Blind Image Denoising Via Bias-Free Convolutional Neural Networks
This paper looks at how deep convolutional neural networks for image denoising can generalize across various noise levels. First, they argue that state-of-the-art denoising networks perform poorly outside of the training noise range. The authors empirically show that as denoising performance degrades on unseen noise levels, the network residual for a specific input is increasingly dominated by the network bias (as opposed to the purely linear Jacobian term). Therefore, they propose using bias-free convolutional neural networks for better generalization performance in image denoising. Their experimental results show that bias-free denoisers significantly outperform their original counterparts on unseen noise levels across various popular architectures. Then, they perform a local analysis of the bias-free network around an input image, where the network output is now a strictly linear function of the input. They empirically demonstrate that the Jacobian is approximately low-rank and symmetric; the effect of the denoiser can therefore be interpreted as a nonlinear adaptive filter that projects the noisy image onto a low-dimensional signal subspace. The authors show that most of the energy of the clean image falls into the signal subspace and that the effective dimensionality of this subspace is inversely proportional to the noise level.
SP:62a75399aa97a61432385cf1dffabb674741a18a
Pay Attention to Features, Transfer Learn Faster CNNs
1 Introduction. Despite recent successes of CNNs achieving state-of-the-art performance in vision applications (Tan & Le, 2019; Cai & Vasconcelos, 2018; Zhao et al., 2018; Ren et al., 2015), there are two major shortcomings limiting their deployment in real life. First, training CNNs from random initializations to achieve high task accuracy generally requires a large amount of data that is expensive to collect. Second, CNNs are typically compute-intensive and memory-demanding, hindering their adoption in power-limited scenarios. To address the former challenge, transfer learning (Pan & Yang, 2009) is designed to transfer knowledge learned from a source task to a target dataset that has limited data samples. In practice, we often choose a source dataset such that the input domain of the source comprises the domain of the target. A common paradigm for transfer learning is to train a model on a large source dataset, and then fine-tune the pre-trained weights with regularization methods on the target dataset (Zagoruyko & Komodakis, 2017; Yim et al., 2017; Li et al., 2018; Li & Hoiem, 2018; Li et al., 2019). For example, one regularization method, L2-SP (Li et al., 2018), penalizes the L2-distances between the weights pre-trained on the source dataset and the weights being trained on the target dataset. The pre-trained source weights serve as a starting point when training on the target data. During fine-tuning on the target dataset, the regularization constrains the search space around this starting point, which in turn prevents overfitting to the target dataset. Intuitively, the responsibility of transfer learning is to preserve the source knowledge acquired by important neurons. The neurons thereby retain their abilities to extract features from the source domain, and contribute to the network's performance on the target dataset. Moreover, by determining the importance of neurons, unimportant ones can further be removed from computation during inference with network pruning methods (Luo et al., 2017; He et al., 2017; Zhuang et al., 2018; Ye et al., 2018; Gao et al., 2019). The removal of unnecessary compute not only makes CNNs smaller in size but also reduces computational costs while minimizing possible accuracy degradation. As the source domain encompasses the target, many neurons responsible for extracting features from the source domain may become irrelevant to the target domain and can be removed. In Figure 1, a simple empirical study of the channel neurons' activation magnitudes corroborates our intuition: as deeper layers extract higher-level features, more neurons become either specialized or irrelevant to dogs. The discussion above hence prompts two questions regarding the neurons: which neurons should we transfer source knowledge to, and which are actually important to the target model? Yet traditional transfer learning methods fail to provide answers to either, as generally they transfer knowledge either equally for each neuron with the same regularization weight, or determine the strength of regularization using only the source dataset (Li et al., 2018). The source domain could be vastly larger than the target, giving importance to weights that are irrelevant to the target task.
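For reference, here is a minimal PyTorch sketch of the L2-SP penalty described above; the regularization strength alpha is an illustrative assumption.

```python
import torch

def l2_sp_penalty(model, pretrained_state, alpha=0.01):
    """Pull fine-tuned weights toward their pre-trained starting point
    (Li et al., 2018), rather than toward zero as in ordinary weight decay."""
    penalty = torch.zeros(())
    for name, w in model.named_parameters():
        w0 = pretrained_state[name]        # parameter value at the starting point
        penalty = penalty + (w - w0).pow(2).sum()
    return alpha * penalty

# During fine-tuning: loss = task_loss + l2_sp_penalty(model, source_weights)
```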
Recent years have seen a surge of interest in network pruning techniques, many of which induce sparsity by pushing neuron weights or outputs to zero, allowing them to be pruned without a detrimental impact on task accuracy. Even though pruning methods offer a measure of neuron/weight importance, unfortunately they do not answer the latter question, i.e., whether these neurons/weights are important to the target dataset. The reason for this is that pruning optimization objectives are often in conflict with traditional transfer learning, as the two drive weight values in different directions: zero for pruning and the initial starting point for transfer learning. As we will see later, a naïve composition of the two methods can have a disastrous impact on the accuracy of a pruned CNN transfer-learned on the target dataset. In this paper, to tackle the challenge of jointly transferring source knowledge and pruning target CNNs, we propose a new method based on the attention mechanism (Vaswani et al., 2017): attentive feature distillation and selection (AFDS). For the images in the target dataset, AFDS dynamically learns not only the features to transfer, but also the unimportant neurons to skip. During transfer learning, instead of fine-tuning with L2-SP regularization, which explores the proximity of the pre-trained weights, we argue that a better alternative is to mimic the feature maps, i.e., the output responses of each convolutional layer in the source model when images from the target dataset are shown, with L2-distances. This way the fine-tuned model can still learn the behavior of the source model. Additionally, without the restriction of searching only the proximity of the initial position, the weights in the target model can be optimized freely, thus increasing their generalization capacity. We therefore present attentive feature distillation (AFD) to learn which relevant features to transfer. To accelerate the transfer-learned model, we further propose attentive feature selection (AFS) to prune networks dynamically. AFS learns to predictively select important output channels of a convolution to evaluate, and to skip unimportant ones, depending on the input to the convolution. Rarely activated channel neurons can further be removed from the network, reducing the model's memory footprint. From an informal perspective, both AFD and AFS learn to adjust the "valves" that control the flow of information for each channel neuron. The former adjusts the strength of regularization, thereby tuning the flow of knowledge being transferred from the source model. The latter allows salient information to pass on to the subsequent layer and stops the flow of unimportant information. A significant attribute that differentiates AFD and AFS from their existing counterparts is that we employ attention mechanisms, implemented as small trainable auxiliary networks, to adaptively learn to "turn the valves" dynamically. Our main contributions are as follows:
• We present attentive feature distillation and selection (AFDS) to effectively transfer-learn CNNs, and demonstrate state-of-the-art performance on many publicly available datasets with ResNet-101 (He et al., 2016) models transfer-learned from ImageNet (Deng et al., 2009).
• We paired a large range of existing transfer learning and network pruning methods, and examined their abilities to trade off FLOPs against task accuracy.
• By changing the fraction of channel neurons to skip in each convolution, AFDS can further accelerate the transfer-learned models while minimizing the impact on task accuracy. We found that AFDS generally provides the best FLOPs/accuracy trade-off when compared to a broad range of paired methods.
2 Related Work. 2.1 Transfer Learning. Training a deep CNN to achieve high accuracy generally requires a large amount of training data, which may be expensive to collect. Transfer learning (Pan & Yang, 2009) addresses this challenge by transferring knowledge learned on a large dataset that has a similar domain to the training dataset. A typical approach for CNNs is to first train the model on a large source dataset, and then make use of its feature extraction abilities (Donahue et al., 2014; Razavian et al., 2014). Moreover, it has been demonstrated that task accuracy can be further improved by fine-tuning the resulting pre-trained model on a smaller target dataset with a similar domain but a different task (Yosinski et al., 2014; Azizpour et al., 2015). Li et al. (2018) proposed L2-SP regularization to minimize the L2-distance between each fine-tuned parameter and its initial pre-trained value, thus preserving knowledge learned in the pre-trained model. In addition, they presented L2-SP-Fisher, which further weighs each L2-distance using a Fisher information matrix estimated from the source dataset. Instead of constraining the parameter search space, Li et al. (2019) showed that it is often more effective to regularize feature maps during fine-tuning, and further learned which features to pay attention to. Learning without Forgetting (Li & Hoiem, 2018) learns to adapt the model to new tasks, while trying to match the output response of the original model on the original task using knowledge distillation (KD) (Hinton et al., 2014). Methods proposed by Zagoruyko & Komodakis (2017) and Yim et al. (2017) transfer knowledge from a teacher model to a student by regularizing features. The former computes and regularizes spatial statistics across all feature-map channels, whereas the latter estimates the flow of information across layers for each pair of channels, and transfers this knowledge to the student. Instead of manually deciding the regularization penalties and what to regularize as in the previous approaches, Jang et al. (2019) used meta-learning to automatically learn what knowledge to transfer from the teacher and to where in the student model. Inspired by Li et al. (2019) and Jang et al. (2019), this paper introduces attentive feature distillation (AFD), which similarly transfers knowledge by learning from the teacher's feature maps. It differs from Jang et al. (2019), however, in that the teacher and student models share the same network topology, and it instead learns which channels to transfer from the teacher to the student within the same convolutional output. 2.2 Structured Sparsity. Sparsity in neural networks has been a long-studied subject (Reed, 1993; LeCun et al., 1990; Chauvin, 1989; Mozer & Smolensky, 1989; Hassibi et al., 1994). Related techniques have been applied to modern deep CNNs with great success (Guo et al., 2016; Dong et al., 2017a), significantly lowering their storage requirements. In general, however, these methods zero out individual weights, producing irregular sparse connections that cannot be efficiently exploited by GPUs to speed up computation.
For this reason, many recent works have turned their attention to structured sparsity (Alvarez & Salzmann, 2016; Wen et al., 2016; Liu et al., 2017; He et al., 2017; 2018). This approach aims to find coarse-grained sparsity and preserves dense structures, thus allowing conventional GPUs to compute them efficiently. Alvarez & Salzmann (2016) and Wen et al. (2016) both added a group Lasso penalty on non-zero weights, and entirely removed channels that had been reduced to zero. Liu et al. (2017) proposed network slimming (NS), which adds L1 regularization to the trainable channel-wise scaling parameters γ used in batch normalization, and gradually prunes channels with small γ values by thresholding. He et al. (2018) introduced soft filter pruning (SFP), which iteratively fine-tunes and sets channels with small L2-norms to zero. Pruning algorithms remove weights or neurons from the network. The network may therefore lose its ability to process some difficult inputs correctly, as the neurons responsible for them are permanently discarded. Gao et al. (2019) found empirically that task accuracy degrades considerably when most of the computation is removed from the network, and introduced feature boosting and suppression (FBS). Instead of removing neurons permanently from the network, FBS learns to dynamically prune unimportant channels, depending on the current input image. In this paper, attentive feature selection (AFS) builds on the advantages of both static and dynamic pruning algorithms. AFS not only preserves neurons that are important to some input images, but also removes from the network the ones unimportant to most inputs, reducing both the memory and compute requirements for inference. There are also methods that dynamically select which paths to evaluate in a network depending on the input (Figurnov et al., 2017; Dong et al., 2017b; Bolukbasi et al., 2017; Lin et al., 2017; Shazeer et al., 2017; Wu et al., 2018; Ren et al., 2018). They however introduce architectural and/or training-method changes, and thus cannot be applied directly to existing popular models pre-trained on ImageNet (Deng et al., 2009).
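As a rough illustration of the dynamic channel selection that AFS builds on, here is a hedged sketch in the spirit of FBS-style gating; the auxiliary head design and the kept fraction are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class AttentiveChannelSelect(nn.Module):
    """Toy dynamic channel selection: a small auxiliary head predicts a
    per-channel saliency from the layer input, keeps the top fraction of
    output channels, and zeroes (skips) the rest."""
    def __init__(self, in_ch, out_ch, keep_ratio=0.5):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.saliency = nn.Linear(in_ch, out_ch)   # auxiliary predictor
        self.k = max(1, int(out_ch * keep_ratio))

    def forward(self, x):
        s = self.saliency(x.mean(dim=(2, 3)))      # (batch, out_ch) from a global summary
        thresh = torch.topk(s, self.k, dim=1).values[:, -1:]  # per-example cut-off
        mask = (s >= thresh).float() * torch.relu(s)          # gate and rescale survivors
        return self.conv(x) * mask[:, :, None, None]

layer = AttentiveChannelSelect(16, 32)
out = layer(torch.randn(2, 16, 8, 8))              # (2, 32, 8, 8), half the channels active
```

Channels whose gates rarely open across the dataset can then be removed outright, which is how the dynamic scheme also yields a static memory saving.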
This paper proposes a method called attentive feature distillation and selection (AFDS) to improve the performance of transfer learning for CNNs. The authors argue that regularization should constrain the proximity of feature maps, rather than that of the pre-trained model weights. Specifically, the authors propose two modifications of the loss functions: 1) attentive feature distillation (AFD), which modifies the regularization term to learn different weights for each channel, and 2) attentive feature selection (AFS), which modifies the ConvBN layers by predicting unimportant channels and suppressing them.
SP:35407fdffbf982a97312ef16673be781d593ff22
Pay Attention to Features, Transfer Learn Faster CNNs
1 Introduction . Despite recent successes of CNNs achieving state-of-the-art performance in vision applications ( Tan & Le , 2019 ; Cai & Vasconcelos , 2018 ; Zhao et al. , 2018 ; Ren et al. , 2015 ) , there are two major shortcomings limiting their deployments in real life . First , training CNNs from random initializations to achieve high task accuracy generally requires a large amount of data that is expensive to collect . Second , CNNs are typically compute-intensive and memory-demanding , hindering their adoption to power-limited scenarios . To address the former challenge , transfer learning ( Pan & Yang , 2009 ) is thus designed to transfer knowledge learned from the source task to a target dataset that has limited data samples . In practice , we often choose a source dataset such that the input domain of the source comprises the domain of the target . A common paradigm for transfer learning is to train a model on a large source dataset , and then fine-tune the pre-trained weights with regularization methods on the target dataset ( Zagoruyko & Komodakis , 2017 ; Yim et al. , 2017 ; Li et al. , 2018 ; Li & Hoiem , 2018 ; Li et al. , 2019 ) . For example , one regularization method , L2-SP ( Li et al. , 2018 ) , penalizes the L2-distances of pretrained weights on the source dataset and the weights being trained on the target dataset . The pretrained source weights serves as a starting point when training on the target data . During fine-tuning on the target dataset , the regularization constrains the search space around this starting point , which in turn prevents overfitting the target dataset . Intuitively , the responsibility of transfer learning is to preserve the source knowledge acquired by important neurons . The neurons thereby retain their abilities to extract features from the source domain , and contribute to the network ’ s performance on the target dataset . ∗Equal contribution , corresponding authors . †Work partially done during an internship at Baidu Research . Moreover , by determining the importance of neurons , unimportant ones can further be removed from computation during inference with network pruning methods ( Luo et al. , 2017 ; He et al. , 2017 ; Zhuang et al. , 2018 ; Ye et al. , 2018 ; Gao et al. , 2019 ) . The removal of unnecessary compute not only makes CNNs smaller in size but also reduces computational costs while minimizing possible accuracy degradations . As the source domain encompasses the target , many neurons responsible for extracting features from the source domain may become irrelevant to the target domain and can be removed . In Figure 1 , a simple empirical study of the channel neurons ’ activation magnitudes corroborates our intuition : as deeper layers extract higher-level features , more neurons become either specialized or irrelevant to dogs . The discussion above hence prompts two questions regarding the neurons : which neurons should we transfer source knowledge to , and which are actually important to the target model ? Yet traditional transfer learning methods fail to provide answers to both , as generally they transfer knowledge either equally for each neuron with the same regularized weights , or determine the strength of regularization using only the source dataset ( Li et al. , 2018 ) . The source domain could be vastly larger than the target , giving importance to weights that are irrelevant to the target task . 
Recent years have seen a surge of interest in network pruning techniques , many of which induce sparsity by pushing neuron weights or outputs to zeros , allowing them to be pruned without a detrimental impact on the task accuracies . Even though pruning methods present a solution to neuron/weight importance , unfortunately they do not provide an answer to the latter question , i.e . whether these neurons/weights are important to the target dataset . The reason for this is that pruning optimization objectives are often in conflict with traditional transfer learning , as both drive weight values in different directions : zero for pruning and the initial starting point for transfer learning . As we will see later , a näıve composition of the two methods could have a disastrous impact on the accuracy of a pruned CNN transferlearned on the target dataset . In this paper , to tackle the challenge of jointly transferring source knowledge and pruning target CNNs , we propose a new method based on attention mechanism ( Vaswani et al. , 2017 ) , attentive feature distillation and selection ( AFDS ) . For the images in the target dataset , AFDS dynamically learns not only the features to transfer , but also the unimportant neurons to skip . During transfer learning , instead of fine-tuning with L2-SP regularization which explores the proximity of the pre-trained weights , we argue that a better alternative is to mimic the feature maps , i.e . the output response of each convolutional layer in the source model when images from the target dataset are shown , with L2-distances . This way the fine-tuned model can still learn the behavior of the source model . Additionally , without the restriction of searching only the proximity of the initial position , the weights in the target model can be optimized freely and thus increasing their generalization capacity . Therefore , we present attentive feature distillation ( AFD ) to learn which relevant features to transfer . To accelerate the transfer-learned model , we further propose attentive feature selection ( AFS ) to prune networks dynamically . AFS is designed to learn to predictively select important output channels in the convolution to evaluate and skip unimportant ones , depending on the input to the convolution . Rarely activated channel neurons can further be removed from the network , reducing the model ’ s memory footprint . From an informal perspective , both AFD and AFS learn to adjust the “ valves ” that control the flow of information for each channel neuron . The former adjusts the strength of regularization , thereby tuning the flow of knowledge being transferred from the source model . The latter allows salient information to pass on to the subsequent layer and stops the flow of unimportant information . A significant attribute that differentiates AFD and AFS from their existing counterparts is that we employ attention mechanisms to adaptively learn to “ turn the valves ” dynamically with small trainable auxiliary networks . Our main contributions are as follows : • We present attentive feature distillation and selection ( AFDS ) to effectively transfer learn CNNs , and demonstrate state-of-the-art performance on many publicly available datasets with ResNet-101 ( He et al. , 2016 ) models transfer learned from ImageNet ( Deng et al. , 2009 ) . • We paired a large range of existing transfer learning and network pruning methods , and examined their abilities to trade-off FLOPs with task accuracy . 
• By changing the fraction of channel neurons to skip for each convolution , AFDS can further accelerate the transfer-learned models while minimizing the impact on task accuracy . We found that AFDS generally provides the best FLOPs/accuracy trade-off when compared to a broad range of paired methods . 2 Related Work . 2.1 Transfer Learning . Training a deep CNN to achieve high accuracy generally requires a large amount of training data , which may be expensive to collect . Transfer learning ( Pan & Yang , 2009 ) addresses this challenge by transferring knowledge learned on a large dataset whose domain is similar to that of the training dataset . A typical approach for CNNs is to first train the model on a large source dataset and make use of its feature extraction abilities ( Donahue et al. , 2014 ; Razavian et al. , 2014 ) . Moreover , it has been demonstrated that task accuracy can be further improved by fine-tuning the resulting pre-trained model on a smaller target dataset with a similar domain but a different task ( Yosinski et al. , 2014 ; Azizpour et al. , 2015 ) . Li et al . ( 2018 ) proposed L2-SP regularization to minimize the L2-distance between each fine-tuned parameter and its initial pre-trained value , thus preserving knowledge learned in the pre-trained model . In addition , they presented L2-SP-Fisher , which further weighs each L2-distance using the Fisher information matrix estimated from the source dataset . Instead of constraining the parameter search space , Li et al . ( 2019 ) showed that it is often more effective to regularize feature maps during fine-tuning , and further learned which features to pay attention to . Learning without Forgetting ( Li & Hoiem , 2018 ) adapts the model to new tasks while trying to match the original model ' s output response on the original task using knowledge distillation ( KD ) ( Hinton et al. , 2014 ) . Methods proposed by Zagoruyko & Komodakis ( 2017 ) and Yim et al . ( 2017 ) transfer knowledge from a teacher model to a student by regularizing features : the former computes and regularizes spatial statistics across all feature-map channels , whereas the latter estimates the flow of information across layers for each pair of channels and transfers this knowledge to the student . Instead of manually deciding the regularization penalties and what to regularize as in the previous approaches , Jang et al . ( 2019 ) used meta-learning to automatically learn what knowledge to transfer from the teacher and to where in the student model . Inspired by Li et al . ( 2019 ) and Jang et al . ( 2019 ) , this paper introduces attentive feature distillation ( AFD ) , which similarly transfers knowledge by learning from the teacher ' s feature maps . It differs from Jang et al . ( 2019 ) , however , in that the teacher and student models share the same network topology , and it instead learns which channels to transfer from the teacher to the student within the same convolutional output , as in the sketch below .
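The feature-map mimicking discussed above can be sketched as an L2 loss between the source (teacher) and target (student) responses of a layer; the per-channel weights standing in for AFD's learned attention are an illustrative assumption, and uniform weights recover plain feature-map regularization.

```python
import torch

def feature_distillation_loss(student_feat, teacher_feat, channel_weights=None):
    """L2 distance between student and teacher feature maps [N, C, H, W].
    `channel_weights` (shape [C]) mimics a learned per-channel attention."""
    sq_diff = (student_feat - teacher_feat.detach()).pow(2)
    per_channel = sq_diff.mean(dim=(0, 2, 3))   # average over batch and space
    if channel_weights is None:
        channel_weights = torch.ones_like(per_channel) / per_channel.numel()
    return (channel_weights * per_channel).sum()
```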
2.2 Structured Sparsity . Sparsity in neural networks has been a long-studied subject ( Reed , 1993 ; LeCun et al. , 1990 ; Chauvin , 1989 ; Mozer & Smolensky , 1989 ; Hassibi et al. , 1994 ) . Related techniques have been applied to modern deep CNNs with great success ( Guo et al. , 2016 ; Dong et al. , 2017a ) , significantly lowering their storage requirements . In general , however , these methods zero out individual weights , producing irregular sparse connections that can not be efficiently exploited by GPUs to speed up computation . For this reason , much recent work has turned to structured sparsity ( Alvarez & Salzmann , 2016 ; Wen et al. , 2016 ; Liu et al. , 2017 ; He et al. , 2017 ; 2018 ) . This approach aims to find coarse-grained sparsity and preserve dense structures , thus allowing conventional GPUs to compute them efficiently . Alvarez & Salzmann ( 2016 ) and Wen et al . ( 2016 ) both added a group Lasso to penalize non-zero weights , and entirely removed channels that had been reduced to zero . Liu et al . ( 2017 ) proposed network slimming ( NS ) , which adds L1 regularization to the trainable channel-wise scaling parameters γ used in batch normalization , and gradually prunes channels with small γ values by thresholding . He et al . ( 2018 ) introduced soft filter pruning ( SFP ) , which iteratively fine-tunes and sets channels with small L2-norms to zero . Pruning algorithms remove weights or neurons from the network ; the network may therefore lose its ability to process some difficult inputs correctly , as the neurons responsible for them are permanently discarded . Gao et al . ( 2019 ) found empirically that task accuracy degrades considerably when most of the computation is removed from the network , and introduced feature boosting and suppression ( FBS ) . Instead of removing neurons permanently from the network , FBS learns to dynamically prune unimportant channels depending on the current input image . In this paper , attentive feature selection ( AFS ) builds on the advantages of both static and dynamic pruning algorithms : AFS not only preserves neurons that are important to some input images , but also removes those unimportant for most inputs from the network , reducing both the memory and compute requirements for inference ; a minimal sketch is given below . There are also methods that dynamically select which paths to evaluate in a network depending on the input ( Figurnov et al. , 2017 ; Dong et al. , 2017b ; Bolukbasi et al. , 2017 ; Lin et al. , 2017 ; Shazeer et al. , 2017 ; Wu et al. , 2018 ; Ren et al. , 2018 ) . They however introduce architectural and/or training-method changes , and thus can not be applied directly to existing popular models pre-trained on ImageNet ( Deng et al. , 2009 ) .
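In the spirit of FBS and the AFS unit described above, input-dependent channel skipping might look as follows; the pooled-feature predictor, the winner-take-all gating, and the `keep_ratio` are illustrative assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicChannelSelection(nn.Module):
    """Predict per-channel saliency from the convolution's input and suppress
    channels outside the top-k fraction, so they can be skipped at inference."""
    def __init__(self, in_channels, out_channels, keep_ratio=0.5):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, 3, padding=1)
        self.predictor = nn.Linear(in_channels, out_channels)  # small auxiliary net
        self.k = max(1, int(out_channels * keep_ratio))

    def forward(self, x):
        pooled = x.mean(dim=(2, 3))                   # global average pool: [N, C_in]
        saliency = F.relu(self.predictor(pooled))     # per-channel scores: [N, C_out]
        threshold = saliency.topk(self.k, dim=1).values[:, -1:]
        gate = saliency * (saliency >= threshold).float()   # keep only top-k channels
        return self.conv(x) * gate.unsqueeze(-1).unsqueeze(-1)
```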
The paper presents an improvement to transfer learning by being deliberate about which channels from the base model are most relevant to the new task at hand. It does this by applying attentive feature selection (AFS) to select channels or features that align well with the downstream task, and attentive feature distillation (AFD) to pass these features on to the student network. In the process, they perform channel pruning, thereby decreasing the size of the network and enabling faster inference. Their major argument is that plain transfer learning is redundant and wasteful, and that careful attention to the selection of the features and channels to be transferred can lead to smaller, faster models which, in several cases presented in the paper, provide superior performance.
SP:35407fdffbf982a97312ef16673be781d593ff22
Multi-objective Neural Architecture Search via Predictive Network Performance Optimization
1 INTRODUCTION . Recently , Neural Architecture Search ( NAS ) has aroused a surge of interest through its potential to free researchers from tedious and time-consuming architecture tuning for each new task and dataset . Specifically , NAS has already shown competitive results compared with hand-crafted architectures in computer vision : classification ( Real et al. , 2019b ) , detection , segmentation ( Ghiasi et al. , 2019 ; Chen et al. , 2019 ; Liu et al. , 2019a ) and super-resolution ( Chu et al. , 2019 ) . Meanwhile , NAS has also achieved remarkable results in natural language processing tasks ( Luong et al. , 2018 ; So et al. , 2019 ) . A variety of search strategies have been proposed , which may be categorized into two groups : one-shot NAS algorithms ( Liu et al. , 2019b ; Pham et al. , 2018 ; Luo et al. , 2018 ) and sample-based algorithms ( Zoph & Le , 2017 ; Liu et al. , 2018a ; Real et al. , 2019b ) . One-shot NAS algorithms embed the architecture search process into the training stage by using weight sharing , continuous relaxation or network morphisms . However , these methods can not guarantee the optimal performance of the final model due to their approximation tricks , and are usually sensitive to the initial seeds ( Sciuto et al. , 2019 ) . Sample-based algorithms , on the other hand , are relatively slower but more reliable . They explore and exploit the search space using general search algorithms that propose candidates with potentially higher accuracy . However , this requires fully training huge numbers of candidate models . Typically , the focus of most existing NAS methods has been on the accuracy of the final searched model alone , ignoring the cost spent in the search phase , which makes comparison between existing NAS search algorithms very difficult . Wang et al . ( 2019b ) give an example of evaluating NAS algorithms from this perspective : they compare the number of architectures sampled and trained until the globally optimal architecture with the top accuracy in the NAS datasets is found . Besides accuracy , many other objectives matter in real applications , such as the speed/accuracy trade-off . Hence , in this paper , we aim to design an efficient multi-objective NAS algorithm that adaptively explores the search space and captures the structural information of architectures related to their performance . The common difficulty is that evaluating the objective functions is computationally expensive and the search space often contains billions of architectures . To tackle this problem , we present BOGCN-NAS , a NAS algorithm that utilizes Bayesian Optimization ( BO ) together with a Graph Convolutional Network ( GCN ) . BO is an efficient algorithm for finding the global optimum of a costly black-box function ( Mockus et al. , 1978 ) . In our method , we replace the popular Gaussian Process model with a proposed GCN model as the surrogate function for BO ( Jones , 2001 ) . We have found that the GCN generalizes fairly well with just a few architecture-accuracy pairs as its training set . As BO balances exploration and exploitation during the search and the GCN extracts embeddings that represent model architectures well , BOGCN-NAS is able to find the optimal model architecture with only a few samples from the search space . Thus , our method is more resource-efficient than previous ones .
Graph neural networks have been used in previous work to predict the parameters of an architecture via a graph hypernetwork ( Zhang et al. , 2019 ) . However , that is still a one-shot NAS method and thus can not ensure the performance of the finally found model . In contrast , we use the graph embedding to predict the performance directly and can guarantee performance as well . The proposed BOGCN-NAS outperforms current state-of-the-art search methods , including Evolution ( Real et al. , 2019b ) , MCTS ( Wang et al. , 2019b ) and LaNAS ( Wang et al. , 2019a ) . We observe consistent gains on multiple search spaces for CV and NLP tasks , i.e. , NASBench-101 ( denoted NASBench ) ( Ying et al. , 2019 ) and LSTM-12K ( a toy dataset ) . In particular , BOGCN-NAS is 128.4× more efficient than Random Search and 7.8× more efficient than the previous state of the art , LaNAS , on NASBench ( Wang et al. , 2019a ) . We further apply our method to multi-objective NAS , adding more search objectives including accuracy and the number of parameters , and find a superior Pareto front on NASBench . Our algorithm is also applied to open-domain search with the NASNet search space and a ResNet-style search space , finding competitive models in both scenarios . The experimental results demonstrate that our proposed algorithm finds a more competitive Pareto front than other sample-based methods . 2 RELATED WORK . 2.1 BAYESIAN OPTIMIZATION . Bayesian Optimization aims to find the global optimum over a compact subset $\mathcal{X}$ ( here we consider a maximization problem ) : $x^* = \arg\max_{x \in \mathcal{X}} f(x)$ . ( 1 ) Bayesian Optimization maintains a prior belief about the objective function and updates the posterior probability with online sampling . A Gaussian Process ( GP ) is widely used as a surrogate model to approximate the objective function ( Jones , 2001 ) , and the Expected Improvement acquisition function is often adopted ( Mockus et al. , 1978 ) . For the hyperparameters $\Theta$ of the surrogate model , we define $\gamma(x) = \frac{\mu(x; \mathcal{D}, \Theta) - f(x_{\text{best}})}{\sigma(x; \mathcal{D}, \Theta)}$ , ( 2 ) where $\mu(x; \mathcal{D}, \Theta)$ is the predictive mean , $\sigma^2(x; \mathcal{D}, \Theta)$ is the predictive variance and $f(x_{\text{best}})$ is the maximal value observed . The Expected Improvement ( EI ) criterion is then defined as $a_{EI}(x; \mathcal{D}, \Theta) = \sigma(x; \mathcal{D}, \Theta)\left[\gamma(x)\,\Phi(\gamma(x); 0, 1) + \mathcal{N}(\gamma(x); 0, 1)\right]$ , ( 3 ) where $\mathcal{N}(\cdot; 0, 1)$ is the probability density function of a standard normal and $\Phi(\cdot; 0, 1)$ is its cumulative distribution .
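Equations (2)-(3) translate directly into code; a small NumPy/SciPy sketch, where `mu` and `sigma` would come from the surrogate's predictive mean and standard deviation and `f_best` is the best observed objective value (all names are placeholders).

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best, eps=1e-9):
    """EI acquisition (Eq. 3) for arrays of predictive means / std-devs."""
    gamma = (mu - f_best) / (sigma + eps)                   # Eq. (2)
    return sigma * (gamma * norm.cdf(gamma) + norm.pdf(gamma))

# pick the most promising candidate architecture to fully train next:
# next_idx = np.argmax(expected_improvement(mu, sigma, f_best))
```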
2.2 MULTI-OBJECTIVE OPTIMIZATION . Without loss of generality regarding max or min , given a search space $\mathcal{X}$ and $m \ge 1$ objectives $f_1 : \mathcal{X} \to \mathbb{R}, \ldots, f_m : \mathcal{X} \to \mathbb{R}$ , a variable $X_1 \in \mathcal{X}$ dominates $X_2 \in \mathcal{X}$ ( denoted $X_1 \succ X_2$ ) if ( i ) $f_i(X_1) \ge f_i(X_2)$ for all $i \in \{1, \ldots, m\}$ , and ( ii ) $f_j(X_1) > f_j(X_2)$ for at least one $j \in \{1, \ldots, m\}$ . $X^*$ is Pareto optimal if there is no $X \in \mathcal{X}$ that dominates $X^*$ . The set of all Pareto-optimal architectures constitutes the Pareto front $\mathcal{P}_f$ . A multi-objective optimization problem ( MOP ) aims at finding inputs $X \in \mathcal{X}$ that can not be dominated by any other variable in $\mathcal{X}$ ( Marler & Arora , 2004 ) . 2.3 GRAPH CONVOLUTIONAL NETWORK . Let the graph be $G = (V, E)$ , where $V$ is a set of $N$ nodes and $E$ is the set of edges . Let its adjacency matrix be $A$ and feature matrix be $X$ . The graph convolutional network ( GCN ) is a learning model for graph-structured data ( Kipf & Welling , 2016 ) . For an $L$-layer GCN , the layer-wise propagation rule is given by $H^{(l+1)} = f(H^{(l)}, A) = \mathrm{ReLU}\!\left(\tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} H^{(l)} W^{(l)}\right)$ , ( 4 ) where $\tilde{A} = A + I$ , $I$ is the identity matrix , $\tilde{D}$ is a diagonal matrix with $\tilde{D}_{ii} = \sum_{j=1}^{N} \tilde{A}_{ij}$ , $H^{(l)}$ and $W^{(l)}$ are the feature map and weight matrix at the $l$-th layer respectively , and $\mathrm{ReLU}(\cdot)$ is the ReLU activation function . $H^{(0)}$ is the original feature matrix $X$ , and $H^{(L)}$ is the graph embedding matrix . 3 BOGCN-NAS . To search for the optimal architecture more efficiently , we propose BOGCN-NAS , which performs predictive network performance optimization with the GCN ( Section 3.2 ) inside a Bayesian Optimization loop . Figure 1 shows an overview of the proposed algorithm . 3.1 MULTI-OBJECTIVE NAS . We formulate the NAS problem as a multi-objective optimization problem over the architecture search space $\mathcal{A}$ , where the objective functions can be accuracy , latency , number of parameters , etc . We aim to find architectures on the Pareto front of $\mathcal{A}$ . Specifically , when $m = 1$ , it reduces to single-objective ( usually accuracy ) NAS and the corresponding Pareto front reduces to a single optimal architecture . 3.2 GCN PREDICTOR . The GCN predictor predicts the performance ( e.g. , accuracy ) of an architecture . Compared with the MLP and LSTM predictors proposed before ( Wang et al. , 2019b ) , a GCN better preserves the context of graph data . Another important characteristic of the GCN is its ability to handle a variable number of nodes : an MLP can not take a larger architecture as input , and even though an LSTM can handle variable-length sequences , its performance is not competitive because of the flat string encoding . A neural network can be viewed as a directed attributed graph , in which each node represents an operation ( such as a convolution ) and each edge represents a data flow . As a concrete illustration , we use the architectures in the NASBench dataset ( Ying et al. , 2019 ) as an example ; the idea extends easily to other architectures . In NASBench , each architecture is constituted by stacking multiple repeated cells , so we focus on searching the cell architecture . An example cell in NASBench is illustrated on the left side of Figure 1 , where “ input ” represents the input of the cell , “ output ” represents the output of the cell , and “ 1 × 1 Conv , 3 × 3 Conv , Max Pooling ” are three of the different operations ( 5 operations in total ) . We propose to encode the cell into an adjacency matrix $A$ ( asymmetric ) and a feature matrix $X$ as the input of our GCN predictor . Note that the vanilla GCN only extracts node embeddings , while we want a graph embedding . Following ( Scarselli et al. , 2008 ) , we add a global node to the original graph of the cell and let every node point at the global node . The adjacency matrix can be obtained directly from the graph structure . For the feature matrix , we use a one-hot coding scheme for each operation ; besides the original 5 operations defined in NASBench , we add another operation ( the global node ) to the coding scheme . We feed $A$ and $X$ to a multi-layer GCN model to obtain the embedding of every node $H^{(L)}$ by Eq . ( 4 ) . For the final prediction , we leave the original nodes out and take the embedding of the global node alone , since it already carries the overall context of the architecture . A fully-connected layer with a sigmoid activation function then produces the predicted accuracy . In the training phase , we use an MSE loss for regression .
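A minimal NumPy sketch of one propagation step of Eq. (4), applied to a toy cell with an added global node; the random weights and the tiny four-operation cell are assumptions for illustration only.

```python
import numpy as np

def gcn_layer(H, A, W):
    """One GCN step (Eq. 4): ReLU(D^{-1/2} (A + I) D^{-1/2} H W)."""
    A_tilde = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_tilde.sum(axis=1)))
    return np.maximum(0.0, d_inv_sqrt @ A_tilde @ d_inv_sqrt @ H @ W)

# toy cell: input -> op -> op -> output, plus a global node every node points to
A = np.zeros((5, 5))
A[0, 1] = A[1, 2] = A[2, 3] = 1.0    # directed data-flow edges
A[:4, 4] = 1.0                       # all original nodes -> global node
X = np.eye(5, 6)                     # one-hot features over 6 operation types
H = gcn_layer(X, A, np.random.randn(6, 16))   # node embeddings, shape [5, 16]
graph_embedding = H[4]               # read off the global node's embedding
```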
This paper proposed BOGCN-NAS, which encodes a candidate architecture with a graph convolutional network (GCN) and uses the features extracted by the GCN as the input to a Bayesian regression that predicts the mean and variance of its performance (see Eqns. 5-6). They use Bayesian Optimization to pick the most promising next model via Expected Improvement, train it, take its resulting accuracy/latency as an additional training sample, and repeat.
SP:d510a4587befa21d3f6b151d437e9d5272ce03a2
This paper provides a NAS algorithm using Bayesian Optimization with a Graph Convolutional Network predictor. The method applies a GCN as a surrogate model to adaptively discover and incorporate node structure to approximate the performance of an architecture. The method further supports an efficient multi-objective search that can be flexibly injected into any sample-based NAS pipeline to efficiently find the best speed/accuracy trade-off.
SP:d510a4587befa21d3f6b151d437e9d5272ce03a2
Learning Temporal Coherence via Self-Supervision for GAN-based Video Generation
1 INTRODUCTION . Generative adversarial models ( GANs ) have been extremely successful at learning complex distributions such as natural images ( Zhu et al. , 2017 ; Isola et al. , 2017 ) . However , for sequence generation , directly applying GANs without carefully engineered constraints typically results in strong artifacts over time due to the significant difficulties introduced by the temporal changes . In particular , conditional video generation tasks are very challenging learning problems where generators should not only learn to represent the data distribution of the target domain , but also learn to correlate the output distribution over time with conditional inputs . Their central objective is to faithfully reproduce the temporal dynamics of the target domain and not resort to trivial solutions such as features that arbitrarily appear and disappear over time . In our work , we propose a novel adversarial learning method for a recurrent training approach that supervises both spatial content as well as temporal relationships . We apply our approach to two video-related tasks that offer substantially different challenges : video super-resolution ( VSR ) and unpaired video translation ( UVT ) . With no ground truth motion available , the spatio-temporal adversarial loss and the recurrent structure enable our model to generate realistic results while keeping the generated structures coherent over time . With the two learning tasks we demonstrate how spatio-temporal adversarial training can be employed in paired as well as unpaired data domains . In addition to the adversarial network which supervises the short-term temporal coherence , long-term consistency is self-supervised using a novel bi-directional loss formulation , which we refer to as “ Ping-Pong ” ( PP ) loss in the following . The PP loss effectively avoids the temporal accumulation of artifacts , which can potentially benefit a variety of recurrent architectures . The central contributions of our work are : a spatio-temporal discriminator unit together with a careful analysis of training objectives for realistic and coherent video generation tasks , a novel PP loss supervising long-term consistency , in addition to a set of metrics for quantifying temporal coherence based on motion estimation and perceptual distance . Together , our contributions lead to models that outperform previous work in terms of temporally-coherent detail , which we quantify with a wide range of metrics and user studies . 2 RELATED WORK . Deep learning has made great progress for image generation tasks . While regular losses such as L2 ( Kim et al. , 2016 ; Lai et al. , 2017 ) offer good performance for image super-resolution ( SR ) tasks in terms of PSNR metrics , GAN researchers found adversarial training ( Goodfellow et al. , 2014 ) to significantly improve the perceptual quality in multi-modal problems including image SR ( Ledig et al. , 2016 ) , image translations ( Zhu et al. , 2017 ; Isola et al. , 2017 ) , and others . Perceptual metrics ( Zhang et al. , 2018 ; Prashnani et al. , 2018 ) are proposed to reliably evaluate image similarity by considering semantic features instead of pixel-wise errors . Video generation tasks , on the other hand , require realistic results to change naturally over time . Recent works in VSR improve the spatial detail and temporal coherence by either using multiple low-resolution ( LR ) frames as inputs ( Jo et al. , 2018 ; Tao et al. , 2017 ; Liu et al. 
, 2017 ) , or recurrently using previously estimated outputs ( Sajjadi et al. , 2018 ) . The latter has the advantage of re-using high-frequency details over time . In general , adversarial learning is less explored for VSR , and applying it in conjunction with a recurrent structure gives rise to a special form of temporal mode collapse , as we will explain below . For video translation tasks , GANs are more commonly used , but discriminators typically only supervise the spatial content . E.g. , Zhu et al . ( 2017 ) do not employ temporal constraints , and generators can fail to learn temporal cycle-consistency . In order to learn temporal dynamics , RecycleGAN ( Bansal et al. , 2018 ) proposes to use a prediction network in addition to a generator , while a concurrent work ( Chen et al. , 2019 ) chose to learn motion translation in addition to spatial content translation . Being orthogonal to these works , we propose a spatio-temporal adversarial training for both VSR and UVT , and we show that temporal self-supervision is crucial for improving spatio-temporal correlations without sacrificing spatial detail . While L2 temporal losses based on warping are used to enforce temporal smoothness in video style transfer tasks ( Ruder et al. , 2016 ; Chen et al. , 2017 ) , in concurrent GAN-based VSR work ( Pérez-Pellitero et al. , 2018 ) and in UVT work ( Park et al. , 2019 ) , this leads to an undesirable smoothing of spatial detail and temporal changes in the outputs . Likewise , the L2 temporal metric is a sub-optimal way to quantify temporal coherence , and perceptual metrics that evaluate natural temporal changes have been unavailable up to now . We address this open issue , propose two improved temporal metrics and demonstrate the advantages of temporal self-supervision over direct temporal losses . Previous work , e.g . tempoGAN ( Xie et al. , 2018 ) and vid2vid ( Wang et al. , 2018b ) , has proposed adversarial temporal losses to achieve temporal consistency . While tempoGAN employs a second temporal discriminator with multiple aligned frames to assess the realism of temporal changes , it is not suitable for videos , as it relies on ground-truth motions and employs a single-frame processing that is sub-optimal for natural images . On the other hand , vid2vid focuses on paired video translations and proposes a video discriminator based on a conditional motion input that is estimated from the paired ground-truth sequences . We focus on the more difficult unpaired translation tasks instead , and demonstrate the gains in quality of our approach in the evaluation section . For tracking and optical flow estimation , L2-based time-cycle losses ( Wang et al. , 2019 ) were proposed to constrain motions and tracked correspondences using symmetric video inputs . By optimizing indirectly via motion compensation or tracking , this loss improves the accuracy of the results . For video generation , we propose a PP loss that also makes use of symmetric sequences . However , we directly constrain the PP loss via the generated video content , which successfully improves the long-term temporal consistency of the video results . 3 LEARNING TEMPORALLY COHERENT CONDITIONAL VIDEO GENERATION .
Figure 2 : a ) The frame-recurrent generator $G$ . b ) The UVT cycle link using recurrent $G$ . Generative Network . Before explaining the temporal self-supervision in more detail , we outline the generative model to be supervised . Our generator networks produce image sequences in a frame-recurrent manner with the help of a recurrent generator $G$ and a flow estimator $F$ . We follow previous work ( Sajjadi et al. , 2018 ) , where $G$ produces the output $g_t$ in the target domain $B$ from the conditional input frame $a_t$ of the input domain $A$ , and recursively uses the previously generated output $g_{t-1}$ . $F$ is trained to estimate the motion $v_t$ between $a_{t-1}$ and $a_t$ , which is then used as a motion compensation that aligns $g_{t-1}$ to the current frame . This procedure , also shown in Fig . 2a ) , can be summarized as $g_t = G(a_t, W(g_{t-1}, v_t))$ , where $v_t = F(a_{t-1}, a_t)$ and $W$ is the warping operation . While one generator is enough to map data from $A$ to $B$ for paired tasks such as VSR , unpaired generation requires a second generator to establish cycle consistency ( Zhu et al. , 2017 ) . In the UVT task , we use two recurrent generators , mapping from domain $A$ to $B$ and back . As shown in Fig . 2b ) , given $g^{a \to b}_t = G_{ab}(a_t, W(g^{a \to b}_{t-1}, v_t))$ , we can use $a_t$ as the labeled data of $g^{a \to b \to a}_t = G_{ba}(g^{a \to b}_t, W(g^{a \to b \to a}_{t-1}, v_t))$ to enforce consistency . A ResNet architecture is used for the VSR generator $G$ , and an encoder-decoder structure is applied to the UVT generators and $F$ . We intentionally keep the generators simple and in line with previous work , in order to demonstrate the advantages of the temporal self-supervision that we explain in the following paragraphs .
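For concreteness, the frame-recurrent step $g_t = G(a_t, W(g_{t-1}, v_t))$ might look as follows; a PyTorch sketch in which `G`, `F_flow`, and the bilinear warp via `grid_sample` are placeholder choices, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as nnf

def warp(img, flow):
    """Backward-warp `img` [N, C, H, W] with a dense flow field [N, 2, H, W]."""
    n, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(img)   # pixel coordinates
    coords = base + flow                                  # where to sample from
    gx = 2.0 * coords[:, 0] / (w - 1) - 1.0               # normalize to [-1, 1]
    gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)                  # [N, H, W, 2]
    return nnf.grid_sample(img, grid, align_corners=True)

def frame_recurrent_step(G, F_flow, a_prev, a_t, g_prev):
    v_t = F_flow(a_prev, a_t)     # estimate motion between consecutive inputs
    g_warped = warp(g_prev, v_t)  # align the previous output to frame t
    return G(a_t, g_warped)       # g_t = G(a_t, W(g_{t-1}, v_t))
```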
Figure 3 : The conditional VSR discriminator $D_{s,t}$ . Spatio-Temporal Adversarial Self-Supervision . The central building block of our approach is a novel spatio-temporal discriminator $D_{s,t}$ that receives triplets of frames . This contrasts with typically used spatial discriminators , which supervise only a single image . By concatenating multiple adjacent frames along the channel dimension , the frame triplets form an important building block for learning because they can provide networks with gradient information regarding the realism of spatial structures as well as short-term temporal information , such as first- and second-order time derivatives . We propose a $D_{s,t}$ architecture , illustrated in Fig . 3 and Fig . 4 , that primarily receives two types of triplets : three adjacent frames and the corresponding warped ones . We warp later frames backward and previous ones forward . While original frames contain the full spatio-temporal information , warped frames more easily yield temporal information with their aligned content . For the input variants we use the following notation : $I_g = \{g_{t-1}, g_t, g_{t+1}\}$ , $I_b = \{b_{t-1}, b_t, b_{t+1}\}$ ; $I_{wg} = \{W(g_{t-1}, v_t), g_t, W(g_{t+1}, v'_t)\}$ , $I_{wb} = \{W(b_{t-1}, v_t), b_t, W(b_{t+1}, v'_t)\}$ . For VSR tasks , $D_{s,t}$ should guide the generator to learn the correlation between LR inputs and high-resolution ( HR ) targets . Therefore , three LR frames $I_a = \{a_{t-1}, a_t, a_{t+1}\}$ from the input domain are used as a conditional input . The input of $D_{s,t}$ can be summarized as $I^b_{s,t} = \{I_b, I_{wb}, I_a\}$ , labelled as real , and the generated input $I^g_{s,t} = \{I_g, I_{wg}, I_a\}$ , labelled as fake . In this way , the conditional $D_{s,t}$ will penalize $G$ if $I_g$ contains fewer spatial details or unrealistic artifacts with respect to $I_a$ and $I_b$ . At the same time , the temporal relationships between the generated images $I_{wg}$ and those of the ground truth $I_{wb}$ should match . With our setup , the discriminator profits from the warped frames to classify realistic and unnatural temporal changes , and for situations where the motion estimation is less accurate , the discriminator can fall back to the original , i.e . not warped , images . For UVT tasks , we demonstrate that the temporal cycle-consistency between different domains can be established using the supervision of unconditional spatio-temporal discriminators . This is in contrast to previous work , which focuses on the generative networks to form spatio-temporal cycle links . Our approach actually yields improved results , as we will show below , and Fig . 1 previews the quality that can be achieved using spatio-temporal discriminators . In practice , we found it crucial to ensure that generators first learn reasonable spatial features , and only then improve their temporal correlation . Therefore , different from the $D_{s,t}$ of VSR , which always receives 3 concatenated triplets as input , the unconditional $D_{s,t}$ of UVT only takes one triplet at a time . Focusing on the generated data , the input for a single batch can either be a static triplet $I_{sg} = \{g_t, g_t, g_t\}$ , the warped triplet $I_{wg}$ , or the original triplet $I_g$ . The same holds for the reference data of the target domain , as shown in Fig . 4 . With sufficient but complex information contained in these triplets , transition techniques are applied so that the network can consider the spatio-temporal information step by step , i.e. , we initially start with 100 % static triplets $I_{sg}$ as the input .
Then , over the course of training , 25 % of them transition to $I_{wg}$ triplets with simpler temporal information , and another 25 % transition to $I_g$ afterwards , leading to a ( 50 % , 25 % , 25 % ) distribution of triplets . Details of the transition calculations are given in Appendix D. Here , the warping is again performed via $F$ . While non-adversarial training typically employs loss formulations with static goals , GAN training yields dynamic goals , as the discriminative networks discover the learning objectives over the course of the training run . Therefore , their inputs have a strong influence on the training process and the final results , and modifying the inputs in a controlled manner can lead to different results and substantial improvements if done correctly , as will be shown in Sec . 4 . Although the proposed concatenation of several frames seems like a simple change that has been used in a variety of projects , it is an important operation that allows discriminators to understand spatio-temporal data distributions . As will be shown below , it can effectively reduce the temporal problems encountered by spatial GANs . While L2-based temporal losses are widely used in the field of video generation , the spatio-temporal adversarial loss is crucial for preventing the inference of blurred structures in multi-modal datasets . Compared to GANs using multiple discriminators , the single $D_{s,t}$ network can learn to balance the spatial and temporal aspects from the reference data , avoiding inconsistent sharpness as well as overly smooth results . Additionally , by extracting shared spatio-temporal features , it allows for smaller network sizes . Self-Supervision for Long-term Temporal Consistency . When relying on a previous output as input , i.e. , for frame-recurrent architectures , generated structures easily accumulate frame by frame . In an adversarial training , generators learn to rely heavily on previously generated frames and can easily converge towards strongly reinforcing spatial features over longer periods of time . For videos , this especially occurs along directions of motion , and these solutions can be seen as a special form of temporal mode collapse . We have noticed this issue in a variety of recurrent architectures ; examples are shown in Fig . 5 a ) and in the $D_{s,t}$ result in Fig . 1 . While this issue could be alleviated by training with longer sequences , we generally want generators to be able to work with sequences of arbitrary length for inference . To address this inherent problem of recurrent generators , we propose a new bi-directional “ Ping-Pong ” loss . For natural videos , a sequence with forward order as well as its reversed counterpart offer valid information . Thus , from any input of length $n$ , we can construct a symmetric PP sequence of the form $a_1, \ldots, a_{n-1}, a_n, a_{n-1}, \ldots, a_1$ , as shown in Fig . 5 . When inferring this sequence in a frame-recurrent manner , the generated result should not strengthen any invalid features from frame to frame . Rather , the result should stay close to valid information and be symmetric , i.e. , the forward result $g_t = G(a_t, g_{t-1})$ and the one generated from the reversed part , $g'_t = G(a_t, g'_{t+1})$ , should be identical . Based on this observation , we train our networks with extended PP sequences and constrain the generated outputs from both “ legs ” to be the same using the loss $\mathcal{L}_{pp} = \sum_{t=1}^{n-1} \lVert g_t - g'_t \rVert_2$ .
Note that in contrast to the generator loss , the L2 norm is the correct choice here : we are not faced with multi-modal data , where an L2 norm would lead to undesirable averaging , but rather aim to constrain the recurrent generator to its own , unique version over time . The PP terms provide constraints for short-term consistency via $\lVert g_{n-1} - g'_{n-1} \rVert_2$ , while terms such as $\lVert g_1 - g'_1 \rVert_2$ prevent long-term drift of the results . As shown in Fig . 5 ( b ) , this PP loss successfully removes drifting artifacts while appropriate high-frequency details are preserved . In addition , it effectively extends the training data set and as such represents a useful form of data augmentation . A comparison is shown in Appendix E to disentangle the effects of the augmentation by PP sequences and of the temporal constraints ; the results show that the temporal constraint is the key to reliably suppressing the temporal accumulation of artifacts , achieving consistency , and allowing models to infer much longer sequences than seen during training . Perceptual Loss Terms . As perceptual metrics , both pre-trained NNs ( Johnson et al. , 2016 ; Wang et al. , 2018a ) and in-training discriminators ( Xie et al. , 2018 ) were successfully used in previous work . Here , we use feature maps from a pre-trained VGG-19 network ( Simonyan & Zisserman , 2014 ) , as well as $D_{s,t}$ itself . In the VSR task , we can encourage the generator to produce features similar to the ground-truth ones by increasing the cosine similarity between their feature maps . In UVT tasks without paired ground-truth data , we still want the generators to match the distribution of features in the target domain . Similar to the style loss in traditional style transfer ( Johnson et al. , 2016 ) , we here compute the $D_{s,t}$ feature correlations measured by the Gram matrix instead . The feature maps of $D_{s,t}$ contain both spatial and temporal information , and hence are especially well suited for the perceptual loss . Loss and Training Summary . We now explain how to integrate the spatio-temporal discriminator into the paired and unpaired tasks . We use a standard discriminator loss for the $D_{s,t}$ of VSR and a least-squares discriminator loss for the $D_{s,t}$ of UVT . Correspondingly , a non-saturating $\mathcal{L}_{adv}$ is used for the $G$ and $F$ of VSR , and a least-squares one is used for the UVT generators . As summarized in Table 1 , $G$ and $F$ are trained with the mean squared loss $\mathcal{L}_{content}$ , adversarial losses $\mathcal{L}_{adv}$ , perceptual losses $\mathcal{L}_{\phi}$ , the PP loss $\mathcal{L}_{PP}$ , and a warping loss $\mathcal{L}_{warp}$ , where again $g$ , $b$ and $\Phi$ stand for generated samples , ground-truth images and feature maps of VGG-19 or $D_{s,t}$ . We only show the losses for the mapping from $A$ to $B$ for UVT tasks , as the backward mapping simply mirrors the terms . We refer to our full model for both tasks as TecoGAN below . Training parameters and details are given in Appendix G. ( Source code , training data , and trained models will be published upon acceptance . )
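The PP loss itself is a few lines once the outputs of both legs are available; a sketch assuming lists of output tensors, with the symmetric sequence construction shown for completeness (the names are placeholders, not the authors' code).

```python
import torch

def ping_pong_sequence(frames):
    """Build the symmetric sequence a_1,...,a_{n-1}, a_n, a_{n-1},...,a_1."""
    return frames + frames[-2::-1]

def ping_pong_loss(forward_outputs, backward_outputs):
    """L_pp = sum_t ||g_t - g'_t||_2, where `forward_outputs` holds g_1..g_{n-1}
    from the forward leg and `backward_outputs` holds g'_1..g'_{n-1} from the
    reversed leg, re-ordered so that index t matches between the two lists."""
    return sum(torch.norm(g - g_rev, p=2)
               for g, g_rev in zip(forward_outputs, backward_outputs))
```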
The paper augments the loss of video generation systems with a discriminator that considers multiple frames (as opposed to single frames independently), and with a new objective, termed ping-pong loss, introduced to deal with artifacts that appear in video generation. The paper also proposes a few automatic metrics with which to compare systems. Although the performance does not convincingly exceed that of its competitors, the contribution seems to be getting the spatio-temporal adversarial loss to work at all.
SP:f719db5d0209fd670518cf1e28a66dfcd9de0a8c
Learning Temporal Coherence via Self-Supervision for GAN-based Video Generation
1 INTRODUCTION . Generative adversarial models ( GANs ) have been extremely successful at learning complex distributions such as natural images ( Zhu et al. , 2017 ; Isola et al. , 2017 ) . However , for sequence generation , directly applying GANs without carefully engineered constraints typically results in strong artifacts over time due to the significant difficulties introduced by the temporal changes . In particular , conditional video generation tasks are very challenging learning problems where generators should not only learn to represent the data distribution of the target domain , but also learn to correlate the output distribution over time with conditional inputs . Their central objective is to faithfully reproduce the temporal dynamics of the target domain and not resort to trivial solutions such as features that arbitrarily appear and disappear over time . In our work , we propose a novel adversarial learning method for a recurrent training approach that supervises both spatial content as well as temporal relationships . We apply our approach to two video-related tasks that offer substantially different challenges : video super-resolution ( VSR ) and unpaired video translation ( UVT ) . With no ground truth motion available , the spatio-temporal adversarial loss and the recurrent structure enable our model to generate realistic results while keeping the generated structures coherent over time . With the two learning tasks we demonstrate how spatio-temporal adversarial training can be employed in paired as well as unpaired data domains . In addition to the adversarial network which supervises the short-term temporal coherence , long-term consistency is self-supervised using a novel bi-directional loss formulation , which we refer to as “ Ping-Pong ” ( PP ) loss in the following . The PP loss effectively avoids the temporal accumulation of artifacts , which can potentially benefit a variety of recurrent architectures . The central contributions of our work are : a spatio-temporal discriminator unit together with a careful analysis of training objectives for realistic and coherent video generation tasks , a novel PP loss supervising long-term consistency , in addition to a set of metrics for quantifying temporal coherence based on motion estimation and perceptual distance . Together , our contributions lead to models that outperform previous work in terms of temporally-coherent detail , which we quantify with a wide range of metrics and user studies . 2 RELATED WORK . Deep learning has made great progress for image generation tasks . While regular losses such as L2 ( Kim et al. , 2016 ; Lai et al. , 2017 ) offer good performance for image super-resolution ( SR ) tasks in terms of PSNR metrics , GAN researchers found adversarial training ( Goodfellow et al. , 2014 ) to significantly improve the perceptual quality in multi-modal problems including image SR ( Ledig et al. , 2016 ) , image translations ( Zhu et al. , 2017 ; Isola et al. , 2017 ) , and others . Perceptual metrics ( Zhang et al. , 2018 ; Prashnani et al. , 2018 ) are proposed to reliably evaluate image similarity by considering semantic features instead of pixel-wise errors . Video generation tasks , on the other hand , require realistic results to change naturally over time . Recent works in VSR improve the spatial detail and temporal coherence by either using multiple low-resolution ( LR ) frames as inputs ( Jo et al. , 2018 ; Tao et al. , 2017 ; Liu et al. 
, 2017 ) , or recurrently using previously estimated outputs ( Sajjadi et al. , 2018 ) . The latter has the advantage to re-use high-frequency details over time . In general , adversarial learning is less explored for VSR and applying it in conjunction with a recurrent structure gives rise to a special form of temporal mode collapse , as we will explain below . For video translation tasks , GANs are more commonly used but discriminators typically only supervise the spatial content . E.g. , Zhu et al . ( 2017 ) does not employ temporal constrains and generators can fail to learn the temporal cycle-consistency . In order to learn temporal dynamics , RecycleGAN ( Bansal et al. , 2018 ) proposes to use a prediction network in addition to a generator , while a concurrent work ( Chen et al. , 2019 ) chose to learn motion translation in addition to spatial content translation . Being orthogonal to these works , we propose a spatiotemporal adversarial training for both VSR and UVT and we show that temporal self-supervision is crucial for improving spatio-temporal correlations without sacrificing spatial detail . While L2 temporal losses based on warping are used to enforce temporal smoothness in video style transfer tasks ( Ruder et al. , 2016 ; Chen et al. , 2017 ) , concurrent GAN-based VSR work ( Pérez-Pellitero et al. , 2018 ) and UVT work ( Park et al. , 2019 ) , it leads to an undesirable smooth over spatial detail and temporal changes in outputs . Likewise , the L2 temporal metric represents a sub-optimal way to quantify temporal coherence and perceptual metrics that evaluate natural temporal changes are unavailable up to now . We work on this open issue , propose two improved temporal metric and demonstrate the advantages of temporal self-supervision over direct temporal losses . Previous work , e.g . tempoGAN ( Xie et al. , 2018 ) and vid2vid ( Wang et al. , 2018b ) , have proposed adversarial temporal losses to achieve time consistency . While tempoGAN employs a second temporal discriminator with multiple aligned frames to assess the realism of temporal changes , it is not suitable for videos , as it relies on ground truth motions and employs a single-frame processing that is sub-optimal for natural images . On the other hand , vid2vid focuses on paired video translations and proposes a video discriminator based on a conditional motion input that is estimated from the paired ground-truth sequences . We focus on more difficult unpaired translation tasks instead , and demonstrate the gains in quality of our approach in the evaluation section . For tracking and optical flow estimation , L2-based time-cycle losses ( Wang et al. , 2019 ) were proposed to constrain motions and tracked correspondences using symmetric video inputs . By optimizing indirectly via motion compensation or tracking , this loss improves the accuracy of the results . For video generation , we propose a PP loss that also makes use of symmetric sequences . However , we directly constrain the PP loss via the generated video content , which successfully improves the long-term temporal consistency in the video results . 3 LEARNING TEMPORALLY COHERENT CONDITIONAL VIDEO GENERATION . 
Domain B VSR 𝐷 , 0/1 𝑎 FrameRecurrent Generator 𝑔 𝑤 Conditional LR Triplet 𝐼 𝑎 𝑎 𝑎 Original Triplet 𝐼 𝑔 𝑔 𝑔 Warped Triplet 𝐼 𝑔 𝑔 𝑤 𝑔 𝑤 Original Triplet 𝐼 𝑏 𝑏 𝑏 Warped Triplet 𝐼 𝑏 𝑏 𝑤 𝑏 𝑤+ + UVT 𝐷 , 0/1 Original Triplet 𝐼 𝑔 𝑔 𝑔 Warped Triplet 𝐼 𝑔 𝑔 𝑤 𝑔 𝑤 Original Triplet 𝐼 𝑏 𝑏 𝑏 Warped Triplet 𝐼 𝑏 𝑏 𝑤 𝑏 𝑤or or Static Triplet 𝐼 𝑔 { } x3 or Static Triplet 𝐼 𝑏 { } x3 Domain A 𝑎 𝑔 → 𝑔 → 𝐺 → 𝐺 → 𝑏 𝑔 → 𝑔 → 𝐺 → 𝐺 → Domain A Domain B 𝑎 𝑔 → 𝑔 → 𝐺 → 𝐺 → 𝑏 𝑔 → 𝑔 → 𝐺 → 𝐺 → a ) Domain B VSR 𝐷 , 0/1 𝑎 FrameRecurrent Generator 𝑔 𝑤 Conditional LR Triplet 𝐼 𝑎 𝑎 𝑎 Original Triplet 𝐼 𝑔 𝑔 𝑔 Warped Triplet 𝐼 𝑔 𝑔 𝑤 𝑔 𝑤 Original Triplet 𝐼 𝑏 𝑏 𝑏 Warped Triplet 𝐼 𝑏 𝑏 𝑤 𝑏 𝑤+ + UVT 𝐷 , 0/1 Original Triplet 𝐼 𝑔 𝑔 𝑔 Warped Triplet 𝐼 𝑔 𝑔 𝑤 𝑔 𝑤 Original Triplet 𝐼 𝑏 𝑏 𝑏 Warped Triplet 𝐼 𝑏 𝑏 𝑤 𝑏 𝑤or or Static Triplet 𝐼 𝑔 { } x3 or Static Triplet 𝐼 𝑏 { } x3 Domain A 𝑎 𝑔 → 𝑔 → 𝐺 → 𝐺 → 𝑏 𝑔 → 𝑔 → 𝐺 → 𝐺 → Domain A Domain B 𝑎 𝑔 → 𝑔 → 𝐺 → 𝐺 → 𝑏 𝑔 → 𝑔 → 𝐺 → 𝐺 → b ) Figure 2 : a ) G. b ) The UVT cycle link using recurrent G. Generative Network Before explaining the temporal self-supervision in more detail , we outline the generative model to be supervised . Our generator networks produce image sequences in a frame-recurrent manner with the help of a recurrent generator G and a flow estimator F . We follow previous work ( Sajjadi et al. , 2018 ) , where G produces output gt in the target domain B from conditional input frame at from the input domain A , and recursively uses the previous generated output gt−1 . F is trained to estimate the motion vt between at−1 and at , which is then used as a motion compensation that aligns gt−1 to the current frame . This procedure , also shown in Fig . 2a ) , can be summarized as : gt = G ( at , W ( gt−1 , vt ) ) , where vt = F ( at−1 , at ) and W is the warping operation . While one generator is enough to map data from A to B for paired tasks such as VSR , unpaired generation requires a second generator to establish cycle consistency . ( Zhu et al. , 2017 ) . In the UVT task , we use two recurrent generators , mapping from domain A to B and back . As shown in Fig . 2b ) , given ga→bt = Gab ( at , W ( g a→b t−1 , vt ) ) , we can use at as the labeled data of ga→b→at = Gba ( ga→bt , W ( ga→b→at−1 , vt ) ) to enforce consistency . A ResNet architecture is used for the VSR generator G and a encoder-decoder structure is applied to UVT generators and F . We intentionally keep generators simple and in line with previous work , in order to demonstrate the advantages of the temporal self-supervision that we will explain in the following paragraphs . Domain B VSR 𝐷 , 0/1 𝑎 FrameRecurrent Generator 𝑔 𝑤 Conditional LR Triplet 𝐼 𝑎 𝑎 𝑎 Original Triplet 𝐼 𝑔 𝑔 𝑔 Warped Triplet 𝐼 𝑔 𝑔 𝑤 𝑔 𝑤 Original Triplet 𝐼 𝑏 𝑏 𝑏 Warped Triplet 𝐼 𝑏 𝑏 𝑤 𝑏 𝑤+ + UVT 𝐷 , 0/1 Original Triplet 𝐼 𝑔 𝑔 𝑔 Warped Triplet 𝐼 𝑔 𝑔 𝑤 𝑔 𝑤 Original Triplet 𝐼 𝑏 𝑏 𝑏 Warped Triplet 𝐼 𝑏 𝑏 𝑤 𝑏 𝑤or or Static Triplet 𝐼 𝑔 { } x3 or Static Triplet 𝐼 𝑏 { } x3 Domain A 𝑎 𝑔 → 𝑔 → 𝐺 → 𝐺 → 𝑏 𝑔 → 𝑔 → 𝐺 → 𝐺 → Domain A Domain B 𝑎 𝑔 → 𝑔 → 𝐺 → 𝐺 → 𝑏 𝑔 → 𝑔 → 𝐺 → 𝐺 → Figure 3 : Conditional VSRDs , t . Spatio-Temporal Adversarial Self-Supervision The central building block of our approach is a novel spatio-temporal discriminator Ds , t that receives triplets of frames . This contrasts with typically used spatial discriminators which supervise only a single image . 
By concatenating multiple adjacent frames along the channel dimension , the frame triplets form an important building block for learning , because they can provide networks with gradient information regarding the realism of spatial structures as well as short-term temporal information , such as first- and second-order time derivatives . We propose a D_{s,t} architecture , illustrated in Fig . 3 and Fig . 4 , that primarily receives two types of triplets : three adjacent frames and the corresponding warped ones . We warp later frames backward and previous ones forward . While original frames contain the full spatio-temporal information , warped frames more easily yield temporal information with their aligned content . For the input variants we use the following notation : I_g = { g_{t−1} , g_t , g_{t+1} } , I_b = { b_{t−1} , b_t , b_{t+1} } ; I_{wg} = { W( g_{t−1} , v_t ) , g_t , W( g_{t+1} , v′_t ) } , I_{wb} = { W( b_{t−1} , v_t ) , b_t , W( b_{t+1} , v′_t ) } . For VSR tasks , D_{s,t} should guide the generator to learn the correlation between LR inputs and high-resolution ( HR ) targets . Therefore , three LR frames I_a = { a_{t−1} , a_t , a_{t+1} } from the input domain are used as a conditional input . The input of D_{s,t} can be summarized as I^b_{s,t} = { I_b , I_{wb} , I_a } , labelled as real , and the generated input I^g_{s,t} = { I_g , I_{wg} , I_a } , labelled as fake ( the triplet assembly is sketched below ) . In this way , the conditional D_{s,t} will penalize G if I_g contains fewer spatial details or unrealistic artifacts according to I_a and I_b . At the same time , the temporal relationships between the generated images I_{wg} and those of the ground truth I_{wb} should match . With our setup , the discriminator profits from the warped frames to classify realistic and unnatural temporal changes , and for situations where the motion estimation is less accurate , the discriminator can fall back to the original , i.e . not warped , images . For UVT tasks , we demonstrate that the temporal cycle-consistency between different domains can be established using the supervision of unconditional spatio-temporal discriminators . This is in contrast to previous work which focuses on the generative networks to form spatio-temporal cycle links . Our approach actually yields improved results , as we will show below , and Fig . 1 shows a preview of the quality that can be achieved using spatio-temporal discriminators . In practice , we found it crucial to ensure that generators first learn reasonable spatial features , and only then improve their temporal correlation . Therefore , different from the D_{s,t} of VSR that always receives 3 concatenated triplets as an input , the unconditional D_{s,t} of UVT only takes one triplet at a time . Focusing on the generated data , the input for a single batch can either be a static triplet I_{sg} = { g_t , g_t , g_t } , the warped triplet I_{wg} , or the original triplet I_g . The same holds for the reference data of the target domain , as shown in Fig . 4 . With sufficient but complex information contained in these triplets , transition techniques are applied so that the network can consider the spatio-temporal information step by step , i.e. , we initially start with 100 % static triplets I_{sg} as the input .
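To make the triplet notation concrete before describing the transition schedule , here is a small sketch of how these channel-concatenated discriminator inputs could be assembled ; the function names are ours , and `warp_fn` is as sketched earlier .

```python
import torch

def make_triplets(x_prev, x_t, x_next, v_t, v_t_prime, warp_fn):
    # Original triplet I = {x_{t-1}, x_t, x_{t+1}} and warped triplet
    # I_w = {W(x_{t-1}, v_t), x_t, W(x_{t+1}, v'_t)}, frames concatenated
    # along the channel dimension.
    original = torch.cat((x_prev, x_t, x_next), dim=1)
    warped = torch.cat((warp_fn(x_prev, v_t), x_t,
                        warp_fn(x_next, v_t_prime)), dim=1)
    return original, warped

def vsr_disc_input(triplet, warped_triplet, lr_triplet):
    # Conditional D_{s,t} input {I, I_w, I_a}: built from ground-truth frames b
    # for the "real" label and from generated frames g for the "fake" label.
    return torch.cat((triplet, warped_triplet, lr_triplet), dim=1)
```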
Then , over the course of training , 25 % of them transition to I_{wg} triplets with simpler temporal information , with another 25 % transitioning to I_g afterwards , leading to a ( 50 % , 25 % , 25 % ) distribution of triplets . Details of the transition calculations are given in Appendix D. Here , the warping is again performed via F . While non-adversarial training typically employs loss formulations with static goals , GAN training yields dynamic goals , due to the discriminative networks discovering the learning objectives over the course of the training run . Therefore , their inputs have a strong influence on the training process and the final results . Modifying the inputs in a controlled manner can lead to different results and substantial improvements if done correctly , as will be shown in Sec . 4 . Although the proposed concatenation of several frames seems like a simple change that has been used in a variety of projects , it is an important operation that allows discriminators to understand spatio-temporal data distributions . As will be shown below , it can effectively reduce the temporal problems encountered by spatial GANs . While L2-based temporal losses are widely used in the field of video generation , the spatio-temporal adversarial loss is crucial for preventing the inference of blurred structures in multi-modal data-sets . Compared to GANs using multiple discriminators , the single D_{s,t} network can learn to balance the spatial and temporal aspects from the reference data and avoid inconsistent sharpness as well as overly smooth results . Additionally , by extracting shared spatio-temporal features , it allows for smaller network sizes . Self-Supervision for Long-term Temporal Consistency When relying on a previous output as input , i.e. , for frame-recurrent architectures , generated structures easily accumulate frame by frame . In an adversarial training , generators learn to heavily rely on previously generated frames and can easily converge towards strongly reinforcing spatial features over longer periods of time . For videos , this especially occurs along directions of motion , and these solutions can be seen as a special form of temporal mode collapse . We have noticed this issue in a variety of recurrent architectures ; examples are shown in Fig . 5 a ) and the D_{s,t} result in Fig . 1 . While this issue could be alleviated by training with longer sequences , we generally want generators to be able to work with sequences of arbitrary length for inference . To address this inherent problem of recurrent generators , we propose a new bi-directional “ Ping-Pong ” loss . For natural videos , a sequence with forward order as well as its reversed counterpart offer valid information . Thus , from any input of length n , we can construct a symmetric PP sequence of the form a_1 , ... , a_{n−1} , a_n , a_{n−1} , ... , a_1 , as shown in Fig . 5 . When inferring this in a frame-recurrent manner , the generated result should not strengthen any invalid features from frame to frame . Rather , the result should stay close to the valid information and be symmetric , i.e. , the forward result g_t = G( a_t , g_{t−1} ) and the one generated from the reversed part , g′_t = G( a_t , g′_{t+1} ) , should be identical . Based on this observation , we train our networks with extended PP sequences and constrain the generated outputs from both “ legs ” to be the same using the loss : L_{pp} = ∑_{t=1}^{n−1} ‖ g_t − g′_t ‖_2 .
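Assuming a simplified recurrent generator signature g_t = G( a_t , g_{t−1} ) ( the flow estimation and warping from above are omitted for brevity ) , the PP loss can be sketched as follows ; the helper names are ours , and the squared-L2 reading of the norm is an assumption .

```python
import torch

def ping_pong_loss(G, frames, g_init):
    # Build the symmetric PP sequence a_1, ..., a_{n-1}, a_n, a_{n-1}, ..., a_1
    # and run the recurrent generator over it once.
    pp = list(frames) + list(reversed(frames[:-1]))
    outputs, g_prev = [], g_init
    for a_t in pp:
        g_prev = G(a_t, g_prev)
        outputs.append(g_prev)
    # Match the forward leg g_t with the reversed leg g'_t for t = 1..n-1;
    # outputs[i] and outputs[-(i + 1)] correspond to the same input frame.
    n = len(frames)
    loss = 0.0
    for i in range(n - 1):
        loss = loss + torch.mean((outputs[i] - outputs[-(i + 1)]) ** 2)
    return loss
```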
Note that , in contrast to the generator loss , the L2 norm is a correct choice here : we are not faced with multi-modal data where an L2 norm would lead to undesirable averaging , but rather aim to constrain the recurrent generator to its own , unique version over time . The PP terms provide constraints for short-term consistency via ‖ g_{n−1} − g′_{n−1} ‖_2 , while terms such as ‖ g_1 − g′_1 ‖_2 prevent long-term drifts of the results . As shown in Fig . 5 ( b ) , this PP loss successfully removes drifting artifacts while appropriate high-frequency details are preserved . In addition , it effectively extends the training data set , and as such represents a useful form of data augmentation . A comparison is shown in Appendix E to disentangle the effects of the augmentation with PP sequences and the temporal constraints . The results show that the temporal constraint is the key to reliably suppressing the temporal accumulation of artifacts , achieving consistency , and allowing models to infer much longer sequences than seen during training . Perceptual Loss Terms As perceptual metrics , both pre-trained NNs ( Johnson et al. , 2016 ; Wang et al. , 2018a ) and in-training discriminators ( Xie et al. , 2018 ) were successfully used in previous work . Here , we use feature maps from a pre-trained VGG-19 network ( Simonyan & Zisserman , 2014 ) , as well as D_{s,t} itself . In the VSR task , we can encourage the generator to produce features similar to the ground truth ones by increasing the cosine similarity between their feature maps . In UVT tasks without paired ground truth data , we still want the generators to match the distribution of features in the target domain . Similar to a style loss in traditional style transfer ( Johnson et al. , 2016 ) , we here compute the D_{s,t} feature correlations measured by the Gram matrix instead . The feature maps of D_{s,t} contain both spatial and temporal information , and hence are especially well suited for the perceptual loss . Loss and Training Summary We now explain how to integrate the spatio-temporal discriminator into the paired and unpaired tasks . We use a standard discriminator loss for the D_{s,t} of VSR and a least-squares discriminator loss for the D_{s,t} of UVT . Correspondingly , a non-saturating L_{adv} is used for the G and F of VSR , and a least-squares one is used for the UVT generators . As summarized in Table 1 , G and F are trained with the mean squared loss L_{content} , adversarial losses L_{adv} , perceptual losses L_φ , the PP loss L_{PP} , and a warping loss L_{warp} , where again g , b and Φ stand for generated samples , ground truth images , and feature maps of VGG-19 or D_{s,t} . We only show the losses for the mapping from A to B for UVT tasks , as the backward mapping simply mirrors the terms . We refer to our full model for both tasks as TecoGAN below . ( Source code , training data , and trained models will be published upon acceptance . ) Training parameters and details are given in Appendix G.
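For reference , the Gram-matrix style term used for the UVT perceptual loss could be sketched as follows ; this is a generic formulation over lists of feature maps , not the exact TecoGAN implementation .

```python
import torch

def gram_matrix(feat):
    # feat: (N, C, H, W) feature map -> (N, C, C) channel correlation matrix.
    n, c, h, w = feat.shape
    f = feat.reshape(n, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

def gram_style_loss(feats_gen, feats_ref):
    # Match feature correlations of generated data to the target domain,
    # summed over a list of feature maps (e.g., several D_{s,t} layers).
    return sum(torch.mean((gram_matrix(g) - gram_matrix(r)) ** 2)
               for g, r in zip(feats_gen, feats_ref))
```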
The paper presents a novel method for training video-to-video translation (vid2vid) models. The authors introduce a spatio-temporal adversarial discriminator for GAN training, which shows significant benefits over prior methods, in particular over parallel (as opposed to joint) spatial and temporal discriminators. In addition, the authors introduce a self-supervised objective based on cycle dependency that is crucial for producing temporally consistent videos. A new set of metrics is introduced to validate the claims of the authors.
SP:f719db5d0209fd670518cf1e28a66dfcd9de0a8c
Towards Stable and Efficient Training of Verifiably Robust Neural Networks
1 INTRODUCTION . The success of deep neural networks ( DNNs ) has motivated their deployment in some safety-critical environments , such as autonomous driving and facial recognition systems . Applications in these areas make understanding the robustness and security of deep neural networks urgently needed , especially their resilience under malicious , finely crafted inputs . Unfortunately , the performance of DNNs is often so brittle that even imperceptibly modified inputs , also known as adversarial examples , are able to completely break the model ( Goodfellow et al. , 2015 ; Szegedy et al. , 2013 ) . The robustness of DNNs under adversarial examples is well-studied from both attack ( crafting powerful adversarial examples ) and defense ( making the model more robust ) perspectives ( Athalye et al. , 2018 ; Carlini & Wagner , 2017a ; b ; Goodfellow et al. , 2015 ; Madry et al. , 2018 ; Papernot et al. , 2016 ; Xiao et al. , 2019b ; 2018b ; c ; Eykholt et al. , 2018 ; Chen et al. , 2018 ; Xu et al. , 2018 ; Zhang et al. , 2019b ) . Recently , it has been shown that defending against adversarial examples is a very difficult task , especially under strong and adaptive attacks . Early defenses such as distillation ( Papernot et al. , 2016 ) have been broken by stronger attacks like C & W ( Carlini & Wagner , 2017b ) . Many defense methods have been proposed recently ( Guo et al. , 2018 ; Song et al. , 2017 ; Buckman et al. , 2018 ; Ma et al. , 2018 ; Samangouei et al. , 2018 ; Xiao et al. , 2018a ; 2019a ) , but their robustness improvement cannot be certified ; no provable guarantees can be given to verify their robustness . In fact , most of these uncertified defenses become vulnerable under stronger attacks ( Athalye et al. , 2018 ; He et al. , 2017 ) . Several recent works in the literature seek to give provable guarantees on robustness performance , using techniques such as linear relaxations ( Wong & Kolter , 2018 ; Mirman et al. , 2018 ; Wang et al. , 2018a ; Dvijotham et al. , 2018b ; Weng et al. , 2018 ; Zhang et al. , 2018 ) , interval bound propagation ( Mirman et al. , 2018 ; Gowal et al. , 2018 ) , ReLU stability regularization ( Xiao et al. , 2019c ) , distributionally robust optimization ( Sinha et al. , 2018 ) , and semidefinite relaxations ( Raghunathan et al. , 2018a ; Dvijotham et al. ) . Linear relaxations of neural networks , first proposed by Wong & Kolter ( 2018 ) , are one of the most popular categories among these certified defenses . They use the dual of linear programming or several similar approaches to provide a linear relaxation of the network ( referred to as a “ convex adversarial polytope ” ) , and the resulting bounds are tractable for robust optimization . However , these methods are both computationally and memory intensive , and can increase model training time by a factor of hundreds . On the other hand , interval bound propagation ( IBP ) is a simple and efficient method for training verifiable neural networks ( Gowal et al. , 2018 ) , which has achieved state-of-the-art verified error on many datasets . However , since the IBP bounds are very loose during the initial phase of training , the training procedure can be unstable and sensitive to hyperparameters . In this paper , we first discuss the strengths and weaknesses of existing linear relaxation based and interval bound propagation based certified robust training methods .
Then we propose a new certified robust training method , CROWN-IBP , which marries the efficiency of IBP with the tightness of a linear relaxation based verification bound , CROWN ( Zhang et al. , 2018 ) . CROWN-IBP bound propagation involves an IBP-based fast forward bounding pass , and a tight convex relaxation based backward bounding pass ( CROWN ) which scales linearly with the size of the neural network output and is very efficient for problems with low output dimensions . Additionally , CROWN-IBP provides flexibility for exploiting the strengths of both IBP and convex relaxation based verifiable training methods . The efficiency , tightness and flexibility of CROWN-IBP allow it to outperform state-of-the-art methods for training verifiable neural networks with ℓ∞ robustness under all settings on the MNIST and CIFAR-10 datasets . In our experiments , on the MNIST dataset we reach 7.02 % and 12.06 % IBP verified error under ℓ∞ distortions ε = 0.3 and ε = 0.4 , respectively , outperforming the state-of-the-art baseline results by IBP ( 8.55 % and 15.01 % ) . On CIFAR-10 , at ε = 2/255 , CROWN-IBP decreases the verified error from 55.88 % ( IBP ) to 46.03 % and matches convex relaxation based methods ; at larger ε , CROWN-IBP outperforms all other methods with a noticeable margin . 2 RELATED WORK AND BACKGROUND . 2.1 ROBUSTNESS VERIFICATION AND RELAXATIONS OF NEURAL NETWORKS . Neural network robustness verification algorithms seek upper and lower bounds of an output neuron for all possible inputs within a set S , typically a norm-bounded perturbation . Most importantly , the margins between the ground-truth class and any other class determine model robustness . However , it has already been shown that finding the exact output range is a non-convex problem and NP-complete ( Katz et al. , 2017 ; Weng et al. , 2018 ) . Therefore , recent works resorted to giving relatively tight but computationally tractable bounds of the output range with necessary relaxations of the original problem . Many of these robustness verification approaches are based on linear relaxations of non-linear units in neural networks , including CROWN ( Zhang et al. , 2018 ) , DeepPoly ( Singh et al. , 2019 ) , Fast-Lin ( Weng et al. , 2018 ) , DeepZ ( Singh et al. , 2018 ) and Neurify ( Wang et al. , 2018b ) . We refer the readers to ( Salman et al. , 2019b ) for a comprehensive survey on this topic . After linear relaxation , these methods bound the output of a neural network f_i ( · ) by linear upper/lower hyper-planes : A_{i,:} Δx + b_L ≤ f_i ( x_0 + Δx ) ≤ A_{i,:} Δx + b_U ( 1 ) where the row vector A_{i,:} = W^{(L)}_{i,:} D^{(L−1)} W^{(L−1)} · · · D^{(1)} W^{(1)} is the product of the network weight matrices W^{(l)} and diagonal matrices D^{(l)} reflecting the ReLU relaxations for output neuron i ; b_L and b_U are two bias terms unrelated to Δx . Additionally , Dvijotham et al . ( 2018c ; a ) and Qin et al . ( 2019 ) solve the Lagrangian dual of the verification problem ; Raghunathan et al . ( 2018a ; b ) and Dvijotham et al . propose semidefinite relaxations , which are tighter compared to linear relaxation based methods but computationally expensive . Bounds on the neural network local Lipschitz constant can also be used for verification ( Zhang et al. , 2019c ; Hein & Andriushchenko , 2017 ) . Besides these deterministic verification approaches , randomized smoothing can be used to certify the robustness of any model in a probabilistic manner ( Cohen et al. , 2019 ; Salman et al. , 2019a ; Lecuyer et al. , 2018 ; Li et al. , 2018 ) .
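To illustrate how a bound of the form ( 1 ) is turned into concrete output bounds over an ℓ∞ ball of radius ε , one can apply Hölder's inequality ; the following NumPy sketch ( variable names are ours ) computes the resulting global lower and upper bounds .

```python
import numpy as np

def concretize_linear_bounds(A, b_L, b_U, eps):
    # Given A*dx + b_L <= f(x0 + dx) <= A*dx + b_U for ||dx||_inf <= eps,
    # the extreme values of A*dx over the ball are +/- eps * ||A_{i,:}||_1
    # (Holder's inequality), giving concrete global bounds per output i.
    margin = eps * np.abs(A).sum(axis=1)
    return b_L - margin, b_U + margin
```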
2.2 ROBUST OPTIMIZATION AND VERIFIABLE ADVERSARIAL DEFENSE . To improve the robustness of neural networks against adversarial perturbations , a natural idea is to generate adversarial examples by attacking the network and then use them to augment the training set ( Kurakin et al. , 2017 ) . More recently , Madry et al . ( 2018 ) showed that adversarial training can be formulated as solving a minimax robust optimization problem as in ( 2 ) . Given a model with parameters θ , a loss function L , and a training data distribution X , the training algorithm aims to minimize the robust loss , which is defined as the maximum loss within a neighborhood { x + δ | δ ∈ S } of each data point x , leading to the following robust optimization problem : min_θ E_{( x , y ) ∈ X} [ max_{δ ∈ S} L ( x + δ ; y ; θ ) ] . ( 2 ) Madry et al . ( 2018 ) proposed to use projected gradient descent ( PGD ) to approximately solve the inner max and then use the loss on the perturbed example x + δ to update the model . Networks trained by this procedure achieve state-of-the-art test accuracy under strong attacks ( Athalye et al. , 2018 ; Wang et al. , 2018a ; Zheng et al. , 2018 ) . Despite being robust under strong attacks , models obtained by this PGD-based adversarial training do not have verified error guarantees . Due to the non-convexity of neural networks , a PGD attack can only compute a lower bound of the robust loss ( the inner maximization problem ) . Minimizing a lower bound of the inner max cannot guarantee that ( 2 ) is minimized . In other words , even if a PGD attack cannot find a perturbation with large loss , that does not mean there exists no such perturbation . This becomes problematic in safety-critical applications , since those models need certified safety . Verifiable adversarial training methods , on the other hand , aim to obtain a network with good robustness that can be verified efficiently . This can be done by combining adversarial training and robustness verification : instead of using PGD to find a lower bound of the inner max , certified adversarial training uses a verification method to find an upper bound of the inner max , and then updates the parameters based on this upper bound of the robust loss . Minimizing an upper bound of the inner max guarantees to minimize the robust loss . There are two certified robust training methods that are related to our work , and we describe them in detail below . Linear Relaxation Based Verifiable Adversarial Training . One of the most popular verifiable adversarial training methods was proposed in ( Wong & Kolter , 2018 ) , using linear relaxations of neural networks to give an upper bound of the inner max . Other similar approaches include Mirman et al . ( 2018 ) ; Wang et al . ( 2018a ) ; Dvijotham et al . ( 2018b ) . Since the bound propagation process of a convex adversarial polytope is too expensive , several methods were proposed to improve its efficiency , like Cauchy projection ( Wong et al. , 2018 ) and dynamic mixed training ( Wang et al. , 2018a ) . However , even with these speed-ups , the training process is still slow . Also , this method may significantly reduce a model ’ s standard accuracy ( accuracy on the natural , unmodified test set ) . As we will demonstrate shortly , we find that this method tends to over-regularize the network during training , which is harmful for obtaining good accuracy . Interval Bound Propagation ( IBP ) . Interval Bound Propagation ( IBP ) uses a very simple rule to compute the pre-activation outer bounds for each layer of the neural network .
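For concreteness , this interval rule for an affine layer followed by a ReLU can be sketched as follows ; this is the standard IBP formulation in NumPy , not code from the paper .

```python
import numpy as np

def ibp_affine(W, b, lower, upper):
    # Interval rule for z = W x + b: propagate the center and the radius,
    # using |W| for the radius.
    mid = (upper + lower) / 2.0
    rad = (upper - lower) / 2.0
    z_mid = W @ mid + b
    z_rad = np.abs(W) @ rad
    return z_mid - z_rad, z_mid + z_rad

def ibp_relu(lower, upper):
    # ReLU is monotone, so interval endpoints map through elementwise.
    return np.maximum(lower, 0.0), np.maximum(upper, 0.0)
```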
Unlike linear relaxation based methods , IBP does not relax ReLU neurons and does not consider the correlations between neurons of different layers , yielding much looser bounds . Mirman et al . ( 2018 ) proposed a variety of abstract domains to give sound over-approximations for neural networks , including the “ Box/Interval Domain ” ( referred to as IBP in Gowal et al . ( 2018 ) ) and showed that it could scale to much larger networks than other works ( Raghunathan et al. , 2018a ) could at the time . Gowal et al . ( 2018 ) demonstrated that IBP could outperform many state-of-the-art results by a large margin with more precise approximations for the last linear layer and better training schemes . However , IBP can be unstable to use and hard to tune in practice , since the bounds can be very loose especially during the initial phase of training , posing a challenge to the optimizer . To mitigate instability , Gowal et al . ( 2018 ) use a mixture of regular and minimax robust cross-entropy loss as the model ’ s training loss .
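The stabilizing mixture mentioned above can be sketched as follows , where the worst-case logits take the lower bound for the true class and the upper bounds elsewhere ; the mixing weight κ and the omission of its training schedule are simplifying assumptions .

```python
import torch
import torch.nn.functional as F

def worst_case_logits(lb, ub, y):
    # Most adversarial logits consistent with interval bounds on the last
    # layer: lower bound for the true class, upper bound for all others.
    mask = F.one_hot(y, num_classes=lb.shape[1]).bool()
    return torch.where(mask, lb, ub)

def mixed_robust_loss(nat_logits, lb, ub, y, kappa=0.5):
    # kappa * natural cross-entropy + (1 - kappa) * worst-case cross-entropy.
    return (kappa * F.cross_entropy(nat_logits, y)
            + (1.0 - kappa) * F.cross_entropy(worst_case_logits(lb, ub, y), y))
```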
This paper proposes a new method for training certifiably robust models that achieves better results than the previous SOTA results by IBP, with a moderate increase in training time. It uses a CROWN-based bound in the warm-up phase of IBP, which serves as a better initialization for the later phase of IBP and leads to improvements in both robust and standard accuracy. The CROWN-based bound uses IBP to compute bounds for intermediate pre-activations and applies CROWN only to computing the bounds of the margins, which has a complexity between IBP and CROWN. The experimental results are very detailed and demonstrate the improvement.
SP:5c78aac08d907ff07205fe28bf9fa4385c58f40d
This work proposes CROWN-IBP, a novel and efficient certified defense method against adversarial attacks, combining linear relaxation methods, which tend to have tighter bounds, with the more efficient interval-based methods. The idea is to augment the IBP method, with its lower computation complexity, with the tight CROWN bounds, to get the best of both worlds. One of the primary contributions is the reduction of computation complexity by an order of n while maintaining similar or better bounds on error. The authors show compelling results with networks of varied sizes on both the MNIST and CIFAR datasets, providing significant improvements over past baselines.
SP:5c78aac08d907ff07205fe28bf9fa4385c58f40d
Smooth Kernels Improve Adversarial Robustness and Perceptually-Aligned Gradients
Recent research has shown that CNNs are often overly sensitive to high-frequency textural patterns . Inspired by the intuition that humans tend to be more sensitive to lower-frequency ( larger-scale ) patterns , we design a regularization scheme that penalizes large differences between adjacent components within each convolutional kernel . We apply our regularization to several popular training methods , demonstrating that the models with the proposed smooth kernels enjoy improved adversarial robustness . Further , building on recent work establishing connections between adversarial robustness and interpretability , we show that our method appears to give more perceptually-aligned gradients . 1 INTRODUCTION . In recent years , deep learning models have demonstrated remarkable capabilities for predictive modeling in computer vision , leading some to liken their abilities on perception tasks to those of humans ( e.g. , Weyand et al. , 2016 ) . However , under closer inspection , the restriction of such claims to the narrow scope of i.i.d . data becomes clear . For example , when faced with adversarial examples ( Szegedy et al. , 2013 ; Goodfellow et al. , 2015 ) or even in non-adversarial , domain-agnostic cross-domain evaluations ( Wang et al. , 2019a ; b ; Carlucci et al. , 2019 ) , performance collapses , dispelling claims of human-like perceptive capabilities and calling into doubt more ambitious applications of this technology in the wild . A long line of recent research has investigated the robustness of neural networks , including investigations of the high-dimensional nature of models ( Fawzi et al. , 2018 ) , enlarging the gaps between decision boundaries ( Zhang et al. , 2019a ) , training the models with augmented examples generated through attack methods ( Madry et al. , 2018 ) , and even guaranteeing the robustness of models within given radii of perturbation ( Wong & Kolter , 2018 ; Cohen et al. , 2019 ) . Compared to earlier methods , these recent works enjoy stronger robustness , both as assessed via theoretical guarantees and empirically via quantitative performance against strong attacks . However , despite the success of these techniques , vulnerabilities to new varieties of attacks are frequently discovered ( Zhang et al. , 2019b ) . In this paper , we aim to lessen the dependency of neural networks on high-frequency patterns in images , regularizing CNNs to focus on the low-frequency components . The main argument of this paper is thus that , by regularizing the CNN to be most sensitive to the low-frequency components of an image , we can improve the robustness of models . Interestingly , this also appears to lead to more perceptually-aligned gradients . Further , as Wang et al . ( 2019c ) explicitly defined the low ( or high ) -frequency components as images reconstructed from the low ( or high ) end of the image frequency domain ( as is frequently discussed in the neuroscience literature addressing human recognition of shape ( Bar , 2004 ) or face ( Awasthi et al. , 2011 ) ) , we continue with this definition and demonstrate that a smooth kernel can filter out the high-frequency components and improve the models ’ robustness . We test our ideas and show the empirical improvement over popular adversarially robust methods with standard evaluations , and further use model interpretation methods to understand how the models make decisions , demonstrating that the regularization helps the model to generate more perceptually-aligned gradients . 2 RELATED WORK .
Adversarial examples are samples with small perturbations applied that are imperceptible to humans but can nevertheless induce misclassification in machine learning models ( Szegedy et al. , 2013 ) . The discovery of adversarial examples spurred a torrent of research , much of it consisting of an arms race between those inventing new attack methods and others offering defenses to make classifiers robust to these sorts of attacks . We refer to survey papers such as ( Akhtar & Mian , 2018 ; Chakraborty et al. , 2018 ) and only list a few of the most relevant works on applying regularizations to networks to improve adversarial robustness , such as regularizations constraining the Lipschitz constant of the network ( Cisse et al. , 2017 ) ( Lipschitz smoothness ) , regularizing the scale of gradients ( Ross & Doshi-Velez , 2018 ; Jakubovitz & Giryes , 2018 ) ( smooth gradients ) , regularizing the curvature of the loss surface ( Moosavi-Dezfooli et al. , 2019 ) ( smooth loss curvature ) , and promoting the smoothness of the model distribution ( Miyato et al. , 2015 ) . These regularizations also use the concept of “ smoothness , ” but in a different sense from ours ( small differences among adjacent weights ) . Recently , adversarial training ( Goodfellow et al. , 2015 ; Madry et al. , 2018 ) has become one of the most popular defense methods , based on the simple idea of augmenting the training data with samples generated through attack methods ( i.e. , threat models ) . While adversarial training excels across many evaluations , recent evidence exposes its new limitations ( Zhang et al. , 2019b ) , suggesting that adversarial robustness remains a challenge . Key differences : In this paper , we present a new technique penalizing differences among the adjacent components of convolutional kernels . Moreover , we expand upon the recent literature demonstrating connections between adversarial robustness and perceptually-aligned gradients . 3 SMOOTH KERNEL REGULARIZATION . Intuition . High-frequency components of images are those reconstructed from the high end of the image frequency domain through the inverse Fourier transform . This definition was also verified previously by neuroscientists , who demonstrated that humans tend to rely on the low-frequency components of images to recognize shapes ( Bar , 2004 ) and faces ( Awasthi et al. , 2011 ) . Therefore , we argue that the smooth kernel regularization is effective because it helps to produce models less sensitive to high-frequency patterns in images . We define a smooth kernel as a convolutional kernel whose weight at each position does not differ much from those of its neighbors , i.e. , ( w_{i,j} − w_{h,k} )^2 is a small number for all ( h , k ) ∈ N ( i , j ) , where w denotes the convolutional kernel weights , i , j denote the indices of the convolutional kernel w , and N ( i , j ) denotes the set of spatial neighbors of i , j . We note two points that support our intuition . 1 . The frequency domain of a smooth kernel has only negligible high-frequency components . This argument can be shown with Theorem 1 in ( Platonov , 2005 ) . Roughly , the idea is to view the weight matrix w as a function that maps the index of a weight to the weight , w ( i , j ) → w_{i,j} ; then a smooth kernel can be seen as a Lipschitz function with constant α .
As pointed out by Platonov ( 2005 ) , Titchmarsh ( 1948 ) showed that when 0 < α < 1 , in the frequency domain , the sum of all the high-frequency components with a radius greater than r will converge to a small number , suggesting that the high-frequency components ( when r is large ) are negligible . 2 . The kernel with negligible high-frequency components will weigh the high-frequency components of input images accordingly . This argument can be shown through the Convolution Theorem ( Bracewell , 1986 ) , which states w ∗ x = F^{−1} ( F ( w ) ⊙ F ( x ) ) , where F ( · ) stands for the Fourier transform , ∗ stands for the convolution operation , and ⊙ stands for point-wise multiplication . As the theorem states , the convolution operation on images is equivalent to element-wise multiplication in the image frequency domain . Therefore , roughly , if w has negligible high-frequency components in the frequency domain , it will weigh the high-frequency components of x accordingly with negligible weights . Naturally , this argument only pertains to a single convolution , and we rely on our intuition that repeated applications of these smooth kernels across multiple convolution layers in a nonlinear deep network will have some cumulative benefit . Formally , we calculate our regularization term R_0 ( w ) as follows : R_0 ( w ) = ∑_{i,j} ∑_{( h , k ) ∈ N ( i , j )} ( w_{i,j} − w_{h,k} )^2 . We also aim to improve this regularization by trying a few additional heuristics :
• First , we notice that directly appending R_0 ( w ) will sometimes lead to models that achieve a small value of R_0 ( w ) by directly scaling down every coefficient of w proportionally , without changing the fluctuation pattern of the weights . To fix this problem , we directly subtract the scale of w ( i.e. , ∑_{i,j} w_{i,j}^2 ) from R_0 ( w ) .
• Another heuristic to fix this same problem is to directly divide R_0 ( w ) by the scale of w. Empirically , we do not observe significant differences between these two heuristics . We settle on the first heuristic because of the difficulty in calculating the gradient when a matrix is the denominator .
• Finally , we empirically observe that the regularization above will play a significant role during the early stage of training , but may damage the overall performance later when the regularization pulls towards smoothness too much . To mitigate this problem , we use an exponential function to strengthen the effects of the regularization when the value is big and to weaken it when the value is small .
Overall , our final regularization is : R ( w ) = exp ( ∑_{i,j} ∑_{( h , k ) ∈ N ( i , j )} ( w_{i,j} − w_{h,k} )^2 − ∑_{i,j} w_{i,j}^2 ) . In practice , the convolutional kernel is usually a 4-dimensional tensor , while our method only encourages smoothness over the two spatial dimensions corresponding to the 2D images . Thus , we only regularize through these two dimensions , broadcasting the operation through the channels . Because a repeated calculation of each kernel component ’ s distance to its neighbors would double-count some pairs , our implementation instead enumerates over all pairs of neighbors , counting each squared difference only once towards the total penalty . We can directly append the regularization λR ( w ) to most loss functions , where λ is a tuning hyperparameter . In the following experiments , we append λR ( w ) to the vanilla loss function ( cross-entropy loss ) , the Trades loss ( Zhang et al. , 2019a ) , the adversarial training loss ( Madry et al. , 2018 ) , and a variation of the logit pairing loss ( Kannan et al. , 2018 ) , as introduced in the following paragraphs . Adversarial training works by fitting the model using adversarial examples generated on the fly at train time by the threat model . The Trades loss fits the model with clean examples while regularizing the softmax of augmented adversarial examples to be close to that produced for the corresponding clean examples . A natural alternative is to fit the model with augmented adversarial examples while regularizing the softmax of clean examples to be close to that of the corresponding adversarial examples , which is related to logit pairing . However , to make the comparison consistent , we use a variation of logit pairing , penalizing the KL divergence of the softmax ( rather than the ℓ2 distance over logits ) , following the Trades loss , which also uses the KL divergence over the softmax as the distance metric . To be specific , with standard notations such as 〈X , Y〉 denoting a data set and 〈x , y〉 denoting a sample , the logit pairing loss is formalized as : min E_{〈x,y〉∼〈X,Y〉} [ l ( f ( x′ ; θ ) ; y ) + γ k ( f_l ( x′ ; θ ) , f_l ( x ; θ ) ) ] , where x′ = argmax_{d ( x′ , x ) ≤ ε} l ( f ( x′ ; θ ) ; y ) , d ( · , · ) and k ( · , · ) are distance functions , f_l ( · ; · ) denotes the model f ( · ; · ) but outputs the softmax instead of a prediction , l ( · , · ) is a cost function , γ is a tuning hyperparameter , and ε is the upper bound of the perturbation . In our following experiments , we consider d ( · , · ) to be the ℓ∞ norm , following the popular adversarial training set-up , and k ( · , · ) to be the KL divergence , following the standard Trades loss . Intuitively , our usage of the KL divergence in the logit pairing loss is argued to be advantageous because Pinsker ’ s inequality suggests that the KL divergence upper-bounds the total variation ( TV ) distance ( e.g. , Csiszar & Körner , 2011 ) ; the usage of the KL divergence can thus be seen as a regularization that limits the hypothesis space to the parameters that yield a small TV distance over perturbations of samples , which is linked to the robustness of an estimator , a topic that has been studied by the statistics community for decades ( e.g. , see ( Diakonikolas et al. , 2019 ) and references within ) .
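As a minimal sketch of the regularizer described above , assuming 4-connected spatial neighbors ( each pair counted once ) with the sums broadcast over the channel dimensions ; this is one plausible reading of R ( w ) , not the authors ’ code .

```python
import torch

def smooth_kernel_penalty(w):
    # w: conv kernel of shape (out_ch, in_ch, kH, kW). Sum squared differences
    # between 4-connected spatial neighbors (each pair counted once), subtract
    # the squared scale of w, and exponentiate.
    dv = (w[:, :, 1:, :] - w[:, :, :-1, :]) ** 2  # vertical neighbor pairs
    dh = (w[:, :, :, 1:] - w[:, :, :, :-1]) ** 2  # horizontal neighbor pairs
    return torch.exp(dv.sum() + dh.sum() - (w ** 2).sum())

# Usage sketch: total_loss = task_loss + lam * sum(
#     smooth_kernel_penalty(m.weight)
#     for m in model.modules() if isinstance(m, torch.nn.Conv2d))
```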
Paper summary: This paper argues that reducing the reliance of neural networks on high-frequency components of images could help robustness against adversarial examples. To attain this goal, the authors propose a new regularization scheme that encourages convolutional kernels to be smoother. The authors augment standard loss functions with the proposed regularization scheme and study the effect on adversarial robustness, as well as perceptual-alignment of model gradients.
SP:687a3382a219565eb3eb85b707017eb582439565
Smooth Kernels Improve Adversarial Robustness and Perceptually-Aligned Gradients
Recent research has shown that CNNs are often overly sensitive to high-frequency textural patterns . Inspired by the intuition that humans tend to be more sensitive to lower-frequency ( larger-scale ) patterns , we design a regularization scheme that penalizes large differences between adjacent components within each convolutional kernel . We apply our regularization onto several popular training methods , demonstrating that the models with the proposed smooth kernels enjoy improved adversarial robustness . Further , building on recent work establishing connections between adversarial robustness and interpretability , we show that our method appears to give more perceptually-aligned gradients . 1 INTRODUCTION . In recent years , deep learning models have demonstrated remarkable capabilities for predictive modeling in computer vision , leading some to liken their abilities on perception tasks to those of humans ( e.g. , Weyand et al. , 2016 ) . However , under closer inspection , the limits of such claims to the narrow scope of i.i.d . data become clear . For example , when faced with adversarial examples ( Szegedy et al. , 2013 ; Goodfellow et al. , 2015 ) or even in non-adversarial domain-agnostic cross-domain evaluations ( Wang et al. , 2019a ; b ; Carlucci et al. , 2019 ) , performance collapses , dispelling claims of human-like perceptive capabilities and calling into doubt more ambitious applications of this technology in the wild . A long line of recent research has investigated the robustness of neural networks , including investigations of the high-dimension nature of models ( Fawzi et al. , 2018 ) , enlarging the gaps between decision boundaries ( Zhang et al. , 2019a ) , training the models with augmented examples through attack methods ( Madry et al. , 2018 ) , and even guaranteeing the robustness of models within given radii of perturbation ( Wong & Kolter , 2018 ; Cohen et al. , 2019 ) . Compared to earlier methods , these recent works enjoy stronger robustness both as assessed via theoretical guarantees and empirically via quantitative performance against strong attacks . However , despite the success of these techniques , vulnerabilities to new varieties of attacks are frequently discovered ( Zhang et al. , 2019b ) . In this paper , we aim to lessen the dependency of neural networks on high-frequency patterns in images , regularizing CNNs to focus on the low-frequency components . Therefore , the main argument of this paper is that : by regularizing the CNN to be most sensitive to the low-frequency components of an image , we can improve the robustness of models . Interestingly , this also appears to lead to more perceptually-aligned gradients . Further , as Wang et al . ( 2019c ) explicitly defined the low ( or high ) -frequency components as images reconstructed from the low ( or high ) -end of the image frequency domain ( as is frequently discussed in neuroscience literature addressing human recognition of shape ( Bar , 2004 ) or face ( Awasthi et al. , 2011 ) ) , we continue with this definition and demonstrate that a smooth kernel can filter out the high-frequency components and improve the models ’ robustness . We test our ideas and show the empirical improvement over popular adversarial robust methods with standard evaluations and further use model interpretation methods to understand how the models make decisions and demonstrate that the regularization helps the model to generate more perceptually-aligned gradients . 2 RELATED WORK . 
Adversarial examples are samples with small perturbations applied that are imperceptible to humans but can nevertheless induce misclassification in machine learning models ( Szegedy et al. , 2013 ) ) . The discovery of adversarial examples spurred a torrent of research , much of it consisting of an arm race between those inventing new attack methods and others offering defenses to make classifiers robust to these sorts of attacks . We refer to survey papers such as ( Akhtar & Mian , 2018 ; Chakraborty et al. , 2018 ) and only list a few most relevant works about applying regularizations to the networks to improve the adversarial robustness , such as regularizations constraining the Lipschitz constant of the network ( Cisse et al. , 2017 ) ( Lipschitz smoothness ) , regularizing the scale of gradients ( Ross & Doshi-Velez , 2018 ; Jakubovitz & Giryes , 2018 ) ( smooth gradients ) , regularizing the curvature of the loss surface ( Moosavi-Dezfooli et al. , 2019 ) ( smooth loss curvature ) , and promoting the smoothness of the model distribution ( Miyato et al. , 2015 ) . These regularizations also use the concept of “ smoothness , ” but different from ours ( small differences among the adjacent weights ) . Recently , adversarial training ( Goodfellow et al. , 2015 ; Madry et al. , 2018 ) has become one of the most popular defense methods , based on the simple idea of augmenting the training data with samples generated through attack methods ( i.e. , threat models ) . While adversarial training excels across many evaluations , recent evidence exposes its new limitations ( Zhang et al. , 2019b ) , suggesting that adversarial robustness remains a challenge . Key differences : In this paper , we present a new technique penalizing differences among the adjacent components of convolutional kernels . Moreover , we expand upon the recent literature demonstrating connections between adversarial robustness and perceptually-aligned gradients . 3 SMOOTH KERNEL REGULARIZATION . Intuition . High-frequency components of images are those reconstructed from the high-end of the image frequency-domain through inverse Fourier transform . This definition was also verified previously by neuroscientists who demonstrated that humans tend to rely on the low-frequency component of images to recognize shapes ( Bar , 2004 ) and faces ( Awasthi et al. , 2011 ) . Therefore , we argue that the smooth kernel regularization is effective because it helps to produce models less sensitive to high-frequency patterns in images . We define a smooth kernel as a convolutional kernel whose weight at each position does not differ much from those of its neighbors , i.e. , ( wi , j −wh , k∈N ( i , j ) ) 2 is a small number , where w denotes the convolutional kernel weight , i , j denote the indices of the convolutional kernel w , and N ( i , j ) denotes the set of the spatial neighbors of i , j . We note two points that support our intuition . 1 . The frequency domain of a smooth kernel has only negligible high-frequency components . This argument can be shown with Theorem 1 in ( Platonov , 2005 ) . Roughly , the idea is to view the weight matrix w as a function that maps the index of weights to the weights : w ( i , j ) → wi , j , then a smooth kernel can be seen as a Lipschitz function with constant α . 
As pointed out by Platonov ( 2005 ) , Titchmarsh ( 1948 ) showed that when 0 < α < 1 , in the frequency domain , the sum of all the high frequency components with a radius greater than r will converge to a small number , suggesting that the high-frequency components ( when r is large ) are negligible . 2 . The kernel with negligible high-frequency components will weigh the high-frequency components of input images accordingly . This argument can be shown through Convolution Theorem ( Bracewell , 1986 ) , which states w~x = F−1 ( F ( w ) F ( x ) ) , where F ( · ) stands for Fourier transform , ~ stands for convolution operation , and stands for point-wise multiplication . As the theorem states , the convolution operation of images is equivalent to the element-wise multiplication of image frequency domain . Therefore , roughly , if w has negligible high-frequency components in the frequency domain , it will weigh the high-frequency components of x accordingly with negligible weights . Naturally , this argument only pertains to a single convolution , and we rely on our intuition that repeated applications of these smooth kernels across multiple convolution layers in a nonlinear deep network will have some cumulative benefit . Formally , we calculate our regularization term R0 ( w ) as follows : R0 ( w ) = ∑ i , j ∑ h , k∈N ( i , j ) ( wi , j −wh , k ) 2 , We also aim to improve this regularization by trying a few additional heuristics : • First , we notice that directly appending R0 ( w ) will sometimes lead to models that achieve the a small value of R0 ( w ) by directly scaling down the every coefficient of w proportionally , without changing the fluctuation pattern of the weights . To fix this problem , we directly subtract the scale of w ( i.e. , ∑ i , j w 2 i , j ) after R0 ( w ) . • Another heuristic to fix this same problem is to directly divide R0 ( w ) by the scale of w. Empirically , we do not observe significant differences between these two heuristics . We settle with the first heuristic because of the difficulty in calculating gradient when a matrix is the denominator . • Finally , we empirically observe that the regularization above will play a significant role during the early stage of training , but may damage the overall performances later when the regularization pulls towards smoothness too much . To mitigate this problem , we use an exponential function to strengthen the effects of the regularization when the value is big and to weaken it when the value is small . Overall , our final regularization is : R ( w ) = exp ∑ i , j ∑ h , k∈N ( i , j ) ( wi , j −wh , k ) 2 − ∑ i , j w2i , j In practice , the convolutional kernel is usually a 4-dimensional tensor , while our method only encourages smoothness over the two spatial dimensions corresponding to the 2D images . Thus , we only regularize through these two dimensions broadcasting the operation through the channels . Because a repeated calculation of each kernel component ’ s distance with its neighbors will double count some pairs , our implementation instead enumerates over all pairs of neighbors , counting each squared difference only once towards the total penalty . We can directly append the regularization λR ( w ) to most loss functions , where λ is a tuning hyperparameter . In the following experiments , we append λR ( w ) to the vanilla loss function ( crossentropy loss ) , Trades loss ( Zhang et al. , 2019a ) , adversarial training loss ( Madry et al. , 2018 ) , and a variation of logit pairing loss ( Kannan et al. 
We can directly append the regularization $\lambda R(\mathbf{w})$ to most loss functions, where $\lambda$ is a tuning hyperparameter. In the following experiments, we append $\lambda R(\mathbf{w})$ to the vanilla loss function (cross-entropy loss), the Trades loss (Zhang et al., 2019a), the adversarial training loss (Madry et al., 2018), and a variation of the logit pairing loss (Kannan et al., 2018), as introduced in the following paragraphs. Adversarial training works by fitting the model using adversarial examples generated on the fly at train time by the threat model. The Trades loss fits the model with clean examples while regularizing the softmax of augmented adversarial examples to be close to that produced for the corresponding clean examples. A natural alternative is to fit the model with augmented adversarial examples while regularizing the softmax of clean examples to be close to that of the corresponding adversarial examples, which is related to logit pairing. However, to make the comparison consistent, we use a variation of logit pairing that penalizes the KL divergence of the softmax outputs (rather than the $\ell_2$ distance over logits), following the Trades loss, which also uses KL divergence over softmax as its distance metric. To be specific, with standard notation such as $\langle X, Y \rangle$ denoting a data set and $\langle x, y \rangle$ denoting a sample, the logit pairing loss is formalized as: $\min_\theta \mathbb{E}_{\langle x, y \rangle \sim \langle X, Y \rangle}\, l(f(x'; \theta); y) + \gamma\, k(f_l(x'; \theta), f_l(x; \theta))$, where $x' = \operatorname{argmax}_{d(x', x) \le \epsilon}\, l(f(x'; \theta); y)$, where $d(\cdot,\cdot)$ and $k(\cdot,\cdot)$ are distance functions, $f_l(\cdot;\cdot)$ denotes the model $f(\cdot;\cdot)$ but outputs the softmax instead of a prediction, $l(\cdot,\cdot)$ is a cost function, $\gamma$ is a tuning hyperparameter, and $\epsilon$ is the upper bound of the perturbation. In our following experiments, we take $d(\cdot,\cdot)$ to be the $\ell_\infty$ norm, following the popular adversarial training setup, and $k(\cdot,\cdot)$ to be the KL divergence, following the standard Trades loss. Intuitively, our use of KL divergence in the logit pairing loss is advantageous because Pinsker's inequality upper-bounds the total variation (TV) distance by $\sqrt{\mathrm{KL}/2}$ (e.g., Csiszar & Körner, 2011); the use of KL divergence can therefore be seen as a regularization that limits the hypothesis space to parameters that yield small TV distance over perturbations of samples, which is linked to the robustness of an estimator, a topic the statistics community has studied for decades (e.g., see Diakonikolas et al. (2019) and references within).
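The pairing term itself is cheap to compute from the two softmax outputs. Below is a minimal numpy sketch of the loss above, assuming the adversarial examples x' have already been generated by an ℓ∞-bounded attack; the KL direction (adversarial softmax as the first argument) is our reading of the formula, and all names are illustrative.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_divergence(p, q, tiny=1e-12):
    """KL(p || q), summed over classes and averaged over the batch."""
    return np.mean(np.sum(p * (np.log(p + tiny) - np.log(q + tiny)), axis=-1))

def logit_pairing_loss(logits_adv, logits_clean, labels, gamma):
    """Cross-entropy on adversarial examples plus the KL pairing term."""
    p_adv, p_clean = softmax(logits_adv), softmax(logits_clean)
    ce = -np.mean(np.log(p_adv[np.arange(len(labels)), labels] + 1e-12))
    return ce + gamma * kl_divergence(p_adv, p_clean)
```

In training, λR(w) from the previous paragraphs would be appended to this quantity and the total minimized by SGD.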
The authors propose a method for learning smoother convolutional kernels with the goal of improving robustness and human alignment. Specifically, they propose a regularizer penalizing large changes between consecutive pixels of the kernel with the intuition of penalizing the use of high-frequency input components. They evaluate the impact of their method on the adversarial robustness of various models and class visualization methods.
SP:687a3382a219565eb3eb85b707017eb582439565
Discovering Topics With Neural Topic Models Built From PLSA Loss
1 INTRODUCTION. Nowadays, in the digital era, electronic text corpora are ubiquitous. These corpora can be company emails, newsgroup articles, online journal articles, Wikipedia articles, or video metadata (titles, descriptions, tags). Such corpora can be very large, thus requiring automatic analysis methods, which are investigated by researchers working on text content analysis (Collobert et al., 2011; Cambria & White, 2014). Investigated methods include named entity recognition, text classification, etc. (Nadeau & Sekine, 2007; S., 2002). An important problem in text analysis is structuring text corpora around topics (Daud et al., 2010; Liu & Zhang, 2012). The developed tools would allow summarizing very large amounts of text documents into a limited, human-understandable number of topics. In computer science many definitions of the concept of a topic can be encountered. Two definitions are very popular. The first defines a topic as an entity of a knowledge graph such as Freebase or Wikidata (Bollacker et al., 2008; Vrandečić & Krötzsch, 2014). The second defines a topic as a probability distribution over the words of a given vocabulary (Hofmann, 2001; Blei et al., 2003). When topics are represented as knowledge graph entities, documents can be associated with identified concepts that have very precise meanings. The main drawback is that knowledge graphs are in general composed of a very large number of entities. For example, in 2019, Wikidata counted about 40 million entities. Automatically identifying these entities requires building extreme classifiers trained with expensive labelled data (Puurula et al., 2014; Liu et al., 2017). When topics are defined as probability distributions over the words of a vocabulary, they can be identified using unsupervised methods that automatically extract them from text corpora. A precursor of such methods is the latent semantic analysis (LSA) model, which is based on factorizing the word-document co-occurrence counts matrix (Dumais, 1990). Since then, LSA has been extended to various probabilistic models (Hofmann, 2001; Blei et al., 2003), and more recently to neural network based models (Salakhutdinov & Hinton, 2009; Larochelle & Lauly, 2012; Wan et al., 2011; Yao et al., 2017; Dieng et al., 2017). In this paper, we propose a novel neural network based model to automatically discover, in an unsupervised fashion, the topics of a text corpus. The first variation of the model is based on a neural network that uses discrete lookup-table embeddings of documents, words, and topics, as inputs or parameters, to represent the probabilities of words given documents, of words given topics, and of topics given documents. However, because the number of documents in a given corpus can be very large, a discrete lookup-table embedding that explicitly associates an embedded vector to each document can be impractical. For example, for online stores such as Amazon, or video platforms such as Dailymotion or Youtube, the number of documents is on the order of billions. To overcome this limitation, we propose a model that generates continuous document embeddings using a neural auto-encoder (Kingma & Welling, 2013). Our neural topic models are trained using a cross-entropy loss exploiting the probabilistic latent semantic analysis (PLSA) assumption that, given topics, words and documents can be considered independent.
The proposed models are evaluated on six datasets: KOS, NIPS, NYtimes, TwentyNewsGroup, Wikipedia English 2012, and Dailymotion English. The first four datasets are classically used to benchmark topic models based on the bag-of-words representation (Dua & Graff, 2017). Wikipedia and Dailymotion are large-scale datasets containing about one million documents each. These latter datasets are used to qualitatively assess how our models behave at large scale. The conducted experiments demonstrate that the proposed models are effective in discovering latent topics. Furthermore, the evaluation results show that our models achieve lower perplexity than latent Dirichlet allocation (LDA) trained on the same datasets. The remainder of this paper is organized as follows. Section 2 discusses related work. Section 3 briefly presents the principles of topic generation with PLSA. Section 4 presents the first version of the proposed model, which is based on discrete topic, document, and word embeddings. Section 5 gives details about the second version of the model, which embeds documents using a continuous neural auto-encoder. Section 6 provides details about the experiments conducted to assess the effectiveness of the proposed models. Finally, Section 7 draws conclusions and gives future research directions. 2 RELATED WORK. Unsupervised text analysis with methods related to latent semantic analysis (LSA) has a long research history. Latent semantic analysis takes a high-dimensional text vector representation and applies linear dimensionality reduction methods such as singular value decomposition (SVD) to the word-document counts matrix (Dumais, 1990). The main drawback of LSA is its lack of statistical foundations, which limits the model's interpretability. Probabilistic latent semantic analysis (PLSA) was proposed by Hofmann (2001) to ground LSA on solid statistical foundations. PLSA is based on a well-defined generative model of text under the bag-of-words assumption. PLSA can be interpreted as a probabilistic matrix factorization of the word-document counts matrix. Because PLSA defines a probabilistic mixture model, its parameters can be estimated using the classical Expectation-Maximization (EM) algorithm (Moon, 1996). PLSA has been exploited in many applications: text modelling by Hofmann (2001), collaborative filtering by Popescul et al. (2001), web-link analysis by Cohn & Hofmann (2001), and visual scene classification by Quelhas et al. (2007). The main drawback of PLSA is that it is a generative model of the training data only: it does not apply to unseen data. To extend PLSA to unseen data, Blei et al. (2003) proposed latent Dirichlet allocation (LDA), which models documents via hidden Dirichlet random variables specifying probabilities on a lower-dimensional hidden space. The distribution over words of an unseen document is a continuous mixture over document space and a discrete mixture over all possible topics. Modelling with LDA has been thoroughly investigated, resulting in dynamic topic models accounting for topics' temporal dynamics by Blei & Lafferty (2006); Wang et al. (2008); Shalit et al. (2013); Varadarajan et al. (2013); Farrahi & Gatica-Perez (2014), hierarchical topic models accounting for hierarchical topic structures by Blei et al.
(2004), multi-lingual topic models accounting for multi-lingual corpora by Boyd-Grabber & Blei (2009); Vulic et al. (2015), and supervised topic models accounting for corpora composed of categorized documents (Blei & McAuliffe, 2008). Besides text modelling, LDA has been applied to discovering people's socio-geographic routines from mobile phone data by Farrahi & Gatica-Perez (2010; 2011; 2014), and to mining recurrent activities from long-term video logs by Varadarajan et al. (2013). Learning topic models based on LSA, PLSA or LDA requires considering all words, documents, and topics jointly. This is a strong limitation when the vocabulary and the number of documents are very large. For example, for PLSA or LDA, learning the model requires maintaining a large matrix containing the probabilities of topics given words and documents (Hofmann, 2001; Blei et al., 2003). To overcome this limitation, Hoffman et al. (2010) proposed online training of LDA models using stochastic variational inference. Recently, with the rise of deep learning with neural networks trained using stochastic gradient descent on sample batches, novel topic models based on neural networks have been proposed. Salakhutdinov & Hinton (2009) proposed a two-layer restricted Boltzmann machine (RBM) called the replicated softmax to extract low-level latent topics from a large collection of unstructured documents. The model is trained using the contrastive divergence formalism proposed by Carreira-Perpiñán & Hinton (2005). Benchmarking the model against LDA showed improvements in terms of unseen documents' perplexity and accuracy on retrieval tasks. Larochelle & Lauly (2012) proposed a neural auto-regressive topic model inspired by the replicated softmax model but replacing the RBM with the neural auto-regressive distribution estimator (NADE), a generative model over vectors of binary observations (Larochelle & Murray, 2011). An advantage of NADE over the RBM is that during training, unlike for the RBM, computing the gradient of the data negative log-likelihood with respect to the model parameters does not require Monte Carlo approximation. Srivastava et al. (2013) generalized the replicated softmax model of Salakhutdinov & Hinton (2009) to a deep RBM, which has more representational power. Cao et al. (2015) proposed the neural topic model (NTM), and its supervised extension (sNTM), where word and document embeddings are combined. This work goes beyond the bag-of-words representation by embedding word n-grams with word2vec embeddings, as proposed by Mikolov et al. (2013). Moody (2016) proposed lda2vec, a model combining a Dirichlet topic model, as in Blei et al. (2003), with word embeddings, as in Mikolov et al. (2013). The goal of lda2vec is to embed both words and documents in the same space in order to learn both representations simultaneously. Other interesting works combine probabilistic topic models such as LDA with neural network modelling (Wan et al., 2011; Yao et al., 2017; Dieng et al., 2017). Wan et al. (2011) proposed a hybrid model combining a neural network and a latent topic model, where the neural network provides a lower-dimensional embedding of the input data while the topic model extracts further structure from the neural network output features; the model was validated on computer vision tasks. Yao et al.
(2017) proposed to integrate knowledge graph embeddings into probabilistic topic modelling by using as observations for the probabilistic topic model both document-level word counts and knowledge graph entities embedded into vector form. Dieng et al. (2017) integrated global word semantic information, extracted using a probabilistic topic model, into a recurrent neural network based language model. 3 TOPIC MODELLING WITH PROBABILISTIC LATENT SEMANTIC ANALYSIS. Probabilistic latent semantic analysis (PLSA), proposed by Hofmann (2001), is based on the bag-of-words representation defined in the following. 3.1 BAG OF WORDS REPRESENTATION. The grounding assumption of the bag-of-words representation is that, for text content representation, only word occurrences matter; word order can be ignored without harm to understanding. Let us assume a corpus of documents $D = \{doc_1, doc_2, \ldots, doc_i, \ldots, doc_I\}$ is available. Every document is represented by the occurrence counts of the words of a given vocabulary $W = \{word_1, word_2, \ldots, word_n, \ldots, word_N\}$. Let us denote by $c(word_n, doc_i)$ the occurrence count of the $n$-th vocabulary word in the $i$-th document. The normalized bag-of-words representation of the $i$-th document is given by the empirical word occurrence probabilities: $f_{ni} = \frac{c(word_n, doc_i)}{\sum_{m=1}^{N} c(word_m, doc_i)}, \; n = 1, \ldots, N.$ (1) Under the bag-of-words assumption, $f_{ni}$ is an empirical approximation of the probability that $word_n$ appears in document $doc_i$, denoted $p(word_n | doc_i)$.
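As a quick illustration of Equation (1), the following numpy sketch turns a word-document count matrix into the empirical probabilities $f_{ni}$; the toy corpus is ours.

```python
import numpy as np

def bag_of_words(counts):
    """Normalize a (documents x vocabulary) count matrix row-wise.

    Row i holds f_ni = c(word_n, doc_i) / sum_m c(word_m, doc_i).
    """
    counts = np.asarray(counts, dtype=float)
    totals = counts.sum(axis=1, keepdims=True)
    return counts / np.maximum(totals, 1.0)  # guard against empty documents

# Toy corpus: 2 documents over a 3-word vocabulary.
c = np.array([[2, 1, 1],
              [0, 3, 1]])
print(bag_of_words(c))  # [[0.5 0.25 0.25] [0. 0.75 0.25]]
```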
This paper proposes a neural topic model that aims to discover topics by minimizing a version of the PLSA loss. According to PLSA, a document is represented as a mixture of topics, while a topic is a probability distribution over words, with documents and words assumed independent given topics. Thanks to this assumption, each of these probability distributions (word|topic, topic|document, and word|document) can essentially be expressed as a matrix multiplication of the other two, and EM is usually adopted for the optimization. This paper proposes to embed these relationships in a neural network and then optimize the model using SGD.
I am unimpressed with the quality of writing and presentation, to begin with. There are numerous grammatical errors and typos that make the paper a very difficult read. The presentation also follows an inequitable pattern where the backgrounds and related works are overemphasized and the actual contribution of the paper seems very limited. In its current form, this paper is not ready for publication in ICLR.
SP:b9b8e3efa69342c90b91dcb29bda1e2f8127581e
GDP: Generalized Device Placement for Dataflow Graphs
1 INTRODUCTION. Neural networks have demonstrated remarkable scalability: improved performance can usually be achieved by training a larger model on a larger dataset (Hestness et al., 2017; Shazeer et al., 2017; Jozefowicz et al., 2016; Mahajan et al., 2018; Radford et al.). Training such large models efficiently while meeting device constraints, like memory limitations, necessitates partitioning the underlying dataflow graphs of the models across multiple devices. However, devising a good partitioning and placement of a dataflow graph requires a deep understanding of the model architecture, the optimizations performed by domain-specific compilers, and the device characteristics, and is therefore extremely hard even for experts. ML practitioners often rely on their understanding of the model architecture to determine a reasonable partitioning and placement for graphs. However, relying solely on the model architecture while ignoring the effect of the partitioning on subsequent compiler optimizations like op-fusion can lead to sub-optimal placements and consequently under-utilization of available devices. The goal of automated device placement is to find the optimal assignment of operations to devices such that the end-to-end execution time for a single step is minimized and all device constraints, like memory limitations, are satisfied. Since this objective function is non-differentiable, prior approaches (Mirhoseini et al., 2017; 2018; Gao et al., 2018) have explored solutions based on reinforcement learning (RL). However, these RL policies are usually not transferable and require training a new policy from scratch for each individual graph. This makes such approaches impractical due to the significant amount of compute required for the policy search itself, at times offsetting the gains made by the reduced step time. In this paper, we propose an end-to-end deep RL method for device placement where the learned policy is generalizable to new graphs. Specifically, the policy network consists of a graph-embedding network that encodes operation features and dependencies into a trainable graph representation, followed by a scalable sequence-to-sequence placement network based on an improved Transformer (Vaswani et al., 2017; Dai et al., 2019). The placement network transforms the graph representations into a placement decision with soft attention, removing hard constraints such as hierarchical grouping of operations (Mirhoseini et al., 2018) or co-location heuristics used to reduce the placement complexity (Mirhoseini et al., 2017). Our graph-embedding network and placement network can be jointly trained in an end-to-end fashion using a supervised reward, without the need to manipulate loss functions at multiple levels. We empirically show that the network learns flexible placement policies at a per-node granularity and can scale to problems with over 50,000 nodes. To generalize to arbitrary and held-out graphs, our policy is trained jointly over a set of dataflow graphs (instead of one at a time) and then fine-tuned on each graph individually. By transferring the learned graph embeddings and placement policies, we are able to achieve faster convergence and thus use fewer resources to obtain high-quality placements. We also use super-positioning, i.e.
, a feature conditioning mechanism based on the input graph embeddings, to effectively orchestrate the optimization dynamics of graphs with drastically different sizes in the same batch. Our contributions can be summarized as follows: 1. An end-to-end device placement network that can generalize to arbitrary and held-out graphs. This is enabled by jointly learning a transferable graph neural network along with the placement network. 2. A scalable placement network with an efficient recurrent attention mechanism, which eliminates the need for an explicit grouping stage before placement. The proposed end-to-end network provides 15× faster convergence as compared to the hierarchical LSTM model used in earlier works (Mirhoseini et al., 2017; 2018). 3. A new batch pre-training and fine-tuning strategy based on network superposition, which leads to improved transferability, better placements especially for larger graphs, and a 10× reduction in policy search time as compared to training individual graphs from scratch. 4. Superior performance over a wide set of workloads, including InceptionV3 (Szegedy et al., 2015), AmoebaNet (Real et al., 2018), RNNs, GNMT (Wu et al., 2016), Transformer-XL (Dai et al., 2019), WaveNet (van den Oord et al., 2016), and more. 2 RELATED WORK. Device Placement. Reinforcement learning has been used for device placement of a given dataflow graph (Mirhoseini et al., 2017) and demonstrated run time reductions over human-crafted placements and conventional heuristics. For improved scalability, a hierarchical device placement strategy (HDP) (Mirhoseini et al., 2018) has been proposed that clusters operations into groups before placing the operation groups onto devices. Spotlight (Gao et al., 2018) applies proximal policy optimization and cross-entropy minimization to lower the training overhead. Both HDP and Spotlight rely on LSTM controllers that are difficult to train and struggle to capture very long-term dependencies over large graphs. In addition, both methods process only a single graph at a time and cannot generalize to arbitrary and held-out graphs. Placeto (Addanki et al., 2019) represents the first attempt to generalize device placement using a graph embedding network. But like HDP, Placeto relies on hierarchical grouping and only generates a placement for one node at each time step. Our approach (GDP) leverages a recurrent attention mechanism and generates the whole graph's placement at once. This significantly reduces the training time for the controller. We also demonstrate the generalization ability of GDP over a wider set of important workloads. Parallelization Strategy. Mesh-TensorFlow is a language that provides a general class of distributed tensor computations. While data-parallelism can be viewed as splitting tensors and operations along the "batch" dimension, in Mesh-TensorFlow the user can specify any tensor dimensions to be split across any dimensions of a multi-dimensional mesh of processors. FlexFlow (Jia et al., 2018) introduces SOAP, a more comprehensive search space of parallelization strategies for DNNs, which allows parallelization of a DNN in the Sample, Operator, Attribute, and Parameter dimensions. It uses a guided randomized search of the SOAP space to find a parallelization strategy for a specific parallel machine. GPipe (Huang et al.
, 2018) proposed pipeline parallelism, partitioning a model across different accelerators and automatically splitting a mini-batch of training examples into smaller micro-batches. By pipelining the execution across micro-batches, accelerators can operate in parallel. GDP focuses on a general deep RL method for automating device placement on arbitrary graphs, and is therefore orthogonal to existing parallelization strategies. Compiler Optimization. REGAL (Paliwal et al., 2019) uses deep RL to optimize the execution cost of computation graphs in a static compiler. The method leverages the policy's ability to transfer to new graphs to improve the quality of the genetic algorithm for the same objective budget. However, REGAL only targets peak memory minimization, while GDP focuses on graph run time and scalability while also meeting the peak memory constraints of the devices. Specifically, we generalize graph partitioning and placement into a single end-to-end problem, with and without simulation, which can handle graphs with over 50,000 nodes. 3 END-TO-END PLACEMENT POLICY. Given a dataflow graph $G(V, E)$, where $V$ represents atomic computational operations (ops) and $E$ represents the data dependencies, our goal is to learn a policy $\pi : \mathcal{G} \mapsto \mathcal{D}$ that assigns a placement $D \in \mathcal{D}$ to all the ops in a given graph $G \in \mathcal{G}$, so as to maximize the reward $r_{G,D}$ defined based on the run time. $D$ is the set of allocated devices, which can be a mixture of CPUs and GPUs. In this work, we represent the policy $\pi_\theta$ as a neural network parameterized by $\theta$. Unlike prior works that focus on a single graph only, the RL objective in GDP is defined to simultaneously reduce the expected runtime of the placements over a set of $N$ dataflow graphs: $J(\theta) = \mathbb{E}_{G \sim \mathcal{G}, D \sim \pi_\theta(G)}[r_{G,D}] \approx \frac{1}{N} \sum_{G} \mathbb{E}_{D \sim \pi_\theta(G)}[r_{G,D}]$ (1) In the following, we refer to the case where $N = 1$ as individual training and the case where $N > 1$ as batch training. We optimize the objective above using Proximal Policy Optimization (PPO) (Schulman et al., 2017) for improved sample efficiency. Figure 1 shows an overview of the proposed end-to-end device placement network. The proposed policy network $\pi_\theta$ consists of a graph embedding network that learns a graphical representation of any dataflow graph, and a placement network that learns a placement strategy over the given graph embeddings. The two components are jointly trained in an end-to-end fashion. The policy $p(a|G)$ is applied to make a set of decisions at each node. These decisions, denoted $a_v$ for each $v \in V$ across all nodes, form one action $a = \{a_{v \in V}\}$. One decision corresponds to playing one arm of a multi-armed bandit problem, and specifying the entire $a$ corresponds to playing several arms together in a single shot. Note that the architecture is designed to be invariant to the underlying graph topology, enabling us to apply the same learned policy to a wide set of input graphs with different structures. 3.1 GRAPH EMBEDDING NETWORK. We leverage graph neural networks (GNNs) (Hamilton et al., 2017; Xu et al., 2019; You et al., 2018) to capture the topological information encoded in the dataflow graph. Most graph embedding frameworks are inherently transductive and can only generate embeddings for a given fixed graph. These transductive methods do not efficiently extrapolate to handle unseen nodes (e.g., in evolving graphs), and cannot learn to generalize to unseen graphs. GraphSAGE (Hamilton et al.
, 2017) is an inductive framework that leverages node attribute information to efficiently generate representations for previously unseen data. While our proposed framework is generic, we adopt the feature aggregation scheme proposed in GraphSAGE to model the dependencies between the operations and to build a general, end-to-end device placement method for a wide set of dataflow graphs. In GDP, nodes and edges of the dataflow graph are represented as the concatenation of their meta features (e.g., operation type, output shape, adjacent node ids) and are further encoded by the graph embedding network into a trainable representation. The graph embedding process consists of multiple iterations, and the computation procedure for the $l$-th iteration can be outlined as follows. First, each node $v \in V$ aggregates the feature representations of its neighbors, $\{h_u^{(l)}, \forall u \in N(v)\}$, into a single vector $h_{N(v)}^{(l)}$. This aggregation outcome is a function of all previously generated representations, including the initial representations defined based on the input node features. In this work, we use the following aggregation function with max pooling: $h_{N(v)}^{(l)} = \max\big(\sigma(W^{(l)} h_u^{(l)} + b^{(l)}), \; \forall u \in N(v)\big)$ (2) where $(W^{(l)}, b^{(l)})$ define an affine transform and $\sigma$ stands for the sigmoid activation function. We then concatenate the node's current representation, $h_v^{(l)}$, with the aggregated neighborhood vector, $h_{N(v)}^{(l)}$, and feed this concatenated vector through a fully connected layer $f^{(l+1)}$: $h_v^{(l+1)} = f^{(l+1)}\big(\mathrm{concat}(h_v^{(l)}, h_{N(v)}^{(l)})\big)$ (3) Different from GraphSAGE, the parameters of our graph embedding network are trained jointly with the placement network via stochastic gradient descent with PPO, in a supervised fashion, as described in Section 3. That is, we replace the unsupervised loss with our task-specific objective.
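As an illustration of Equations (2) and (3), here is a minimal numpy sketch of one aggregation iteration. We assume a linear fully connected layer for $f^{(l+1)}$ (any nonlinearity would simply wrap the final product) and a zero vector for nodes without neighbors; all names are ours.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gnn_layer(h, neighbors, W, b, W_f, b_f):
    """One max-pooling aggregation iteration per Eqs. (2)-(3).

    h:         (num_nodes, d) current node representations h^(l).
    neighbors: list of index lists, neighbors[v] = N(v).
    W, b:      affine transform of Eq. (2); W_f, b_f: the layer f^(l+1).
    """
    d = h.shape[1]
    h_next = np.zeros((h.shape[0], W_f.shape[1]))
    for v, nbrs in enumerate(neighbors):
        if nbrs:
            # Eq. (2): transform each neighbor, then element-wise max-pool.
            h_nv = sigmoid(h[nbrs] @ W + b).max(axis=0)
        else:
            h_nv = np.zeros(d)
        # Eq. (3): concatenate with h_v and apply the fully connected layer.
        h_next[v] = np.concatenate([h[v], h_nv]) @ W_f + b_f
    return h_next

rng = np.random.default_rng(0)
h = rng.standard_normal((3, 4))
neighbors = [[1, 2], [0], [0]]
params = [rng.standard_normal(s) for s in [(4, 4), (4,), (8, 4), (4,)]]
h1 = gnn_layer(h, neighbors, *params)
```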
In this paper the authors propose an end-to-end policy for the placement and partitioning of computational graphs produced "under the hood" by platforms like Tensorflow. As the sizes of neural networks increase, distributed deep learning is becoming more and more necessary. Primitives like the one suggested by the authors are very important in many ways, including improving the ability of the NN to process more data, reducing energy consumption, etc. Compared to prior work, the authors propose a method that can take as input more than one dataflow graph, and learns a policy for graph partitioning/placement of the operations on a set of machines that minimizes the makespan. This problem is in principle NP-hard, as it entails both graph partitioning and graph scheduling as its components. The authors propose a heuristic composed of two existing methods: graph neural networks are used to produce an embedding of the computation/dataflow graph, followed by a seq-2-seq placement network. The method is able to generalize to unseen instances.
This work proposes to use a combination of graph neural networks (GNNs) and proximal policy optimization (PPO) to train policies for generalized device placement in dataflow graphs. Essentially, (1) a GNN is used to learn representations of a dataflow graph (in an inductive manner), (2) a transformer is used to output a device placement action for each node in the graph, and (3) the entire system is trained end-to-end via PPO. Extensive experiments show very impressive results compared to strong baselines.
SP:a396624adb04f88f4ba9d10a7968be1926b5d226
CEB Improves Model Robustness
1 INTRODUCTION. We aim to build models that make meaningful predictions beyond the data they were trained on. Generally, we want our models to be robust. Broadly, robustness is the ability of a model to continue making valid predictions as the distribution it is tested on moves away from the empirical training set distribution. The most commonly reported robustness metric is simply test set performance, where we verify that our model continues to make valid predictions on what we hope represents valid draws from the exact same data generating procedure. Adversarial robustness tests robustness in a worst-case setting, where an attacker (Szegedy et al., 2013) makes limited targeted modifications to the input that are as fooling as possible. Many adversarial attacks have been proposed and studied (Szegedy et al., 2013; Carlini & Wagner, 2017b;a; Kurakin et al., 2016a; Madry et al., 2017). Most machine-learned systems are currently believed to be vulnerable to adversarial examples. Many defenses have been proposed, but very few have demonstrated robustness against a powerful, general-purpose adversary (Carlini & Wagner, 2017a; Athalye et al., 2018). While robustness to adversarial attacks continues to attract interest, recent discussions have emphasized the need to consider other forms of robustness as well (Engstrom et al., 2019). The Common Corruptions Benchmark (Hendrycks & Dietterich, 2019) measures image models' robustness to milder but more realistic, real-world perturbations. Even these modest perturbations can be very fooling for traditional architectures. One of the few general-purpose strategies that demonstrably improves model robustness is data augmentation (Cubuk et al., 2018; Lopes et al., 2019; Yin et al., 2019). However, it would be desirable to identify loss-based solutions that can work in tandem with data augmentation approaches. Intuitively, by modifying the inputs at training time, the model is prevented from becoming too sensitive to particular features of the inputs that do not survive the augmentation procedure. Alternatively, we can try to make our models more robust by making them less sensitive to the inputs in the first place. The goal of this work is to experimentally investigate whether, by systematically limiting the complexity of the extracted representation using the Conditional Entropy Bottleneck (CEB), we can make our models more robust in all three of these senses: test set generalization (e.g., classification accuracy on "clean" test inputs), worst-case robustness, and typical-case robustness. 1.1 CONTRIBUTIONS. This paper is primarily empirical. We demonstrate: • CEB models are easy to implement and train. • CEB models demonstrate improved generalization performance over deterministic baselines on CIFAR-10 and ImageNet. • CEB models show improved robustness to adversarial attacks on CIFAR-10. • CEB models show improved robustness on the IMAGENET-C Common Corruptions Benchmark, the IMAGENET-A Benchmark, and targeted PGD attacks. Additionally, we show that adversarially-trained models fail to generalize to attacks they weren't trained on, by comparing the results on L2 PGD attacks from Madry et al. (2017) to our results on the same baseline architecture. This result underscores the importance of finding ways to make models robust that do not rely on knowing the form of the attack ahead of time. 2 BACKGROUND. 2.1 INFORMATION BOTTLENECKS.
The Information Bottleneck (IB) objective (Tishby et al., 2000) aims to learn a stochastic representation Z ~ p(z|x) that retains as much information about a target variable Y as possible while being as compressed as possible. The objective:¹

    max I(Z;Y) − σ(−ρ) I(Z;X),    (1)

uses a Lagrange multiplier σ(−ρ) to trade off between the relevant information, I(Z;Y), and the complexity of the representation, I(Z;X). Because Z depends only on X (Z ← X ↔ Y), Z and Y are conditionally independent given X:

    I(Z; X,Y) = I(Z;X) + I(Z;Y|X) = I(Z;Y) + I(Z;X|Y).    (2)

This allows us to write the Information Bottleneck of Equation (1) in an equivalent form:

    max I(Z;Y) − e^{−ρ} I(Z;X|Y).    (3)

Just as the original Information Bottleneck objective (Equation (1)) admits a natural variational lower bound (Alemi et al., 2017), so does this form. We can variationally lower bound the mutual information between our representation and the targets with a variational decoder q(y|z):

    I(Z;Y) = E_{p(x,y) p(z|x)}[log (p(y|z)/p(y))] ≥ H(Y) + E_{p(x,y) p(z|x)}[log q(y|z)].    (4)

While we may not know H(Y) exactly for real-world datasets, in the information bottleneck formulation it is a constant outside of our control and so can be dropped from the objective. We can variationally upper bound the residual information:

    I(Z;X|Y) = E_{p(x,y) p(z|x)}[log (p(z|x,y)/p(z|y))] ≤ E_{p(x,y) p(z|x)}[log (p(z|x)/q(z|y))],    (5)

with a variational class-conditional marginal q(z|y) that approximates ∫ dx p(z|x) p(x|y). Putting both bounds together gives the Conditional Entropy Bottleneck objective (Fischer, 2018):

    min_{p(z|x)} E_{p(x,y) p(z|x)}[−log q(y|z) + e^{−ρ} log (p(z|x)/q(z|y))].    (6)

Compare this with the Variational Information Bottleneck (VIB) objective (Alemi et al., 2017):

    min_{p(z|x)} E_{p(x,y) p(z|x)}[−log q(y|z) + σ(−ρ) log (p(z|x)/q(z))].    (7)

The difference between CEB and VIB is the presence of a class-conditional versus an unconditional variational marginal. As can be seen in Equation (5), using an unconditional marginal provides a looser variational upper bound on I(Z;X|Y). CEB (Equation (6)) can thus be thought of as a tighter variational approximation than VIB (Equation (7)) to Equation (3). Since Equation (3) is equivalent to the IB objective (Equation (1)), CEB is a tighter variational approximation to the IB objective than VIB.

¹ The IB objective is ordinarily written with a Lagrange multiplier β ≡ σ(−ρ) with a natural range from 0 to 1. Here we use the sigmoid function σ(−ρ) ≡ 1/(1 + e^ρ) to reparameterize in terms of a control parameter ρ on the whole real line. As ρ → ∞ the bottleneck turns off.

2.2 IMPLEMENTING A CEB MODEL .

In practice, turning an existing classifier architecture into a CEB model is very simple. For the stochastic representation p(z|x) we simply use the original architecture, replacing the final softmax layer with a dense layer with d outputs. These outputs are then used to specify the means of a d-dimensional Gaussian distribution with unit diagonal covariance. That is, to form the stochastic representation, independent standard normal noise ε is simply added to the output of the network (z = f(x) + ε). For every input, this stochastic encoder will generate a random d-dimensional output vector.
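As a concrete illustration, here is a minimal sketch of such a stochastic encoder; the encoder callable `f`, its output size `d`, and the NumPy setting are illustrative assumptions rather than details fixed by the paper.

```python
import numpy as np

def stochastic_encode(f, x, train=True):
    """Sketch of the CEB stochastic representation: the network output
    parameterizes the mean of a unit-diagonal-covariance Gaussian, so
    sampling reduces to adding standard normal noise, z = f(x) + eps.
    At test time the mean encoding is used and the noise is removed."""
    mean = f(x)                          # (batch, d) encoder outputs
    if not train:
        return mean                      # deterministic mean encoding
    eps = np.random.randn(*mean.shape)   # independent standard normal noise
    return mean + eps
```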
For the variational classifier q(y|z) any classifier network can be used, including just a linear softmax classifier, as done in these experiments. For the variational conditional marginal q(z|y) it helps to use the same distribution as output by the classifier. For the simple unit-variance Gaussian encoding we use in these experiments, this requires learning just d parameters per class. For ease of implementation, this can be represented as a single dense linear layer mapping from a one-hot representation of the labels to the d-dimensional output, interpreted as the mean of the corresponding class marginal. In this setup the CEB loss takes a particularly simple form:

    E[ −( w_y · (f(x) + ε) − log Σ_{y′} e^{w_{y′} · (f(x) + ε)} ) + (e^{−ρ}/2) (f(x) − μ_y) · (f(x) − μ_y + 2ε) ].    (8)

Here the first term is the usual softmax classifier loss, but acting on our stochastic representation z = f(x) + ε, which is simply the output of our encoder network f(x) with additive Gaussian noise. w_y is the y-th row of weights in the final linear layer outputting the logits. μ_y are the learned class-conditional means for our marginal. ε are standard normal draws from an isotropic unit-variance Gaussian with the same dimension as our encoding f(x). The second term in the loss is a stochastic sample of the KL divergence between the encoder likelihood and the class-conditional marginal likelihood. ρ controls the strength of the bottleneck and can vary over the whole real line; as ρ → ∞ the bottleneck is turned off. In practice we find that ρ values near but above 0 tend to work best for modest-size models, with the best ρ approaching 0 as model capacity increases. Notice that in expectation the second term of the loss is proportional to ‖f(x) − μ_y‖², which encourages the learned means μ_y to converge to the average of the representations of the elements of each class. During testing we use the mean encodings and remove the stochasticity. In its simplest form, training a classifier with CEB amounts to injecting Gaussian random noise in the penultimate layer and learning estimates of the class-averaged output of that layer under the stochastic regularization shown. In Appendix B we show simple modifications to the TPU-compatible ResNet implementation available on GitHub from the Google TensorFlow Team that produce the same core ResNet-50 models we use for our ImageNet experiments.

2.3 ADVERSARIAL ATTACKS AND DEFENSES .

Attacks. The first adversarial attacks were proposed in Szegedy et al. (2013); Goodfellow et al. (2015). Since those seminal works, an enormous variety of attacks has been proposed (Kurakin et al. (2016a;b); Moosavi-Dezfooli et al. (2016); Carlini & Wagner (2017b); Madry et al. (2017); Eykholt et al. (2017); Baluja & Fischer (2017), etc.). In this work, we primarily consider the Projected Gradient Descent (PGD) attack (Madry et al., 2017), a multi-step variant of the early Fast Gradient Method (Goodfellow et al., 2015). The attack can be viewed as having four parameters: p, the norm of the attack (typically 2 or ∞); ε, the radius of the p-norm ball within which the attack is permitted to modify an input; n, the number of gradient steps the adversary is permitted to take; and ε_i, the per-step limit on modifications of the current input. In this work, we consider L2 and L∞ attacks of varying ε and n, with ε_i = (4/3) ε/n.
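For concreteness, the following is a rough sketch of an L∞ PGD attack with the per-step budget described above; `grad_fn`, which is assumed to return the gradient of the loss with respect to the input, and the random start inside the ε-ball are illustrative choices, not specifics from the paper.

```python
import numpy as np

def pgd_linf(grad_fn, x, y, eps=8/255, n=20):
    """L-infinity PGD sketch: take n signed gradient ascent steps of size
    eps_i = (4/3) * eps / n, projecting back into the eps-ball and the
    valid pixel range after each step."""
    step = (4.0 / 3.0) * eps / n
    x_adv = x + np.random.uniform(-eps, eps, size=x.shape)  # random start in the ball
    for _ in range(n):
        x_adv = x_adv + step * np.sign(grad_fn(x_adv, y))   # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)            # project onto the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)                    # stay in valid image range
    return x_adv
```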
Defenses. A common defense against adversarial examples is adversarial training. Adversarial training was originally proposed in Szegedy et al. (2013), but was not practical until the Fast Gradient Method was introduced. It has since been studied in detail with varied techniques (Kurakin et al., 2016b; Madry et al., 2017; Ilyas et al., 2019; Xie et al., 2019). Adversarial training can be viewed as a form of data augmentation (Tsipras et al., 2018): instead of using some fixed set of functions to modify the training examples, we use the model itself, in combination with one or more adversarial attacks, to modify the training examples, and as the model changes, the distribution of modifications changes as well. However, unlike non-adversarial data augmentation techniques such as AUTOAUG, the adversarial training techniques considered in the literature so far cause substantial reductions in accuracy on clean test sets. For example, the CIFAR-10 model described in Madry et al. (2017) gets 95.5% accuracy when trained normally, but only 87.3% when trained on L∞ adversarial examples. More recently, Xie et al. (2019) adversarially trained ImageNet models with impressive robustness to targeted PGD L∞ attacks, but at only 62.32% accuracy on the non-adversarial test set, compared to 78.81% for the same model trained only on clean images.
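The data-augmentation view of adversarial training sketched above amounts to a short training loop; the `attack` and `train_step` callables are assumed placeholders (e.g., the PGD sketch above and an ordinary supervised update), not an API from the paper.

```python
def adversarial_training_epoch(batches, attack, train_step):
    """One epoch of adversarial training viewed as model-dependent data
    augmentation: each clean batch is replaced by adversarial examples
    crafted against the current model, so the distribution of input
    modifications shifts as the model is updated."""
    for x, y in batches:
        x_adv = attack(x, y)      # craft adversarial examples for this batch
        train_step(x_adv, y)      # standard supervised update on them
```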
The paper modifies existing classifier architectures and their training objective to minimize the Conditional Entropy Bottleneck (CEB) objective, in an attempt to force the representation toward the optimum of the information bottleneck objective. The paper claims that the resulting CEB models improve clean test accuracy as well as robustness against adversarial attacks and common corruptions, compared to the softmax + cross-entropy counterparts. This claim is supported by experimental results on the CIFAR-10 and ImageNet-C datasets.
SP:caca11294236433df3e4a14e0ae263ef332372c9
This paper studied the effectiveness of the Conditional Entropy Bottleneck (CEB) in improving model robustness. Three tasks are considered to demonstrate its effectiveness: generalization on clean test images, on adversarially perturbed images, and on images corrupted by various synthetic noises. The experimental results demonstrate that CEB improves model robustness on all considered tasks over the deterministic baseline and over adversarially-trained classifiers.
SP:caca11294236433df3e4a14e0ae263ef332372c9
Inductive representation learning on temporal graphs
1 INTRODUCTION .

The technique of learning lower-dimensional vector embeddings on graphs has been widely applied to graph analysis tasks (Perozzi et al., 2014; Tang et al., 2015; Wang et al., 2016) and deployed in industrial systems (Ying et al., 2018; Wang et al., 2018a). Most graph representation learning approaches only accept static or non-temporal graphs as input, despite the fact that many graph-structured data are time-dependent. In social networks, citation networks, question answering forums and user-item interaction systems, graphs are created from temporal interactions between nodes. Using the final state as a static portrait of the graph is reasonable in some cases, such as a protein-protein interaction network, as long as the node interactions are timeless in nature. Otherwise, ignoring the temporal information can severely diminish the modelling effort and even cause questionable inferences. For instance, if the temporal constraints are disregarded, models may mistakenly utilize future information for predicting past interactions during training and testing. More importantly, the dynamic and evolving nature of many graph-related problems demands explicit modelling of timeliness whenever nodes and edges are added, deleted or changed over time.

Learning representations on temporal graphs is extremely challenging, and it is only recently that several solutions were proposed (Nguyen et al., 2018; Li et al., 2018; Goyal et al., 2018; Trivedi et al., 2018). We summarize the challenges as threefold. Firstly, to model the temporal dynamics, node embeddings should not only be projections of topological structures and node features but also functions of continuous time; therefore, in addition to the usual vector space, temporal representation learning should operate in some functional space as well. Secondly, graph topological structures are no longer static, since nodes and edges evolve over time, which poses temporal constraints on neighborhood aggregation methods. Thirdly, node features and topological structures can exhibit temporal patterns. For example, node interactions that took place long ago may have less impact on the current topological structure and thus on the node embeddings. Also, some nodes may possess features that allow them to have more regular or recurrent interactions with others. We provide sketched plots for visual illustration in Figure 1.

Similar to their non-temporal counterparts, models for representation learning on temporal graphs should, in real-world applications, be able to quickly generate embeddings whenever required, in an inductive fashion. GraphSAGE (Hamilton et al., 2017a) and the graph attention network (GAT) (Veličković et al., 2017) are capable of inductively generating embeddings for unseen nodes based on their features; however, they do not consider temporal factors. Most temporal graph embedding methods can only handle transductive tasks, since they require re-training or computationally-expensive gradient calculations to infer embeddings for unseen nodes or node embeddings at a new timepoint. In this work, we aim to develop an architecture that inductively learns representations for temporal graphs, such that time-aware embeddings (for unseen and observed nodes) can be obtained via a single network forward pass.
The key to our approach is the combination of the self-attention mechanism (Vaswani et al., 2017) and a novel functional time encoding technique derived from Bochner's theorem of classical harmonic analysis (Loomis, 2013). The motivation for adapting self-attention to inductive representation learning on temporal graphs is to identify and capture the relevant pieces of temporal neighborhood information. Both the graph convolutional network (GCN) (Kipf & Welling, 2016a) and GAT implicitly or explicitly assign different weights to neighboring nodes (Veličković et al., 2017) when aggregating node features. The self-attention mechanism was initially designed to recognize the relevant parts of an input sequence in natural language processing. As a discrete-event sequence learning method, self-attention outputs a vector representation of the input sequence as a weighted sum of individual entry embeddings. Self-attention enjoys several advantages such as parallelized computation and interpretability (Vaswani et al., 2017). However, since it captures sequential information only through the positional encoding, temporal features cannot be handled. We are therefore motivated to replace the positional encoding with some vector representation of time. Since time is a continuous variable, the mapping from the time domain to the vector space has to be functional. We gain insights from harmonic analysis and propose a theoretically-grounded functional time encoding approach that is compatible with the self-attention mechanism. The temporal signals are then modelled by the interactions between the functional time encoding, the node features and the graph topological structures.

To evaluate our approach, we consider future link prediction on observed nodes as the transductive learning task, and on unseen nodes as the inductive learning task. We also examine the dynamic node classification task, using node embeddings (temporal versus non-temporal) as features, to demonstrate the usefulness of our functional time encoding. We carry out extensive ablation studies and sensitivity analyses to show the effectiveness of the proposed functional time encoding and the TGAT layer.

2 RELATED WORK .

Graph representation learning. Spectral graph embedding models operate in the graph spectral domain by approximating, projecting or expanding the graph Laplacian (Kipf & Welling, 2016a; Henaff et al., 2015; Defferrard et al., 2016). Since their training and inference are conditioned on the specific graph spectrum, they do not directly extend to temporal graphs. Non-spectral approaches, such as GAT, GraphSAGE and MoNet (Monti et al., 2017), rely on localized neighbourhood aggregations and are thus not restricted to the training graph. GraphSAGE and GAT also have the flexibility to handle evolving graphs inductively. To extend classical graph representation learning approaches to the temporal domain, several attempts have been made by cropping the temporal graph into a sequence of graph snapshots (Li et al., 2018; Goyal et al., 2018; Rahman et al., 2018; Xu et al., 2019b), while others work with temporally persistent nodes (edges) (Trivedi et al., 2018; Ma et al., 2018). Nguyen et al. (2018) propose a node embedding method based on temporal random walks and report state-of-the-art performance. However, their approach only generates embeddings for the final state of the temporal graph and cannot be directly applied in the inductive setting.
Self-attention mechanism. Self-attention mechanisms often have two components: the embedding layer and the attention layer. The embedding layer takes an ordered entity sequence as input. Self-attention uses positional encoding, i.e., each position k is equipped with a vector p_k (fixed or learnt) which is shared across all sequences. For the entity sequence e = (e_1, ..., e_l), the embedding layer takes the sum or the concatenation of the entity embeddings (or features, z ∈ R^d) and their positional encodings as input:

    Z_e = [z_{e_1} + p_1, ..., z_{e_l} + p_l]^T ∈ R^{l×d}, or Z_e = [z_{e_1} || p_1, ..., z_{e_l} || p_l]^T ∈ R^{l×(d+d_pos)},    (1)

where || denotes the concatenation operation and d_pos is the dimension of the positional encoding. Self-attention layers can be constructed using the scaled dot-product attention, which is defined as:

    Attn(Q, K, V) = softmax(QK^T / √d) V,    (2)

where Q denotes the 'queries', K the 'keys' and V the 'values'. In Vaswani et al. (2017), they are treated as projections of Z_e: Q = Z_e W_Q, K = Z_e W_K, V = Z_e W_V, where W_Q, W_K and W_V are the projection matrices. Since each row of Q, K and V represents an entity, the dot-product attention takes a weighted sum of the entity 'values' in V, where the weights are given by the interactions of the entity 'query-key' pairs. The hidden representation for the entity sequence under the dot-product attention is then given by h_e = Attn(Q, K, V).

3 TEMPORAL GRAPH ATTENTION NETWORK ARCHITECTURE .

We first derive the mapping from the time domain to a continuous differentiable functional domain, the functional time encoding, such that the resulting formulation is compatible with the self-attention mechanism as well as with backpropagation-based optimization frameworks. The same idea was explored in a concurrent work (Xu et al., 2019a). We then present the temporal graph attention layer and show how it can be naturally extended to incorporate edge features.

3.1 FUNCTIONAL TIME ENCODING .

Recall that our starting point is to obtain a continuous functional mapping Φ : T → R^{d_T} from the time domain to a d_T-dimensional vector space, to replace the positional encoding in (1). Without loss of generality, we assume that the time domain can be represented by an interval starting from the origin: T = [0, t_max], where t_max is determined by the observed data. For the inner-product self-attention in (2), the 'key' and 'query' matrices (K, Q) are often given by identity or linear projections of Z_e defined in (1), leading to terms that only involve inner products between positional (time) encodings. Consider two time points t_1, t_2 and the inner product between their functional encodings, ⟨Φ(t_1), Φ(t_2)⟩. Usually the relative timespan, rather than the absolute value of time, reveals the critical temporal information. We are therefore more interested in learning patterns related to the timespan |t_2 − t_1|, which should ideally be expressed by ⟨Φ(t_1), Φ(t_2)⟩ to be compatible with self-attention. Formally, we define the temporal kernel K : T × T → R with K(t_1, t_2) := ⟨Φ(t_1), Φ(t_2)⟩ and K(t_1, t_2) = ψ(t_1 − t_2), ∀ t_1, t_2 ∈ T, for some ψ : [−t_max, t_max] → R. The temporal kernel is then translation-invariant, since K(t_1 + c, t_2 + c) = ψ(t_1 − t_2) = K(t_1, t_2) for any constant c.
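For reference, the scaled dot-product attention of Eq. (2), into which the time encoding must eventually plug, can be sketched in a few lines; the NumPy setting is an illustrative assumption.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Eq. (2): softmax(Q K^T / sqrt(d)) V, computed row-wise with a
    numerically stable softmax."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # query-key interactions
    scores -= scores.max(axis=-1, keepdims=True)    # stabilize the softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # attention weights
    return weights @ V                              # weighted sum of values
```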
Generally speaking, functional learning is extremely complicated since it operates on infinite-dimensional spaces, but we have now transformed the problem into learning the temporal kernel K expressed by Φ. Nonetheless, we still need an explicit parameterization of Φ in order to conduct efficient gradient-based optimization. Classical harmonic analysis, i.e., Bochner's theorem, motivates our final solution. We point out that the temporal kernel K is positive-semidefinite (PSD) and continuous, since it is defined via a Gram matrix and the mapping Φ is continuous. Therefore, the kernel K defined above satisfies the assumptions of Bochner's theorem, which we state below.

Theorem 1 (Bochner's Theorem). A continuous, translation-invariant kernel K(x, y) = ψ(x − y) on R^d is positive definite if and only if there exists a non-negative measure on R such that ψ is the Fourier transform of the measure.

Consequently, when scaled properly, our temporal kernel K has the alternative expression:

    K(t_1, t_2) = ψ(t_1 − t_2) = ∫_R e^{iω(t_1 − t_2)} p(ω) dω = E_ω[ξ_ω(t_1) ξ_ω(t_2)*],    (3)

where ξ_ω(t) = e^{iωt}. Since the kernel K and the probability measure p(ω) are real, we extract the real part of (3) and obtain:

    K(t_1, t_2) = E_ω[cos(ω(t_1 − t_2))] = E_ω[cos(ωt_1)cos(ωt_2) + sin(ωt_1)sin(ωt_2)].    (4)

The above formulation suggests approximating the expectation by a Monte Carlo integral (Rahimi & Recht, 2008), i.e., K(t_1, t_2) ≈ (1/d) Σ_{i=1}^d [cos(ω_i t_1)cos(ω_i t_2) + sin(ω_i t_1)sin(ω_i t_2)], with ω_1, ..., ω_d i.i.d. ∼ p(ω). We therefore propose the finite-dimensional functional mapping Φ_d : T → R^{2d}:

    t ↦ Φ_d(t) := √(1/d) [cos(ω_1 t), sin(ω_1 t), ..., cos(ω_d t), sin(ω_d t)],    (5)

and it is easy to show that ⟨Φ_d(t_1), Φ_d(t_2)⟩ ≈ K(t_1, t_2). In fact, we prove the stochastic uniform convergence of ⟨Φ_d(t_1), Φ_d(t_2)⟩ to the underlying K(t_1, t_2) and show that only a reasonable number of samples is needed to achieve a proper estimate, as stated in Claim 1.

Claim 1. Let p(ω) be the probability measure corresponding to the kernel K in Bochner's Theorem, and suppose the feature map Φ is constructed as described above using samples {ω_i}_{i=1}^d. Then we only need d = Ω((1/ε²) log(σ_p² t_max / ε)) samples to have

    sup_{t_1, t_2 ∈ T} |Φ_d(t_1)′ Φ_d(t_2) − K(t_1, t_2)| < ε

with any probability, for all ε > 0, where σ_p² is the second moment of p(ω).

The proof is provided in the supplementary material. By applying Bochner's theorem, we convert the problem of kernel learning into distribution learning, i.e., estimating p(ω) in Theorem 1. A straightforward solution is to apply the reparameterization trick, using auxiliary random variables with a known marginal distribution, as in variational autoencoders (Kingma & Welling, 2013). However, the reparameterization trick is often limited to certain distributions, such as the location-scale family, which may not be rich enough for our purpose. For instance, when p(ω) is multimodal it is difficult to reconstruct the underlying distribution via direct reparameterization. An alternative approach is to use the inverse cumulative distribution function (CDF) transformation. Rezende & Mohamed (2015) propose using parameterized normalizing flows, i.e.,
a sequence of invertible transformation functions, to approximate arbitrarily complicated CDFs and sample from them efficiently. Dinh et al. (2016) further consider stacking bijective transformations, known as affine coupling layers, to achieve more effective CDF estimation. These methods learn the inverse CDF F_θ^{−1}(·), parameterized by flow-based networks, and draw samples from the corresponding distribution. On the other hand, if we consider a non-parametric approach to estimating the distribution, then learning F^{−1}(·) and obtaining d samples from it is equivalent to directly optimizing {ω_1, ..., ω_d} in (4) as free model parameters. In practice, we find these two approaches to have highly comparable performance (see the supplementary material). We therefore focus on the non-parametric approach, since it is more parameter-efficient and trains faster (as no sampling is required during training). The above functional time encoding is fully compatible with self-attention: it can replace the positional encodings in (1), and its parameters are jointly optimized as part of the whole model.
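A minimal sketch of the non-parametric variant of the functional time encoding of Eq. (5), with the frequencies held as free (trainable) parameters, together with a Monte Carlo sanity check of Eq. (4); the NumPy setting and the choice p(ω) = N(0, 1) are illustrative assumptions.

```python
import numpy as np

def time_encoding(t, omegas):
    """Eq. (5): map each time in the (batch,) array t to
    sqrt(1/d) * [cos(w_i t), sin(w_i t)]_i, so that inner products of
    encodings approximate the translation-invariant temporal kernel."""
    phases = np.outer(t, omegas)                                  # (batch, d)
    feats = np.concatenate([np.cos(phases), np.sin(phases)], axis=1)
    return np.sqrt(1.0 / len(omegas)) * feats

# Sanity check: with w ~ N(0, 1), Eq. (4) gives K(t1, t2) = E[cos(w (t1 - t2))]
# = exp(-(t1 - t2)^2 / 2), the Fourier transform of the standard Gaussian.
omegas = np.random.randn(2048)          # frequencies drawn from p(w) = N(0, 1)
phi = time_encoding(np.array([0.3, 1.7]), omegas)
print(phi[0] @ phi[1], np.exp(-0.5 * (0.3 - 1.7) ** 2))  # both close to ~0.375
```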
This paper addresses the problem of representation learning for temporal graphs, that is, graphs whose topology can evolve over time. The contribution is a temporal graph attention (TGAT) layer that aims to exploit learned temporal dynamics of graph evolution in tasks such as node classification and link prediction. This TGAT layer can work in an inductive manner, unlike much prior work which is restricted to the transductive setting. Specifically, a temporal kernel is introduced to generate time-related features and is incorporated into the self-attention mechanism. The results on some standard and new graph-structured benchmarks show improved performance vs. a variety of baselines in both transductive and inductive settings.
SP:50073cbe6ab4b44b3c68f141542c1e81df0c5f61
This paper proposed the temporal graph attention layer, which aggregates one-hop neighborhood features with self-attention and incorporates temporal information with a Fourier-based relative positional encoding. This idea is novel in the GCN field. Experimental results demonstrate that TGAT, which adds the temporal encoding, outperforms the other methods. Overall, this paper presents its core ideas clearly and provides proper experiments and analysis to demonstrate its superiority over existing counterparts.
SP:50073cbe6ab4b44b3c68f141542c1e81df0c5f61
Graph Neural Networks for Reasoning 2-Quantified Boolean Formulas
1 INTRODUCTION .

As deep learning achieves astonishing results in the domains of image (He et al., 2016) and audio (Hannun et al., 2014) processing, natural language (Vaswani et al., 2017), and discrete heuristic decisions in games (Silver et al., 2017), there is profound interest in applying the relevant techniques to the field of logical reasoning. Logical reasoning problems span from simple propositional logic to complex predicate logic and higher-order logic, with known theoretical complexities ranging from NP-complete (Cook, 1971) to semi-decidable and undecidable (Church, 1936). Testing the abilities and limitations of machine learning tools on logical reasoning problems leads to a fundamental understanding of the boundary of learnability and robust AI, and addresses interesting questions about decision procedures in logic, symbolic reasoning, and program analysis and verification as defined in the programming languages community.

There have been some successes in learning propositional logic reasoning (Selsam et al., 2019; Amizadeh et al., 2019), which focus on SAT (Boolean Satisfiability) problems as defined below. A propositional logic formula is an expression composed of Boolean constants (⊤: true, ⊥: false), Boolean variables (x_i), and propositional connectives such as ∧, ∨, ¬ (for example (x1 ∨ ¬x2) ∧ (¬x1 ∨ x2)). The SAT problem asks whether a given formula can be satisfied (evaluated to ⊤) by assigning proper Boolean values to its variables. A crucial feature of the logical reasoning domain (visible in the SAT problem) is that the inputs are often structural, where logical connections between entities (variables in SAT problems) are the key information. Accordingly, previous successes have used GNNs (Graph Neural Networks) and message-passing-based embeddings to solve SAT problems.

However, it should be noted that logical decision procedures are more complex than just reading the formulas correctly. It is unclear whether GNN embeddings (via simple message-passing) contain all the information needed to reason about complex logical questions on top of the graph structures derived from the formulas, or whether such complex embedding schemes can be learned via backpropagation. Previous successes on SAT problems argued for the power of GNNs, which can handle NP-complete problems (Selsam et al., 2019; Amizadeh et al., 2019), but no successes have been reported for solving semi-decidable predicate logic problems via GNNs. In order to find out where the limitation of GNNs lies, and why, in learning logical reasoning problems, we look at problems with complexity in between SAT and predicate logic problems, for which QBF (Quantified Boolean Formula) problems serve as excellent middle steps. QBF extends propositional formulas by allowing quantifiers (∀ and ∃) over the Boolean variables (such as ∀x1∃x2. (x1 ∨ ¬x2) ∧ (¬x1 ∨ x2)). In general, a quantified Boolean formula in prenex normal form can be expressed as:

    Q_i X_i Q_{i−1} X_{i−1} ... Q_0 X_0 . φ

where the Q_i are quantifiers that always differ from their neighboring quantifiers, the X_i are disjoint sets of Boolean variables, and φ is a propositional formula with all Boolean variables bound by the Q_i. Complexity-wise, QBF problems are PSPACE-complete (Kleine Büning & Bubeck, 2009), which lies in between the NP-completeness of SAT problems and the semi-decidability of predicate logic problems.
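To make the ∀∃ semantics and the notion of an unsatisfiability witness concrete, here is a brute-force 2QBF evaluator; it is exponential and purely illustrative, and the DIMACS-style clause encoding is an assumption, not the paper's representation.

```python
from itertools import product

def solve_2qbf(n_forall, n_exists, clauses):
    """Evaluate a forall-exists 2QBF instance by enumeration. Clauses are
    lists of signed ints; variables 1..n_forall are universally quantified,
    n_forall+1..n_forall+n_exists existentially quantified. Returns
    (True, None) if the formula is true, else (False, witness), where the
    witness is a universal assignment with no satisfying existential completion."""
    def satisfied(assign):
        return all(any(assign[abs(lit)] == (lit > 0) for lit in clause)
                   for clause in clauses)
    for xs in product([False, True], repeat=n_forall):
        universal = {i + 1: v for i, v in enumerate(xs)}
        completions = (
            {**universal, **{n_forall + j + 1: w for j, w in enumerate(ys)}}
            for ys in product([False, True], repeat=n_exists))
        if not any(satisfied(a) for a in completions):
            return False, xs  # xs is an unsatisfiability witness
    return True, None
```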
Furthermore, 2-QBF (QBF with only two alternating quantifiers) is Σ₂^P-complete (Kleine Büning & Bubeck, 2009). Another direction for addressing logical reasoning problems via machine learning is to learn heuristic decisions within traditional decision procedures. This direction is less appealing from a theoretical perspective but more interesting from a practical one, since it has been shown to speed up SAT solvers in practical settings (Selsam & Bjørner, 2019). In this direction, there is less concern about the embedding power of GNNs, and more about the design of the training procedures (what data and labels to train on) and how to incorporate the trained models within the decision procedures. The embeddings captured via GNNs are in fact preferred to be lossy, to prevent overfitting (Selsam & Bjørner, 2019).

In this paper we explore the potential applications of GNNs to 2QBF problems. In Section 2, we illustrate our designs of GNN architectures for embedding 2QBF. In Section 3, we evaluate GNN-based 2QBF solvers, and conjecture with empirical evidence that current GNN techniques are unable to learn complete SAT or 2QBF solvers. In Section 4, we demonstrate the potential of our GNN-based heuristics for selecting candidates and counter-examples in the CEGAR-based solver framework. In Section 5, we discuss related work, and we conclude in Section 6. Throughout the paper we redirect details to the supplementary materials. We make the following contributions:
1. Design and test possible GNN architectures for embedding 2QBF.
2. Pinpoint the limitation of GNNs in learning logical decision procedures that require reasoning about a space of Boolean assignments.
3. Learn GNN-based CEGAR solver heuristics via supervised learning and uncover interesting challenges for GNNs in generalizing across graph structures.

2 GNN EMBEDDING OF PROPOSITIONAL LOGICAL FORMULAS .

Preliminary: Graph Neural Networks. GNNs refer to the neural architectures devised to learn the embeddings of nodes and graphs via message-passing. Resembling the generic definition in Xu et al. (2019), they consist of two successive operators that propagate messages and evolve the embeddings over iterations:

    m_v^{(k)} = Aggregate^{(k)}({h_u^{(k−1)} : u ∈ N(v)}),  h_v^{(k)} = Combine^{(k)}(h_v^{(k−1)}, m_v^{(k)}),    (1)

where h_v^{(k)} denotes the hidden state (embedding) of node v at the k-th layer/iteration, and N(v) denotes the neighbors of node v. In each iteration, Aggregate^{(k)}(·) aggregates the hidden states of node v's neighbors to produce the new message m_v^{(k)} for node v, and Combine^{(k)}(·,·) computes the new embedding of v from its previous state and its current message. After a given number of iterations (e.g., K), the embeddings should capture the global relational information of the nodes and can be fed into other neural network modules for specific tasks.

GNN Architecture for Embedding SAT Formulas. The previous success (Selsam et al., 2019) of GNN-based SAT solvers embedded SAT formulas as follows. Each SAT formula is translated into a bipartite graph, where one kind of node represents the literals (Boolean variables and their negations, denoted L), and the other kind represents the clauses (sets of literals connected via ∨, denoted C).
GNN Architecture for Embedding SAT Formulas. A previous success of GNN-based SAT solvers (Selsam et al., 2019) embedded SAT formulas as follows. Each SAT formula is translated into a bipartite graph, where one kind of node represents the literals (Boolean variables and their negations, denoted L) and the other kind represents the clauses (sets of literals connected via ∨, denoted C). Edges between literal and clause nodes represent the occurrence of a literal in a clause, and all such edges are captured by a sparse adjacency matrix (the EdgeMatrix E) of dimension |C| × |L|. There is also another kind of edge connecting each literal with its negation. [Figure: the bipartite graph of (x1 ∨ ¬x2) ∧ (¬x1 ∨ x2), with clause nodes C1, C2 connected to literal nodes x1, x2, ¬x1, ¬x2.] Note that this architecture is specific to propositional formulas in Conjunctive Normal Form (CNF), i.e., formulas composed of clauses connected via ∧.

The embeddings of literals and clauses are initialized with tiled random vectors. The GNN then uses MLPs to compute the messages of literals and clauses from the embeddings, and LSTMs to update the embeddings with the aggregated messages. One iteration of message-passing is given below, where Emb_L and Emb_C denote the embedding matrices of literals and clauses respectively, Msg_{X→Y} denotes the messages from X to Y, MLP_X denotes the MLP of X for generating messages from the embeddings, LSTM_X denotes the LSTM of X for digesting incoming messages and updating the embeddings, and ·, ^T and [ , ] represent matrix multiplication, transposition and concatenation respectively. Furthermore, Emb_{¬L} denotes a permuted view of Emb_L such that the same row of Emb_L and Emb_{¬L} holds the embeddings of a variable and of its negation respectively.

Msg_{L→C} = E · MLP_L(Emb_L)                   # aggregate clauses
Emb_C = LSTM_C(Emb_C, Msg_{L→C})               # combine clauses
Msg_{C→L} = E^T · MLP_C(Emb_C)                 # aggregate literals
Emb_L = LSTM_L(Emb_L, [Msg_{C→L}, Emb_{¬L}])   # combine literals   (2)

Note that different instances of the MLPs and LSTMs are used for clauses and literals (they carry different subscripts). Moreover, Emb_{¬L} is used as an additional message when updating Emb_L.

GNN Architectures for Embedding 2QBF. The difference between SAT formulas and 2QBF is that in 2QBF the variables are quantified by ∀ or ∃. To reflect this difference in the graph representation, we separate ∀-literals and ∃-literals into different groups of nodes. [Figure: the graph of ∀x1∃x2. (x1 ∨ ¬x2) ∧ (¬x1 ∨ x2), with clause nodes C1, C2, ∀-literal nodes x1, ¬x1, and ∃-literal nodes x2, ¬x2.] Accordingly, in the GNN architectures the separated ∀-literals and ∃-literals are embedded via different modules. The design closely follows the design philosophy of Selsam et al. (2019) in terms of permutation invariance and negation invariance, and is therefore the most likely to carry the success of GNNs on SAT problems over to 2QBF problems.

Msg_{L→C} = [E_∀ · MLP_∀(Emb_∀), E_∃ · MLP_∃(Emb_∃)]   # aggregate clauses
Emb_C = LSTM_C(Emb_C, Msg_{L→C})                        # combine clauses
Msg_{C→∀} = E_∀^T · MLP_{C→∀}(Emb_C)                    # aggregate ∀
Emb_∀ = LSTM_∀(Emb_∀, [Msg_{C→∀}, Emb_{¬∀}])            # combine ∀
Msg_{C→∃} = E_∃^T · MLP_{C→∃}(Emb_C)                    # aggregate ∃
Emb_∃ = LSTM_∃(Emb_∃, [Msg_{C→∃}, Emb_{¬∃}])            # combine ∃   (3)

Note that we use ∀ and ∃ to denote all ∀-literals and all ∃-literals respectively, E_X to denote the EdgeMatrix between X and C, and MLP_{C→X} to denote the MLP that generates Msg_{C→X}. We in fact tested more GNN architectures for 2QBF (see supplementary material A.1), but the model above performed best in our later evaluation, so we use it in the main paper.
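A minimal PyTorch sketch of one iteration of Equation (3) follows. The module names, the per-node random initialization (the paper tiles a single random vector instead), and the toy sizes are our own simplifications, not the paper's implementation.

```python
import torch
import torch.nn as nn

d = 64  # embedding width

def mlp():
    return nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))

mlp_A, mlp_E, mlp_CA, mlp_CE = mlp(), mlp(), mlp(), mlp()
lstm_C = nn.LSTMCell(2 * d, d)  # clauses receive [∀-messages, ∃-messages]
lstm_A = nn.LSTMCell(2 * d, d)  # ∀-literals receive [clause msg, negation emb]
lstm_E = nn.LSTMCell(2 * d, d)  # ∃-literals receive [clause msg, negation emb]

def iteration(E_A, E_E, emb_A, emb_E, emb_C, flip_A, flip_E):
    # E_A: (|C|, num ∀-literals) edge matrix; E_E: (|C|, num ∃-literals).
    # emb_X are (hidden, cell) LSTM state pairs; flip_X maps each
    # literal's row index to the row index of its negation.
    msg_C = torch.cat([E_A @ mlp_A(emb_A[0]), E_E @ mlp_E(emb_E[0])], dim=-1)
    emb_C = lstm_C(msg_C, emb_C)
    emb_A = lstm_A(torch.cat([E_A.T @ mlp_CA(emb_C[0]), emb_A[0][flip_A]], dim=-1), emb_A)
    emb_E = lstm_E(torch.cat([E_E.T @ mlp_CE(emb_C[0]), emb_E[0][flip_E]], dim=-1), emb_E)
    return emb_A, emb_E, emb_C

# Toy usage: 8 ∀-vars (16 literals), 10 ∃-vars (20 literals), 30 clauses.
nA, nE, nC = 16, 20, 30
flip = lambda n: torch.cat([torch.arange(n // 2, n), torch.arange(n // 2)])
state = lambda n: (torch.randn(n, d), torch.zeros(n, d))
emb_A, emb_E, emb_C = state(nA), state(nE), state(nC)
E_A = (torch.rand(nC, nA) < 0.1).float()
E_E = (torch.rand(nC, nE) < 0.1).float()
for _ in range(16):  # message-passing iterations
    emb_A, emb_E, emb_C = iteration(E_A, E_E, emb_A, emb_E, emb_C, flip(nA), flip(nE))
```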
3 GNN-BASED SOLVERS FAIL ON 2QBF PROBLEMS.

In the previous section, we discussed GNN-based embeddings of propositional logical formulas. We now test whether GNN-based 2QBF solvers can be learned, following the previous successes (Selsam et al., 2019; Amizadeh et al., 2019).

3.1 EMPIRICAL STUDY OF REASONING ABOUT 2QBF BY GNNs.

Data Preparation. For training and testing, we follow previous work (Chen & Interian, 2005) and generate random 2QBF formulas of specs (2,3) and sizes (8,10). That is to say, each clause has 5 literals: 2 of them are randomly chosen from a set of 8 ∀-quantified variables, and 3 of them are randomly chosen from a set of 10 ∃-quantified variables. We modify the generation procedure so that it keeps generating clauses until the formula becomes unsatisfiable. We then randomly negate one ∃-quantified literal per formula to get a very similar but satisfiable formula (a sketch of this generator follows at the end of this subsection).

Predicting Satisfiability. We first tested whether our graph embeddings can be used to predict the satisfiability of 2QBF formulas. We extended the GNN architectures with a voting MLP (MLP_vote) that takes the embeddings of the ∀-variables after propagation and uses the average vote as the logit for the satisfiability/unsatisfiability prediction: logits_sat = mean(MLP_vote(Emb_∀)). We trained our GNNs with different amounts of data (40, 80, and 160 pairs of satisfiable/unsatisfiable formulas) and different numbers of message-passing iterations (8, 16, and 32), and then evaluated the converged models on 600 pairs of new instances. We report the accuracies on unsatisfiable and satisfiable formulas as tuples for both the training and the testing dataset. Varying the random seeds, the models with the best training performance were selected; they are shown in Table 1. Since the formulas in each satisfiable/unsatisfiable pair differ by only one literal, the task forces the GNNs to learn subtle structural differences between the formulas. The GNNs fit the smaller training datasets well but struggle with 160 pairs of formulas (numbers in green). The performance of the models on the testing dataset is close to random guessing (numbers in blue), and running more iterations during testing does not improve it.

Predicting Unsatisfiability Witnesses. Previous work (Selsam et al., 2019; Amizadeh et al., 2019) also showed success in predicting satisfiability witnesses (variable assignments that satisfy the formulas) for SAT problems. 2QBF problems have unsatisfiability witnesses (assignments to the ∀-variables that render the reduced propositional formulas unsatisfiable). Next, we test whether we can train GNNs to predict unsatisfiability witnesses of 2QBF formulas. Specifically, the final embeddings of the ∀-variables are transformed into logits via an assignment MLP (MLP_asn) and then used to compute the cross-entropy loss against the actual unsatisfiability witness of the formula: logits_witness = MLP_asn(Emb_∀). Once again we tried different amounts of training data (160, 320, and 640 unsatisfiable formulas) and different numbers of iterations (8, 16, and 32), and then tested the converged models on 600 new unsatisfiable 2QBF formulas. We report the accuracy per variable and the accuracy per formula as tuples for both the training and the testing dataset in Table 2, from which we observe that the GNNs fit the training data well (numbers in green), especially with more message-passing iterations. However, the GNN performance on testing data is only slightly better than random guessing (numbers in blue), and running more iterations during testing does not help either.
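The data-preparation procedure above translates into a short generator, sketched below. Here `is_true_2qbf` stands for any 2QBF oracle (for these sizes, even the brute-force `forall_exists` checker sketched in the introduction works); the paper does not state whether the perturbed formula needs to be re-checked for satisfiability, so we add a re-check for safety, and all names are our own.

```python
import random

def random_clause(n_forall=8, n_exists=10, spec=(2, 3)):
    # A (2,3)-clause over sizes (8,10): 2 ∀-literals and 3 ∃-literals,
    # variables sampled without replacement, signs chosen at random.
    # ∃-variables are numbered after the ∀-variables (9..18 here).
    a = random.sample(range(1, n_forall + 1), spec[0])
    e = random.sample(range(n_forall + 1, n_forall + n_exists + 1), spec[1])
    return [v * random.choice([-1, 1]) for v in a + e]

def generate_pair(is_true_2qbf, n_forall=8):
    # Add random clauses until the formula becomes unsatisfiable, then
    # negate one random ∃-literal; re-sample until the result is satisfiable.
    clauses = []
    while True:
        clauses.append(random_clause())
        if not is_true_2qbf(clauses):
            break
    while True:
        sat_clauses = [list(c) for c in clauses]
        ci = random.randrange(len(sat_clauses))
        li, lit = random.choice(
            [(i, l) for i, l in enumerate(sat_clauses[ci]) if abs(l) > n_forall])
        sat_clauses[ci][li] = -lit
        if is_true_2qbf(sat_clauses):
            return clauses, sat_clauses   # (unsatisfiable, satisfiable) pair
```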
This paper explores how graph neural networks can be applied to test the satisfiability of 2QBF logical formulas. They show that a straightforward extension of a GNN-based SAT solver to 2QBF fails to outperform random chance, and argue that this is because proving either satisfiability or unsatisfiability of 2QBF requires reasoning over exponential sets of assignments. Instead, they show that GNNs can be useful as a heuristic candidate- or counterexample-ranking model which improves the efficiency of the CEGAR algorithm for solving 2QBF.
Graph Neural Networks for Reasoning 2-Quantified Boolean Formulas
This paper investigates GNN-based solvers for the 2-Quantified Boolean Formula (2QBF) satisfiability problem. It points out that GNNs have limitations in reasoning about the unsatisfiability of SAT problems, possibly due to the simple message-passing scheme. To extend GNN-based SAT solvers to 2QBF solvers, the paper then turns to learning GNN-based heuristics that work with a traditional decision procedure, and proposes a CEGAR-based 2QBF algorithm.
The Probabilistic Fault Tolerance of Neural Networks in the Continuous Limit
1 INTRODUCTION.

Understanding the inner workings of artificial neural networks (NNs) is currently one of the most pressing questions in learning theory (20). As of now, neural networks are the backbone of the most successful machine learning solutions (37; 18). They are deployed in safety-critical tasks in which there is little room for mistakes (10; 40). Nevertheless, such mistakes are regularly reported since attention was brought to NN vulnerabilities over the past few years (37; 5; 24; 8).

Fault tolerance as a part of theoretical NN research. Understanding complex systems requires understanding how they can tolerate failures of their components. This has been a particularly fruitful method in systems biology, where mapping the full network of metabolite molecules is a computationally quixotic venture. Instead of fully mapping the network, biologists improved their understanding of biological networks by studying the effect of deleting some of their components, one or a few perturbations at a time (7; 12). Biological systems in general are found to be fault tolerant (28), which is thus an important criterion for the biological plausibility of mathematical models.

Neuromorphic hardware (NH). Current machine learning systems are bottlenecked by the underlying computational power (1). One significant improvement over the now-prevailing CPUs/GPUs is neuromorphic hardware. In this paradigm of computation, each neuron is a physical entity (9), and the forward pass is done (theoretically) at the speed of light. However, the components of such hardware are small and unreliable, leading to small random perturbations of the weights of the model (41). Thus, robustness to weight faults is an overlooked concrete Artificial Intelligence (AI) safety problem (2). Since we ground the assumptions of our model in the properties of NH and of biological networks, our fundamental theoretical results can be directly applied in these computing paradigms.

Research on NN fault tolerance. In the 2000s, the fault tolerance of NNs was a major motivation for studying them (14; 16; 4). In the 1990s, the exploration of microscopic failures was fueled by the hopes of developing neuromorphic hardware (NH) (22; 6; 34). The Taylor expansion was one of the tools used to study fault tolerance (13; 26). Another line of research proposes sufficient conditions for robustness (33). However, most of these studies are either empirical or limited to simple architectures (41). In addition, those studies address the worst case (5), which is known to be more severe than a random perturbation. Recently, fault tolerance has also been studied experimentally. DeepMind proposes to focus on neuron removal to understand NNs (25). NVIDIA (21) studies error propagation caused by micro-failures in hardware (3). In addition, mathematically similar problems arise in the study of generalization (29; 30) and robustness (42).

The quest for guarantees. Existing NN approaches do not guarantee fault tolerance: they only provide heuristics and evaluate them experimentally. Theoretical papers, in turn, focus on the worst case and not on errors in a probabilistic sense.
It is known that there exist sets of small worst-case perturbations, adversarial examples (5), leading to pessimistic bounds that are not suitable for the average case of random failures, which is the most realistic case for hardware faults. Another branch of theoretical research studies robustness and arrives at error bounds which, unfortunately, scale exponentially with the depth of the network (29). The goal of this paper is to guarantee that the probability of the loss exceeding a threshold is lower than a pre-determined small value. This condition is sensible: for example, self-driving cars are deemed safe once their probability of a crash is several orders of magnitude less than that of human drivers (40; 15; 36). In addition, current fault-tolerant architectures use the mean as the aggregation over copies of a network to achieve redundancy. This is known to require exponentially more redundancy, and thus hardware cost, than the median approach. In order to apply this powerful technique and reduce costs, certain conditions need to be satisfied, which we evaluate for neural networks.

Contributions. Our main contribution is a theoretical bound on the error in the output of an NN in the case of random neuron crashes, obtained in the continuous limit, where close-by neurons compute similar functions. We show that, while the general problem of fault tolerance is NP-hard, realistic assumptions with regard to neuromorphic hardware, together with a probabilistic approach to the problem, allow us to apply a Taylor expansion in the vast majority of cases, as the weight perturbation is small with high probability. In order for the Taylor expansion to work, we assume that the network is smooth enough, introducing the continuous limit (39) to prove the properties of NNs: it requires neighboring neurons at each layer to be similar. This makes the moments of the error computable in linear time. To our knowledge, the tightness of the bounds we obtain is a novel result. In turn, the bound allows us to build an algorithm that enhances the fault tolerance of neural networks. Our algorithm uses median aggregation, which results in only a logarithmic extra cost, a drastic improvement over the initial NP-hardness of the problem. Finally, we show how to apply the bounds to specific architectures and evaluate them experimentally on real-world networks, notably the widely used VGG (38).

Outline. In Sections 2-4, we set up the formalism and state our bounds. In Section 5, we present applications of our bounds to characterizing the fault tolerance of different architectures. In Section 6, we present our algorithm for certifying fault tolerance. In Section 7, we present our experimental evaluation. Finally, in Section 8, we discuss the consequences of our findings. Full proofs are available in the supplementary material. Code is provided at the anonymized repo github.com/iclr-2020-fault-tolerance/code. We abbreviate Assumption 1 → A1, Proposition 1 → P1, Theorem 1 → T1, Definition 1 → D1.
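Before the formalism, the mean-versus-median point above can be made concrete with a small Monte-Carlo experiment: aggregating the scalar outputs of several redundant copies of a network by the median suppresses rare large failures far better than the mean does. The failure model below (a replica occasionally emits a corrupted value) is a toy illustration of ours, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def aggregate(true_out=1.0, replicas=5, p_fail=0.1, trials=100_000):
    # Each replica returns the true output plus small noise, except that
    # with probability p_fail it returns a corrupted (large-error) value.
    outs = true_out + 0.01 * rng.normal(size=(trials, replicas))
    faulty = rng.random((trials, replicas)) < p_fail
    outs[faulty] = 10.0 * rng.normal(size=faulty.sum())
    err = lambda agg: np.mean(np.abs(agg(outs, axis=1) - true_out) > 0.5)
    return err(np.mean), err(np.median)

mean_fail, median_fail = aggregate()
print(f"P(error > 0.5): mean {mean_fail:.4f}, median {median_fail:.6f}")
```

A single corrupted replica already drags the mean away from the true output, while the median only fails when a majority of replicas fail simultaneously, which is exponentially unlikely in the number of copies.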
2 DEFINITIONS OF PROBABILISTIC FAULT TOLERANCE.

In this section, we formally define a fully-connected network and fault tolerance.

Notations. For any two vectors x, y ∈ R^n we use the notation (x, y) = Σ_{i=1}^n x_i y_i for the standard scalar product. The matrix γ-norm for γ ∈ (0, +∞] is defined as ‖A‖_γ = sup_{x≠0} ‖Ax‖_γ / ‖x‖_γ. We use the infinity norm ‖x‖_∞ = max |x_i| and the corresponding operator matrix norm. We call a vector 0 ≠ x ∈ R^n q-balanced if min |x_i| ≥ q max |x_i|. We denote [n] = {1, 2, ..., n}. We define the Hessian H_{ij} = ∂²y(x)/∂x_i ∂x_j as the matrix of second derivatives. We write layer indices down and element indices up: W_l^{ij}. For the input, we write x_i ≡ x^i. If the layer is fixed, we omit its index. We use the element-wise Hadamard product (x ⊙ y)_i = x_i y_i.

Definition 1. (Neural network) A neural network with L layers is a function y_L : R^{n_0} → R^{n_L} defined by a tuple (L, W, B, ϕ) with a tuple of weight matrices W = (W_1, ..., W_L) (or their distributions) of sizes W_l : n_l × n_{l−1} and biases B = (b_1, ..., b_L) (or their distributions) with b_l ∈ R^{n_l}, via the expression y_l = ϕ(z_l) with pre-activations z_l = W_l y_{l−1} + b_l, l ∈ [L], y_0 = x and y_L = z_L. Note that the last layer is linear. We additionally require ϕ to be 1-Lipschitz, i.e., |ϕ(x) − ϕ(y)| ≤ |x − y|; this is the general case, since if ϕ is K-Lipschitz we rescale the weights W_l^{ij} → W_l^{ij}/K to make K = 1 (indeed, if we rescale ϕ(x) → Kϕ(x), then y_{l−1} → K y′_{l−1}, and in the sum z′_l = Σ W^{ij}/K · K y_{l−1} ≡ z_l). We assume that the network was trained on input-output pairs x, y∗ ∼ X × Y using ERM for a loss ω. The loss layer for an input x and the true label y∗(x) is defined as y_{L+1}(x) = E_{y∗∼Y|x} ω(y_L(x), y∗) with ω ∈ [−1, 1].

Definition 2. (Weight failure) A network (L, W, B, ϕ) with weight failures U of distribution U ∼ D|(x, W) is the network (L, W + U, B, ϕ) for U ∼ D|(x, W). We denote a (random) output of this network as y^{W+U}(x) = ŷ_L(x), with activations ŷ_l and pre-activations ẑ_l, as in D1.

Definition 3. (Bernoulli neuron failures) The Bernoulli neuron crash distribution is the distribution with i.i.d. ξ_l^i ∼ Be(p_l) and U_l^{ij} = −ξ_l^i · W_l^{ij}. For each possible crashing neuron i at layer l we define U_l^i = Σ_j |U_l^{ij}| and W_l^i = Σ_j |W_l^{ij}|, the crashed incoming weights and the total incoming weights. We note that we view neuron failure as a sub-type of weight failure. This definition means that neurons crash independently and output 0 when they do. We use this model because it mimics essential properties of NH (41). Components fail relatively independently, as we model faults as random (41). In the terms of (41), we consider stuck-at-0 crashes, and passive fault tolerance in the sense of reliability.

Definition 4. (Output error for a weight distribution) The error in case of a weight failure with distribution D|(x, W) is Δ_l(x) = y_l^{W+U}(x) − y_l^W(x) for layers l ∈ [L + 1].

We extend the definition of ε-fault tolerance from (23) to the probabilistic case:

Definition 5. (Probabilistic fault tolerance) A network (L, W, B, ϕ) is said to be (ε, δ)-fault tolerant over an input distribution (x, y∗) ∼ X × Y and a crash distribution U ∼ D|(x, W) if P_{(x,y∗)∼X×Y, U∼D|(x,W)} {Δ_{L+1}(x) ≥ ε} ≤ δ. For such a network, we write (W, B) ∈ FT(L, ϕ, p, ε, δ).

Interpretation. To evaluate the fault tolerance of a network, we compute the first moments of Δ_{L+1}. Next, we use tail bounds to guarantee (ε, δ)-FT. The definition means that, with high probability 1 − δ, the additional loss due to faults does not exceed ε. The expectation over the crashes U ∼ D|x can be interpreted in two ways. First, for a large number of neural networks, each having permanent crashes, EΔ is the expectation over all instances of the network implemented in hardware multiple times.
Second, for a single network with intermittent crashes, EΔ is the average error of this one network over repeated runs. The recent review study (41) identifies three types of faults: permanent, transient, and intermittent. Our Definition 2 thus covers all these cases. Now that we have a definition of fault tolerance, we show in the next section that the task of certifying, or even computing, it is hard.
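Definitions 3-5 translate directly into a Monte-Carlo estimator: sample Bernoulli neuron crashes, measure the output error, and read off the empirical tail probability for a given ε. The sketch below uses a toy fully-connected ReLU network with zero biases and measures the error of the last layer rather than of the loss layer; all names and sizes are illustrative, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [4, 32, 32, 1]                     # n_0, n_1, n_2, n_L
Ws = [rng.normal(scale=1 / np.sqrt(m), size=(n, m))
      for m, n in zip(sizes[:-1], sizes[1:])]

def forward(x, weights):
    y = x
    for l, W in enumerate(weights):
        z = W @ y                          # zero biases for simplicity
        y = z if l == len(weights) - 1 else np.maximum(z, 0)  # last layer linear
    return y

def crashed(weights, p=0.05):
    # Definition 3: neuron i of layer l crashes w.p. p_l, zeroing row i of
    # W_l (its incoming weights); with zero biases and ReLU it then outputs 0.
    out = [W.copy() for W in weights]
    for l in range(len(out) - 1):          # hidden layers only
        alive = rng.random(out[l].shape[0]) >= p
        out[l] *= alive[:, None]
    return out

def empirical_delta(x, eps=0.1, trials=10_000):
    base = forward(x, Ws)
    errs = [np.abs(forward(x, crashed(Ws)) - base).max() for _ in range(trials)]
    return np.mean(np.array(errs) >= eps)  # empirical δ for the given ε

x = rng.normal(size=4)
print("empirical P(|Δ_L| >= 0.1):", empirical_delta(x))
```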
Review: This paper considers the problem of dropping neurons from a neural network. In the case where this is done randomly, it corresponds to the widely studied dropout algorithm. If the goal is to become robust to randomly dropped neurons during evaluation, then it seems sufficient to just train with dropout (there is also a Gaussian approximation to dropout using the central limit theorem, called "fast dropout").
The Probabilistic Fault Tolerance of Neural Networks in the Continuous Limit
This contribution studies the impact of deleting random neurons on the prediction accuracy of a trained architecture, with application to failure analysis in the specific context of neuromorphic hardware. The manuscript shows that worst-case analysis of failure modes is NP-hard and contributes a theoretical analysis of the average-case impact on prediction accuracy of random perturbations with Bernoulli noise, as well as a training algorithm based on aggregation. The difficulty of obtaining tight bounds comes from the fact that a network with many layers can have a very large Lipschitz constant. The average-case analysis is based on wide neural networks and an assumption of a form of smoothness in the values of hidden units as the width increases. The improved fitting procedure adds a set of regularizing terms, including a regularization of the spectral norm of the layers.
Learning representations for binary-classification without backpropagation
1 INTRODUCTION.

A key factor enabling the successes of deep learning is the backpropagation of error (BP) algorithm (Rumelhart et al., 1986). Since its introduction, BP has sparked several discussions on whether physical brains realize BP-like learning or not (Grossberg, 1987; Crick, 1989). Today, most researchers agree that two distinct characteristics of BP render the idea of BP-based learning in brains implausible: 1) the usage of symmetric forward and backward connections, and 2) the strict separation of activity and error propagation (Bartunov et al., 2018). These two objections have led researchers to search for more biologically motivated alternatives to BP. The three most influential families of BP alternatives distilled so far are Contrastive Hebbian Learning (CHL) (Movellan, 1991), target-propagation (TP) (LeCun, 1986; Hinton, 2007; Bengio, 2014), and feedback alignment (FA) (Lillicrap et al., 2016).

The idea of CHL is to propagate the target activities, instead of the errors, backward through the network. For this reason, a temporal dimension is added to the neuron activities. Each neuron then adapts its parameters based on the temporal differences between its "forward" and "backward" activity. The two major criticisms of CHL are the requirement for symmetric "forward-backward" connections and the use of alternating "forward" and "backward" phases (Baldi & Pineda, 1991; Bartunov et al., 2018).

TP shares with Contrastive Hebbian Learning the idea of propagating target activities instead of errors. However, rather than keeping symmetric forward and backward paths, the reciprocal propagation of the activities is realized through learned connections. Consequently, each layer is assigned two objectives: learning the inverse of the layer's forward function, and minimizing the difference to the back-projected target activity. Variants of TP differ in how exactly the target activity is projected backward (LeCun, 1986; Bengio, 2014; Bartunov et al., 2018). Theoretical guarantees of TP rely on the assumption that each reciprocal connection implements the perfect inverse of the corresponding forward function. This issue of an imperfect inverse was also found to be the "bottleneck" of TP in practice (Bartunov et al., 2018). When the output of a layer has a significantly lower dimension than its input, reconstructing the input from the output becomes challenging, resulting in poor learning performance.

Feedback alignment algorithms eliminate the weight-sharing implausibility of BP by replacing the symmetric weights in the error-backpropagation path with random matrices. The second objection, i.e., separate activity and error channels, is attenuated by Direct Feedback Alignment (Nøkland, 2016), which drastically reduces the number of channels carrying an error signal. While feedback alignment algorithms work well on small and medium-sized benchmarks, a recent study found that they are unable to provide learning on more challenging datasets like ImageNet (Bartunov et al., 2018). Another criticism of FA algorithms is the lack of rigorous mathematical justification and convergence guarantees for the performed computations.

In this work, we investigate feed-forward networks where the weights of all layers except the first are constrained to positive values.
We prove that this constraint does not invalidate the universal approximation capabilities of neural networks. Next, we show that, in combination with monotonic activation functions, all layers from the second layer on realize monotonically increasing functions. The backpropagation of a scalar error signal through these layers only affects the magnitude of the error signal but does not change its sign. Consequently, we prove that algorithms that bypass the error-backpropagation steps, such as Direct Feedback Alignment, can compute the sign of the true gradient with respect to the weights of our constrained networks without the need for backpropagation. Finally, we show that our algorithm, which we call monotone Direct Feedback Alignment (mDFA), delivers on its theoretical promises in practice by surpassing the learning performance of existing feedback alignment algorithms in binary classification tasks, i.e., when the error signal is scalar, and by providing decent performance even when the error signal is not scalar. We make the following key contributions:

• The first FA algorithm with provable learning capabilities for non-linear networks of arbitrary depth.
• An experimental evaluation showing that our FA algorithm outperforms the learning performance of existing FA algorithms and matches backpropagation in binary classification tasks.
• An efficient TensorFlow implementation of all tested algorithms, made publicly available at github.com/mlech26l/iclr_paper_mdfa.

2 BACKPROPAGATION AND FEEDBACK ALIGNMENT.

We consider the feed-forward neural network

h_l(h_{l−1}) := f(W_l h_{l−1} + b_l) if l < L,  W_l h_{l−1} + b_l if l = L,
h_0 := x,  W_l ∈ R^{n_l × n_{l−1}}, b_l ∈ R^{n_l}   (1)

where f is the non-linear activation function, x the input, and h_L the output of the network. For classification tasks, h_L is usually transformed into a probability distribution with discrete support by a sigmoid or softmax function. During training, the parameters W_l, b_l, l = 1, ..., L are adjusted to minimize a loss function L(y, h_L) on samples of a given training distribution p(y, x). This is usually done by performing gradient descent

θ_l ← θ_l − α dL/dθ_l,  α ∈ R_+   (2)

with respect to the parameters θ_l ∈ {W_l, b_l}, 1 ≤ l ≤ L, of the network.

[Figure 1: schematics of the activity and error pathways in a) BP, b) FA, c) DFA, and d) mDFA.]

2.1 BACKPROPAGATION.

Backpropagation (Rumelhart et al., 1986) is the primary method to compute the gradients needed for the updates in Equation (2), by iteratively applying the chain rule:

dL/dθ_l = (dh_l/dθ_l)^T dL/dh_l   (3)
dL/dh_l = (dh_{l+1}/dh_l)^T dL/dh_{l+1}   (4)
dh_{l+1}/dh_l = W_{l+1} diag(f′(W_{l+1} h_l + b_{l+1}))   (5)

A graphical representation of how information first flows forward and then backward through each layer in BP is shown in Figure 1a. Two major concerns argue against the idea that biological neural networks implement BP-based learning: I) the weight matrix W_l of the forward path is reused in the backward path in the form of W_l^T (weight sharing), and II) the strict separation of activity-carrying forward connections and error-carrying backward connections (reciprocal error transport).
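As a concrete companion to Equations (1)-(5), the following NumPy sketch implements the forward pass of the network defined above and the chain-rule backward pass for the weight and bias gradients; the tanh nonlinearity, squared loss, and layer sizes are our illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [3, 16, 16, 1]                           # n_0 .. n_L
Ws = [rng.normal(scale=0.5, size=(n, m)) for m, n in zip(sizes, sizes[1:])]
bs = [np.zeros(n) for n in sizes[1:]]
f, df = np.tanh, lambda z: 1.0 - np.tanh(z) ** 2

def forward(x):
    hs, zs = [x], []
    for l, (W, b) in enumerate(zip(Ws, bs)):
        zs.append(W @ hs[-1] + b)                # Eq. (1): z_l = W_l h_{l-1} + b_l
        hs.append(zs[-1] if l == len(Ws) - 1 else f(zs[-1]))  # last layer linear
    return hs, zs

def backprop(x, y):
    hs, zs = forward(x)
    dL_dh = hs[-1] - y                           # dL/dh_L for L = 0.5 ||h_L - y||^2
    grads = []
    for l in reversed(range(len(Ws))):
        dL_dz = dL_dh if l == len(Ws) - 1 else dL_dh * df(zs[l])
        grads.append((np.outer(dL_dz, hs[l]), dL_dz))  # (dL/dW_l, dL/db_l), Eq. (3)
        dL_dh = Ws[l].T @ dL_dz                  # Eqs. (4)-(5): error one layer back
    return grads[::-1]

grads = backprop(rng.normal(size=3), np.array([0.5]))
```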
2.2 FEEDBACK ALIGNMENT ALGORITHMS.

Feedback alignment addresses the implausibility of reusing W_l^T in the backward path by replacing W_l^T with a fixed random matrix B_l. Lillicrap et al. (2016) showed that this somewhat counter-intuitive approach works remarkably well in practice. The term "feedback alignment" originates from the observation made by Lillicrap et al. (2016) that the angle between the FA update vector and the true gradient starts to decrease, i.e., align, after a few epochs of training. Theoretical groundwork on this alignment principle of FA relies on strong assumptions, such as a linearized network with one hidden layer (Lillicrap et al., 2016). FA avoids any weight sharing but does not address the reciprocal-error-transport implausibility, due to its strict separation of forward and backward pathways, as shown in Figure 1b.

Direct Feedback Alignment (DFA) (Nøkland, 2016) relaxes this issue by replacing all backward paths with a direct feed of the output-layer error gradient dL/dh_L. Consequently, there is only a single error signal that is distributed across the entire network, which is arguably more biologically plausible than reciprocal error connections. The resulting parameter updates of DFA are of the form

δθ_l := dL/dh_L · dh_L/dθ_l if l = L;  (dL/dh_L B_l) · dh_l/dθ_l if l < L   (6)

where B_l ∈ R^{n_L × n_l} is a random matrix. A graphical schematic of DFA is shown in Figure 1c. Similar to FA, DFA shows decent learning performance on mid-sized classification tasks (Nøkland, 2016), but fails on more complex datasets such as ImageNet (Bartunov et al., 2018). Theory on adapting the alignment principle to DFA shows that, under the strong assumptions of constant DFA update directions and layer-wise criterion minimization, the DFA update vector will align with the true gradient (Nøkland, 2016; Gilmer et al., 2017). Recently, Frenkel et al. (2019) proposed to combine ideas from feedback alignment and target-propagation in their Direct Random Target Projection (DRTP) algorithm. While DRTP shows decent empirical performance, theoretical guarantees about DRTP rely on linearized networks.

2.3 SIGN-SYMMETRY ALGORITHMS.

Liao et al. (2016) introduced the sign-symmetry algorithm, a hybrid of BP and FA. Sign-symmetry locks the signs of the feedback weights B_l to the signs of W_l^T, but with random absolute values. The authors showed that this approach drastically improves learning performance compared to standard FA. Furthermore, Moskovitz et al. (2018) and Xiao et al. (2019) demonstrated that the sign-symmetry algorithm is even able to match backpropagation in training deep network architectures on large datasets such as ImageNet. While these empirical observations suggest that the polarity of the error feedback is more important than its magnitude, a mathematical justification of sign-symmetry remains absent. Similar to FA, sign-symmetry relaxes the strict weight-sharing implausibility but still relies on an unrealistic reciprocal error transport.
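A minimal sketch of the DFA update of Equation (6), reusing forward, df, Ws, sizes, and rng from the backpropagation sketch above: every hidden layer receives the output error directly through its own fixed random matrix B_l instead of through the transposed forward weights. Names remain illustrative.

```python
# Fixed random feedback matrices B_l of shape (n_L, n_l), one per hidden layer.
Bs = [rng.normal(size=(sizes[-1], n)) for n in sizes[1:-1]]

def dfa_updates(x, y):
    hs, zs = forward(x)
    e = hs[-1] - y                           # dL/dh_L, the single error signal
    grads = []
    for l in range(len(Ws)):
        if l == len(Ws) - 1:
            dz = e                           # output layer uses the true gradient
        else:
            dz = (Bs[l].T @ e) * df(zs[l])   # Eq. (6): direct random feedback
        grads.append((np.outer(dz, hs[l]), dz))
    return grads
```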
3 MONOTONE DIRECT FEEDBACK ALIGNMENT.

In this section, we first introduce a new class of feed-forward networks in which all layers except the first are constrained to realize monotone functions. We call such networks mono-nets and show that they are as expressive as unconstrained feed-forward networks. Next, we prove that for mono-nets on single-output tasks, e.g., binary classification, feedback alignment algorithms provide the sign of the gradient. The sign of the gradient is interesting for learning, as it tells us whether the value of a parameter should be increased or decreased in order to reduce the loss. At the end of this section, we highlight similarities to algorithms from the literature that provide resilient learning by relying only on the sign of the gradient.

Neural networks with monotonicity constraints have already been studied in the literature (You et al., 2017), however not in the context of learning algorithms.

Definition 1 (mono-net). A mono-net is a feed-forward neural network with L layers h_1, ..., h_L, each layer l composed of n_l units, and the semantics

h_l(h_{l−1}) := f(W_l h_{l−1} + b_l) if l < L,  W_l h_{l−1} + b_l if l = L   (7)
h_0 := x   (8)
W_1 ∈ R^{n_1 × n_0}   (9)
W_l ∈ R_+^{n_l × n_{l−1}} for l > 1   (10)
b_l ∈ R^{n_l}   (11)

where R_+ are the non-negative reals, i.e., R_+ = {x ∈ R | x ≥ 0}, f is a non-linear monotonically increasing activation function, x the input, and h_L the output of the network.

The major difference between mono-nets and general feed-forward neural networks is the restriction to non-negative weight values from the second layer on. Combined with the monotonically increasing activation function, this means that each layer h_l(h_{l−1}), l ≥ 2, realizes a monotonically increasing function. Because functional composition preserves monotonicity, the complete network above the first layer,

h_L ∘ h_{L−1} ∘ ⋯ ∘ h_2(h_1)   (12)

implements a monotonically increasing function.

mono-nets are universal approximators. At first glance, this restriction seems counterproductive, as it might interfere with the expressiveness of the networks. However, we prove in Theorem 1 that mono-nets with hyperbolic tangent activation are universal approximators, meaning that they can approximate any continuous function arbitrarily closely. A potential drawback of the monotonicity constraint is that we might need a larger number of units in the hidden layers to achieve the same expressiveness as a general feed-forward network, as illustrated in our proof of Theorem 1.

Theorem 1 (mono-nets are universal approximators). Let I^n be the n-dimensional unit hypercube [0, 1]^n and C(I^n) denote the set of continuous functions f : I^n → R. We define ‖f‖_∞ as the supremum norm of f ∈ C(I^n) over its domain I^n. For any given f ∈ C(I^n) and ε > 0, there exists a function m : I^n → R of the form

m(x) := Σ_{i=1}^M v̄_i tanh(w̄_i^T x + ŵ_i^T (−x) + b̄_i) + c   (13)

with v̄ ∈ R_+^M, w̄_i ∈ R_+^n, ŵ_i ∈ R_+^n, b̄ ∈ R^M, c ∈ R and M < ∞, such that ‖m(x) − f(x)‖_∞ < ε. In essence, the set of functions m(x) of the form given in (13) is dense in C(I^n).

Proof. See supplementary materials.

mDFA provides the sign of the gradient. Here, we prove that for 1-dimensional outputs, DFA applied to a mono-net, which we simply call mDFA, provides the sign of the true gradient. Note that we focus our methods on DFA instead of "vanilla" FA due to the superiority of DFA in terms of biological plausibility and empirical performance (Nøkland, 2016; Bartunov et al., 2018).

Theorem 2 (For 1-dimensional outputs, mDFA computes the sign of the gradient). Let L(y, h_L) be a loss function and m(x) := h_L ∘ h_{L−1} ∘ ⋯ ∘ h_2 ∘ h_1 ∘ h_0(x) be a mono-net according to Definition 1 with parameters Θ := {W_l, b_l | l = 1, ..., L}. We denote by δθ the update value computed by mDFA and by ∇θ the gradient ∂L/∂θ, for any θ ∈ {W_l, b_l} with 1 ≤ l ≤ L. If n_L = 1, it follows that

(δθ)_{i,j} · (∇θ)_{i,j} ≥ 0   (14)

for each coordinate (i, j) of θ.

Proof. See supplementary materials. A graphical illustration of how activities and errors propagate in mDFA is shown in Figure 1d.
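Theorem 2 invites a quick numerical sanity check: build a random mono-net with a scalar output, compute the true gradients by backpropagation and the mDFA updates via direct feedback, and verify coordinate-wise sign agreement. The sketch below is self-contained; sampling the feedback weights non-negative is our reading of the mDFA construction, and tanh with a squared loss is an illustrative choice (Theorem 2 itself only requires n_L = 1).

```python
import numpy as np

rng = np.random.default_rng(1)
sizes = [5, 8, 8, 1]                           # scalar output: n_L = 1
W = [rng.normal(size=(sizes[1], sizes[0]))]    # first layer unconstrained
W += [rng.uniform(0, 1, size=(n, m)) for m, n in zip(sizes[1:], sizes[2:])]
b = [rng.normal(size=n) for n in sizes[1:]]
B = [np.abs(rng.normal(size=(1, n))) for n in sizes[1:-1]]  # non-negative feedback
f, df = np.tanh, lambda z: 1.0 - np.tanh(z) ** 2

x, y = rng.normal(size=sizes[0]), np.array([0.7])

hs, zs = [x], []                               # forward pass, last layer linear
for l in range(3):
    zs.append(W[l] @ hs[-1] + b[l])
    hs.append(zs[-1] if l == 2 else f(zs[-1]))
e = hs[-1] - y                                 # dL/dh_L for the squared loss

ok, dz_bp = True, e
for l in reversed(range(3)):
    dz_fa = e if l == 2 else (B[l].T @ e) * df(zs[l])   # mDFA update direction
    g_bp, g_fa = np.outer(dz_bp, hs[l]), np.outer(dz_fa, hs[l])
    ok &= np.all(g_bp * g_fa >= 0)             # Theorem 2: coordinate-wise signs
    if l > 0:
        dz_bp = (W[l].T @ dz_bp) * df(zs[l - 1])        # true backprop recursion

print("sign agreement:", bool(ok))
```

Because all weights above the first layer and all feedback entries are non-negative, both the backpropagated error and the directly fed error carry the sign of the scalar output error, which is exactly the mechanism behind the theorem.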
Literature on learning by relying only on the sign of the gradient. Two learning concepts related to mDFA are RPROP (Riedmiller & Braun, 1993; Riedmiller, 1994) and signSGD (Bernstein et al., 2018). RPROP aims to build a more resilient alternative to gradient descent by decoupling the amplitude of the gradient from the step size of parameter updates. In essence, for each coordinate, RPROP adapts the step size based on the signs of the most recently computed gradients. Riedmiller & Braun (1993) showed that their approach could stabilize the training of a neural network compared to standard gradient descent. Performing gradient descent while keeping only the sign of each gradient coordinate is, at an algorithmic level, equivalent to the well-known steepest descent method with the L∞ norm (Boyd & Vandenberghe, 2004; Bernstein et al., 2018). signSGD (Bernstein et al., 2018) studies the convergence properties of the stochastic approximation of this algorithm.

What about networks with more than one output neuron? Theorem 2 applies only to networks with scalar output. As a natural consequence, one may ask whether such theoretical guarantees can be extended to higher-dimensional output variables. The simple answer is, unfortunately, no. In the supplementary materials, Section A.3, we provide a counterexample showing that Theorem 2, naively extended to two output neurons, does not hold anymore. We want to note that the requirement that a neural network have only a single output neuron is biologically unjustified: it is known that sub-circuits of biological neuronal networks can feed into multiple motor neuron groups (Cook et al., 2019).

How does mDFA relate to non-negative matrix factorization? A seemingly related concept to mDFA is the non-negative matrix factorization (NMF) algorithm. NMF decomposes an observation matrix V into a weight matrix W and a latent variable matrix H such that V ≈ WH. In contrast to other decomposition-based unsupervised learning methods, all three matrices V, W and H are restricted to non-negative entries. While NMF can effectively model data that is inherently non-negative, such as semantic features of images and text (Yuan & Oja, 2005; Shahnaz et al., 2006), the method is unable to learn subtractive and non-linear structures that are present in the data (Lee & Seung, 1999). Semi-non-negative matrix factorization (Ding et al., 2008) relaxes NMF's original restriction to non-negative observations by constraining only the weight matrix W to be non-negative. Deep semi-NMF (Trigeorgis et al., 2014) further enhances the expressiveness of NMF by adding multiple layers, with non-linearities between them, to the decomposition. In relation to this work, the semantics of mono-nets from the second layer on is equivalent to that of deep semi-NMF models. However, the unconstrained first layer of mono-nets provides universal approximation capabilities, enabling mono-nets to learn subtractive and non-monotonic input dependencies. Moreover, while deep NMF models are mostly trained via layer-wise learning in an unsupervised context (Trigeorgis et al., 2014; Yu et al., 2018), the sole purpose of mono-nets is to investigate alternatives to backpropagation for training multi-layer classifiers.
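As a concrete reference point for the sign-based methods above, a signSGD update step can be sketched as follows. This is a minimal illustration, not code from any of the cited papers; the parameter and gradient containers are assumptions:

```python
import numpy as np

def signsgd_step(params, grads, lr=0.01):
    """One signSGD update (Bernstein et al., 2018): each coordinate moves by a
    fixed step lr, using only the sign of its gradient coordinate."""
    return [p - lr * np.sign(g) for p, g in zip(params, grads)]

# Usage on toy parameters and gradients:
params = [np.array([0.5, -1.0]), np.array([[0.1, 0.2]])]
grads = [np.array([0.3, -0.7]), np.array([[-0.2, 0.0]])]
params = signsgd_step(params, grads)
```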
This paper presents an approach toward extending the capabilities of feedback alignment algorithms, which in essence replace the error backpropagation weights with random matrices. The authors propose a particular type of network in which all weights except the first layer's are constrained to positive values, the activation function is monotonically increasing, and there is a single output neuron (i.e., binary classification; empirical evidence for more output neurons is presented but not theoretically supported). This ensures that backpropagating the (scalar) error signal affects only the magnitude of the error rather than its sign, while preserving universal approximation. The authors also provide provable learning capabilities, and several experiments that show good performance, while also pointing out limitations in the case of multiple output neurons.
SP:8d95af673099b1df7b837f583aa55678d67c5bd6
This paper examines the question of learning in neural networks with random, fixed feedback weights, a technique known as "feedback alignment". Feedback alignment was originally discovered by Lillicrap et al. (2016; Nature Communications, 7, 13276) when they were exploring potential means of solving the "weight transport problem" for neural networks. Essentially, the weight transport problem refers to the fact that the backpropagation-of-error algorithm requires feedback pathways for communicating errors whose synaptic weights are symmetric to those of the feedforward pathway, which is biologically questionable. Feedback alignment is one approach to solving the weight transport problem, which, as stated above, relies on the use of random, fixed weights for communicating the error backwards. It has been shown that in some cases feedback alignment converges to weight updates that are reasonably well aligned with the true gradient. Though initially considered a good potential solution for biologically realistic learning, feedback alignment has neither scaled up to difficult datasets nor come with theoretical guarantees that it converges to the true gradient. This paper addresses both of these issues.
SP:8d95af673099b1df7b837f583aa55678d67c5bd6
Quantized Reinforcement Learning (QuaRL)
Recent work has shown that quantization can help reduce the memory, compute, and energy demands of deep neural networks without significantly harming their quality. However, whether these prior techniques, traditionally applied to image-based models, work with the same efficacy on the sequential decision-making process of reinforcement learning remains an unanswered question. To address this void, we conduct the first comprehensive empirical study that quantifies the effects of quantization on various deep reinforcement learning policies, with the intent of reducing their computational resource demands. We apply techniques such as post-training quantization and quantization aware training to a spectrum of reinforcement learning tasks (such as Pong, Breakout, BeamRider and more) and training algorithms (such as PPO, A2C, DDPG, and DQN). Across this spectrum of tasks and learning algorithms, we show that policies can be quantized to 6-8 bits of precision without loss of accuracy. We also show that certain tasks and reinforcement learning algorithms yield policies that are more difficult to quantize because they widen the models' distribution of weights, and that quantization aware training consistently improves results over post-training quantization and oftentimes even over the full precision baseline. Additionally, we show that quantization aware training, like a traditional regularizer, regularizes models by increasing exploration during the training process. Finally, we demonstrate the usefulness of quantization for reinforcement learning: we use half-precision training to train a Pong model 50% faster, and we deploy a quantized reinforcement learning based navigation policy to an embedded system, achieving an 18× speedup and a 4× reduction in memory usage over an unquantized policy.

1 INTRODUCTION. Deep reinforcement learning has shown promise in many applications, ranging from game playing (Silver et al., 2016; 2017; Kempka et al., 2016) to robotics (Lillicrap et al., 2015; Zhang et al., 2015) to locomotion and transportation (Arulkumaran et al., 2017; Kendall et al., 2018). However, the training and deployment of reinforcement learning models remain challenging. Training is expensive because of the computational demands of repeatedly performing forward and backward propagation during neural network training. Deploying deep reinforcement learning (DRL) models is prohibitively expensive, if not impossible, due to the resource constraints of the embedded computing systems typically used in applications such as robotics and drone navigation. Quantization can be helpful in substantially reducing the memory, compute, and energy usage of deep learning models without significantly harming their quality (Han et al., 2015; Zhou et al., 2016; Han et al., 2016). However, it is unknown whether the same techniques carry over to reinforcement learning. Unlike models in supervised learning, the quality of a reinforcement learning policy depends on how effective it is in sequential decision making. Specifically, an agent's current input and decision heavily affect its future state and future actions; it is unclear how quantization affects the long-term decision-making capability of reinforcement learning policies. Also, there are many different algorithms for training a reinforcement learning policy.
Algorithms like actor-critic methods (A2C), deep Q-networks (DQN), proximal policy optimization (PPO) and deep deterministic policy gradients (DDPG) differ significantly in their optimization goals and implementation details, and it is unclear whether quantization would be similarly effective across these algorithms. Finally, reinforcement learning policies are trained for and applied to a wide range of environments, and it is unclear how quantization affects performance on tasks of differing complexity. Here, we aim to understand the effects of quantization on deep reinforcement learning policies. We comprehensively benchmark the effects of quantization on policies trained by various reinforcement learning algorithms on different tasks, conducting in excess of 350 experiments to present a representative and conclusive analysis. We perform experiments over three major axes: (1) environments (Atari Arcade, PyBullet, OpenAI Gym), (2) reinforcement learning training algorithms (Deep Q-Networks, Advantage Actor-Critic, Deep Deterministic Policy Gradients, Proximal Policy Optimization) and (3) quantization methods (post-training quantization, quantization aware training). We show that quantization induces a regularization effect by increasing exploration during training. This motivates the use of quantization aware training, which we show improves performance over post-training quantization and oftentimes even over the full precision baseline. Additionally, we show that deep reinforcement learning models can be quantized to 6-8 bits of precision without loss in quality. Furthermore, we analyze how each axis affects the final performance of the quantized model, to develop insights into how to achieve better model quantization. Our results show that some tasks and training algorithms yield models to which post-training quantization is harder to apply, as they widen the spread of the models' weight distribution and thus incur higher quantization error. To demonstrate the usefulness of quantization for deep reinforcement learning, we 1) use half-precision ops to train a Pong model 50% faster than full-precision training and 2) deploy a quantized reinforcement learning based navigation policy onto an embedded system, achieving an 18× speedup and a 4× reduction in memory usage over an unquantized policy.

2 RELATED WORK. Reducing the resource requirements of neural networks is an active research topic. Techniques include quantization (Han et al., 2015; 2016; Zhu et al., 2016; Jacob et al., 2018; Lin et al., 2019; Polino et al., 2018; Sakr & Shanbhag, 2018), deep compression (Han et al., 2016), knowledge distillation (Hinton et al., 2015; Chen et al., 2017), sparsification (Han et al., 2016; Alford et al., 2018; Park et al., 2016; Louizos et al., 2018b; Bellec et al., 2017) and pruning (Alford et al., 2018; Molchanov et al., 2016; Li et al., 2016). These methods are employed because they reduce storage and memory requirements and enable fast and efficient inference and training with specialized operations. We provide background for these motivations, describe the specific techniques that fall under these categories, and motivate why quantization for reinforcement learning needs study.

Compression for Memory and Storage: Techniques such as quantization, pruning, sparsification, and distillation reduce the amount of storage and memory required by deep neural networks.
These techniques are motivated by the need to train and deploy neural networks in memory-constrained environments (e.g., IoT or mobile). Broadly, quantization reduces the precision of network weights (Han et al., 2015; 2016; Zhu et al., 2016), pruning removes layers and filters of a network (Alford et al., 2018; Molchanov et al., 2016), sparsification zeros out selected network values (Molchanov et al., 2016; Alford et al., 2018) and distillation compresses an ensemble of networks into one (Hinton et al., 2015; Chen et al., 2017). Various algorithms combining these core techniques have been proposed. For example, Deep Compression (Han et al., 2015) demonstrated that a combination of weight sharing, pruning, and quantization can reduce storage requirements by 35-49×. Importantly, these methods achieve high compression rates with small losses in accuracy by exploiting the redundancy that is inherent in neural networks.

Fast and Efficient Inference/Training: Methods like quantization, pruning, and sparsification may also be employed to improve the runtime of network inference and training as well as their energy consumption. Quantization reduces the precision of network weights and allows more efficient quantized operations to be used during training and deployment, for example a "binary" GEMM (general matrix multiply) operation (Rastegari et al., 2016; Courbariaux et al., 2016). Pruning speeds up neural networks by removing layers or filters, reducing the overall amount of computation necessary to make predictions (Molchanov et al., 2016). Finally, sparsification zeros out network weights and enables faster computation via specialized primitives like block-sparse matrix multiply (Ren et al., 2018). These techniques not only speed up neural networks but also decrease energy consumption by requiring fewer floating-point operations.

Quantization for Reinforcement Learning: Prior work on quantization focuses mostly on quantizing image-based / supervised models. However, there are several key differences between these models and reinforcement learning policies: an agent's current input and decision affect its future state and actions, there are many complex training algorithms (e.g., DQN, PPO, A2C, DDPG), and there are many diverse tasks. To the best of our knowledge, this is the first work to apply and analyze the performance of quantization across a broad range of reinforcement learning tasks and training algorithms.

3 QUANTIZED REINFORCEMENT LEARNING (QUARL). We develop QuaRL, an open-source software framework that allows us to systematically apply traditional quantization methods to a broad spectrum of deep reinforcement learning models. We use the QuaRL framework to 1) evaluate how effective quantization is at compressing reinforcement learning policies, 2) analyze how quantization affects and is affected by the various environments and training algorithms in reinforcement learning and 3) establish a standard for the performance of quantization techniques across various training algorithms and environments.

Environments: We evaluate quantized models on three different types of environments: OpenAI Gym (Brockman et al., 2016), Atari Arcade Learning (Bellemare et al., 2012), and PyBullet (an open-source implementation of MuJoCo). These environments consist of a variety of tasks, including CartPole, MountainCar, LunarLander, Atari games, Humanoid, etc.
The complete list of environments used in the QuaRL framework is given in Table 1. Evaluations across this spectrum of different tasks provide a robust benchmark of the performance of quantization applied to different reinforcement learning tasks.

Training Algorithms: We study quantization on four popular reinforcement learning algorithms, namely Advantage Actor-Critic (A2C) (Mnih et al., 2016), Deep Q-Network (DQN) (Mnih et al., 2013), Deep Deterministic Policy Gradients (DDPG) (Lillicrap et al., 2015) and Proximal Policy Optimization (PPO) (Schulman et al., 2017). Evaluating these standard reinforcement learning algorithms, which are well established in the community, allows us to explore whether quantization is similarly effective across different reinforcement learning algorithms.

Quantization Methods: We apply standard quantization techniques to deep reinforcement learning models. Our main approaches are post-training quantization and quantization aware training. We apply these methods to models trained in different environments by different reinforcement learning algorithms to broadly understand their performance. We describe how these methods are applied in the context of reinforcement learning below.

3.1 POST-TRAINING QUANTIZATION. Post-training quantization takes a trained full precision model (32-bit floating point) and quantizes its weights to lower precision values. We quantize weights down to fp16 (16-bit floating point) and int8 (8-bit integer) values. fp16 quantization is based on IEEE-754 floating point rounding and int8 quantization uses uniform affine quantization.

Fp16 Quantization: Fp16 quantization involves taking full precision (32-bit) values and mapping them to the nearest representable 16-bit float. The IEEE-754 standard specifies 16-bit floats with a sign bit (S), a 5-bit exponent (E) and a 10-bit fraction (F), which are combined to yield the effective value of the float:

V_fp16 = (−1)^S × (1 + F/2^10) × 2^(E−15)

In subsequent sections, we refer to float16 quantization using the following notation: Q_fp16(W) = round_fp16(W).

Uniform Affine Quantization: Uniform affine quantization (TensorFlow, 2018b) is applied to a full precision weight matrix and is performed by 1) calculating the minimum and maximum values of the matrix and 2) dividing this range equally into 2^n representable values (where n is the number of bits being quantized to). As each representable value is equally spaced across this range, the quantized value can be represented by an integer. More specifically, quantization from full precision to n-bit integers is given by:

Q_n(W) = ⌊W/δ⌋ + z,  where  δ = (|min(W, 0)| + |max(W, 0)|) / 2^n  and  z = ⌊−min(W, 0)/δ⌋

Note that δ is the gap between representable numbers and z is an offset so that 0 is exactly representable. Further note that we use min(W, 0) and max(W, 0) to ensure that 0 is always represented. To dequantize we perform:

D(W_q, δ, z) = δ(W_q − z)

In the context of QuaRL, int8 and fp16 quantization are applied after training a full precision model on an environment, as per Algorithm 1.
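The following is a minimal NumPy sketch of the uniform affine quantizer and dequantizer defined above. The function names are ours, and saturating values that land on the upper range boundary is an assumed detail not spelled out in the text:

```python
import numpy as np

def affine_quantize(W, n_bits=8):
    """Uniform affine quantization: Q_n(W) = floor(W / delta) + z."""
    delta = (abs(min(W.min(), 0.0)) + abs(max(W.max(), 0.0))) / (2 ** n_bits)
    z = int(np.floor(-min(W.min(), 0.0) / delta))      # offset: 0 maps exactly
    Wq = np.floor(W / delta).astype(np.int64) + z
    return np.clip(Wq, 0, 2 ** n_bits - 1), delta, z   # assumed saturation

def affine_dequantize(Wq, delta, z):
    """D(Wq, delta, z) = delta * (Wq - z)."""
    return delta * (Wq - z)

# Round trip on a toy weight matrix; the error is bounded by the gap delta.
W = np.random.randn(4, 4).astype(np.float32)
Wq, delta, z = affine_quantize(W, n_bits=8)
W_hat = affine_dequantize(Wq, delta, z)
assert np.max(np.abs(W - W_hat)) <= delta + 1e-6
```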
In post-training quantization, uniform quantization is applied to each fully connected layer of the model (per-tensor quantization) and to each channel of convolution weights (per-axis quantization); activations are not quantized. We use post-training quantization to quantize to fp16 and int8 values.

Algorithm 1: Post-Training Quantization for Reinforcement Learning
Input: T: task or environment
Input: L: reinforcement learning algorithm
Input: A: model architecture
Input: n: quantize bits (8 or 16)
Output: Reward
1: M = Train(T, L, A)
2: Q = Q_int8 if n = 8; Q_fp16 if n = 16
3: return Eval(Q(M))

Algorithm 2: Quantization Aware Training for Reinforcement Learning
Input: T: task or environment
Input: L: reinforcement learning algorithm
Input: n: quantize bits
Input: A: model architecture
Input: Q_d: quantization delay
Output: Reward
1: A_q = InsertAfterWeightsAndActivations(Q_train_n)
2: M, TensorMinMaxes = TrainNoQuantMonitorWeightsActivationsRanges(T, L, A_q, Q_d)
3: M = TrainWithQuantization(T, L, M, TensorMinMaxes, Q_train_n)
4: return Eval(M, Q_train_n, TensorMinMaxes)
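As a hedged end-to-end sketch of the Algorithm 1 pipeline, the snippet below chains training, weight quantization, and evaluation. The `train` and `evaluate` hooks are placeholders standing in for whatever RL stack is in use, not QuaRL's actual API:

```python
import numpy as np

def quantize_int8_roundtrip(W, n_bits=8):
    """Quantize-dequantize round trip of the uniform affine scheme above;
    the zero-point z cancels, leaving delta * floor(W / delta)."""
    delta = (abs(min(W.min(), 0.0)) + abs(max(W.max(), 0.0))) / (2 ** n_bits)
    return delta * np.floor(W / max(delta, 1e-12))

def quantize_fp16_roundtrip(W):
    """IEEE-754 half-precision rounding, then back to fp32 for inference."""
    return W.astype(np.float16).astype(np.float32)

def post_training_quantization(train, evaluate, n_bits):
    """Algorithm 1 sketch: train in full precision, quantize the weights,
    then evaluate the quantized policy. `train` and `evaluate` are
    caller-supplied placeholders: train() returns a list of weight arrays,
    evaluate(weights) returns the mean episode reward of the policy."""
    weights = train()
    q = quantize_int8_roundtrip if n_bits == 8 else quantize_fp16_roundtrip
    return evaluate([q(W) for W in weights])
```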
This paper investigates the impact of using a reduced precision (i.e., quantization) in different deep reinforcement learning (DRL) algorithms. It shows that overall, reducing the precision of the neural network in DRL algorithms from 32 bits to 16 or 8 bits doesn't have much effect on the quality of the learned policy. It also shows how this quantization leads to a reduced memory cost and faster training and inference times.
SP:0cfa52672cf34ffafece1171e48d6c344645dcf3
Training and deployment of DRL models is expensive. Quantization has proven useful in supervised learning; however, it has yet to be tested thoroughly in DRL. This paper investigates whether quantization can be applied in DRL toward better resource usage (compute, energy) without harming model quality. Both quantization-aware training (via fake quantization) and post-training quantization are investigated. The work demonstrates that policies can be reduced to 6-8 bits without quality loss. The paper indicates that quantization can indeed lower resource consumption without quality decline in realistic DRL tasks and for various algorithms.
SP:0cfa52672cf34ffafece1171e48d6c344645dcf3
A critical analysis of self-supervision, or what we can learn from a single image
1 INTRODUCTION. Despite tremendous progress in supervised learning, learning without external supervision remains difficult. Self-supervision has recently emerged as one of the most promising approaches to address this limitation. Self-supervision builds on the fact that convolutional neural networks (CNNs) transfer well between tasks (Shin et al., 2016; Oquab et al., 2014; Girshick, 2015; Huh et al., 2016). The idea then is to pre-train networks via pretext tasks that do not require expensive manual annotations and can be automatically generated from the data itself. Once pre-trained, networks can be applied to a target task by using only a modest amount of labelled data. Early successes in self-supervision have encouraged authors to develop a large variety of pretext tasks, from colorization to rotation estimation and image autoencoding. Recent papers have shown performance competitive with supervised learning by training complex neural networks on very large image datasets. Nevertheless, for a given model complexity, pre-training on an off-the-shelf annotated image dataset such as ImageNet remains much more efficient. In this paper, we aim to investigate the effectiveness of current self-supervised approaches by characterizing how much information they can extract from a given dataset of images. Since deep networks learn a hierarchy of representations, we further break down this investigation on a per-layer basis. We are motivated by the fact that the first few layers of most networks extract low-level information (Yosinski et al., 2014), and thus learning them may not require the high-level semantic information captured by manual labels. Concretely, in this paper we answer the following simple question: "is self-supervision able to exploit the information contained in a large number of images in order to learn different parts of a neural network?" We contribute two key findings. First, we show that as little as a single image is sufficient, when combined with self-supervision and data augmentation, to learn the first few layers of standard deep networks as well as using millions of images and full supervision (Figure 1). Hence, while self-supervised learning works well for these layers, this may be due more to the limited complexity of such features than to the strength of the supervisory technique. This also confirms the intuition that the early layers in a convolutional network amount to low-level feature extractors, analogous to early learned and hand-crafted features for visual recognition (Olshausen & Field, 1997; Lowe, 2004; Dalal & Triggs, 2005). Finally, it demonstrates the importance of image transformations in learning such low-level features, as opposed to image diversity.[1] Our second finding is about the deeper layers of the network. For these, self-supervision remains inferior to strong supervision even if millions of images are used for training. Our finding is that this is unlikely to change with the addition of more data.

[Figure 1: Single-image self-supervision. Linear classifier performance on ImageNet, as a percentage of supervised performance, at conv1-conv5, for Random, RotNet, 1-RotNet, BiGAN, 1-BiGAN, DeepCluster and 1-DeepCluster. Several self-supervision methods can be used to train the first few layers of a deep neural network using a single training image, such as Image A, B or even C, provided that sufficient data augmentation is used.]

[1] Example applications that only rely on low-level feature extractors include template matching (Kat et al., 2018; Talmi et al., 2017) and style transfer (Gatys et al., 2016; Johnson et al., 2016), which currently rely on pre-training with millions of images.
Our second finding is about the deeper layers of the network. For these, self-supervision remains inferior to strong supervision even if millions of images are used for training, and our finding is that this is unlikely to change with the addition of more data. In particular, we show that training these layers with self-supervision and a single image already achieves as much as two thirds of the performance that can be achieved by using a million different images. We show that these conclusions hold true for three different self-supervised methods, BiGAN (Donahue et al., 2017), RotNet (Gidaris et al., 2018) and DeepCluster (Caron et al., 2018), which are representative of the spectrum of techniques that are currently popular. We find that performance as a function of the amount of data is dependent on the method, but all three methods can indeed leverage a single image to learn the first few layers of a deep network almost "perfectly". Overall, while our results do not improve self-supervision per se, they help to characterize the limitations of current methods and to better focus on the important open challenges.

2 RELATED WORK.

Our paper relates to three broad areas of research: (a) self-supervised/unsupervised learning, (b) learning from a single sample, and (c) designing/learning low-level feature extractors. We discuss closely related work for each.

Self-supervised learning: A wide variety of proxy tasks, requiring no manual annotations, have been proposed for the self-training of deep convolutional neural networks. These methods use various cues and tasks, namely in-painting (Pathak et al., 2016), patch context and jigsaw puzzles (Doersch et al., 2015; Noroozi & Favaro, 2016; Noroozi et al., 2018; Mundhenk et al., 2017), clustering (Caron et al., 2018), noise-as-targets (Bojanowski & Joulin, 2017), colorization (Zhang et al., 2016; Larsson et al., 2017), generation (Jenni & Favaro, 2018; Ren & Lee, 2018; Donahue et al., 2017), geometry (Dosovitskiy et al., 2016; Gidaris et al., 2018) and counting (Noroozi et al., 2017). The idea is that the pretext task can be constructed automatically and easily from images alone. Thus, methods often modify information in the images and require the network to recover it. Inpainting and colorization techniques fall in this category. However, these methods have the downside that the features are learned on modified images, which potentially harms generalization to unmodified ones. For example, colorization uses a grayscale image as input, so the network cannot learn to extract color information, which can be important for other tasks.

Slightly less related are methods that use additional information to learn features. Here, temporal information is often used in the form of videos. Typical pretext tasks are based on temporal context (Misra et al., 2016; Wei et al., 2018; Lee et al., 2017; Sermanet et al., 2018), spatio-temporal cues (Isola et al., 2015; Gao et al., 2016; Wang et al., 2017), foreground-background segmentation via video segmentation (Pathak et al., 2017), optical flow (Gan et al., 2018; Mahendran et al., 2018), future-frame synthesis (Srivastava et al., 2015), audio prediction from video (de Sa, 1994; Owens et al., 2016),
audio-video alignment (Arandjelović & Zisserman, 2017), ego-motion estimation (Jayaraman & Grauman, 2015), slow feature analysis with higher-order temporal coherence (Jayaraman & Grauman, 2016), transformation between frames (Agrawal et al., 2015) and patch tracking in videos (Wang & Gupta, 2015). Since we are interested in learning features from as little data as one image, we cannot make use of methods that rely on video input.

Our contribution inspects three unsupervised feature learning methods that use very different means of extracting information from the data: BiGAN (Donahue et al., 2017) utilizes a generative adversarial task, RotNet (Gidaris et al., 2018) exploits the photographic bias in the dataset, and DeepCluster (Caron et al., 2018) learns feature representations that are stable under a number of image transformations via proxy labels obtained from clustering. These are described in more detail in the Methods section.
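To make one of these concrete, a minimal sketch of the RotNet pretext task is given below: every image in a batch is rotated by 0, 90, 180 and 270 degrees, and the network must classify which rotation was applied, so the labels come for free. The training-step wrapper and optimizer usage are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

def make_rotation_batch(images):
    """Build the 4-way rotation pretext task: rotate each (N, C, H, W) image
    by 0/90/180/270 degrees; the rotation index is the free label."""
    rotated = [torch.rot90(images, k, dims=(2, 3)) for k in range(4)]
    labels = [torch.full((images.size(0),), k, dtype=torch.long)
              for k in range(4)]
    return torch.cat(rotated), torch.cat(labels)

def rotnet_step(net, opt, images):
    """One self-supervised step; `net` is assumed to end in a 4-way head."""
    x, y = make_rotation_batch(images)
    loss = nn.functional.cross_entropy(net(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```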
Learning from a single sample: In some applications of computer vision, the bold idea of learning from a single sample comes out of necessity. For general object tracking, methods such as max-margin correlation filters (Rodriguez et al., 2013) learn robust tracking templates from a single sample of the patch. A single image can also be used to learn and interpolate multi-scale textures with a GAN framework (Rott Shaham et al., 2019). Single-sample learning was pursued by the semi-parametric exemplar SVM model (Malisiewicz et al., 2011), which learns one SVM per positive sample, separating it from all negative patches mined from the background. While only one sample is used for the positive set, the negative set consists of thousands of images and is a necessary component of the method. Exemplar LDA (Hariharan et al., 2012) approximated the negative space with a multi-dimensional Gaussian. These SVMs, one per positive sample, are pooled together using max aggregation. We differ from both of these approaches in that we do not use a large collection of negative images to train our model. Instead, we restrict ourselves to a single image or a few images with a systematic augmentation strategy.

Classical learned and hand-crafted low-level feature extractors: Learning and hand-crafting features pre-dates modern deep learning approaches and self-supervision techniques. For example, the classical work of Olshausen & Field (1997) shows that edge-like filters can be learned via sparse coding of just 10 natural scene images. SIFT (Lowe, 2004) and HOG (Dalal & Triggs, 2005) were used extensively before the advent of convolutional neural networks and, in many ways, they resemble the first layers of these networks. The scattering transform of Bruna & Mallat (2013) and Oyallon et al. (2017) is a handcrafted design that aims at replacing at least the first few layers of a deep network. While these results show that effective low-level features can be handcrafted, this is insufficient to clarify the power and limitations of self-supervision in deep networks. For instance, it is not obvious whether deep networks can learn better low-level features than these, how many images may be required to learn them, and how effective self-supervision may be in doing so. Indeed, as we also show in the experiments, replacing the low-level layers of a convolutional network with handcrafted features such as those of Oyallon et al. (2017) may still decrease the overall performance of the model. Furthermore, this says little about deeper layers, which we also investigate.

In this work we show that current deep learning methods learn slightly better low-level representations than hand-crafted features such as the scattering transform. Additionally, these representations can be learned from a single image with augmentations and without supervision. The results show that current self-supervised learning approaches using one million images yield only relatively small gains compared to what can be achieved from one image and augmentations, and they motivate a renewed focus on augmentations and on incorporating prior knowledge into feature extractors.

3 METHODS.

We first discuss our data and data augmentation strategy (section 3.1) and then summarize the three different methods for unsupervised feature learning used in the experiments (section 3.2).
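To give a flavor of the augmentation strategy of section 3.1, the sketch below manufactures an arbitrarily large surrogate training set from a single source image by composing random crops, rescaling, rotation, color jitter and additive noise. The specific transform parameters are illustrative assumptions, not the paper's exact recipe.

```python
import torch
from torch.utils.data import Dataset
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomResizedCrop(64, scale=(0.05, 1.0)),     # crop + rescale
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(15),
    transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4),
    transforms.ToTensor(),
    transforms.Lambda(lambda t: (t + 0.01 * torch.randn_like(t)).clamp(0, 1)),
])

class SingleImageDataset(Dataset):
    """A dataset of `length` random augmentations of one PIL source image.
    Labels are dummies: the pretext task manufactures its own targets."""
    def __init__(self, image, length):
        self.image, self.length = image, length
    def __len__(self):
        return self.length        # e.g. the size of the reference dataset
    def __getitem__(self, idx):
        return augment(self.image), 0
```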
The paper studies self-supervised learning from very few unlabeled images, down to the extreme case where only a single image is used for training. From the few/single image(s) available for training, a data set of the same size as an unmodified reference data set (ImageNet, CIFAR-10/100) is generated through heavy data augmentation (cropping, scaling, rotation, contrast changes, adding noise). Three popular self-supervised learning algorithms are then trained on these data sets, namely (Bi)GAN, RotNet, and DeepCluster, and the linear probing accuracy on different blocks is compared to that obtained by training the same methods on the reference data sets. The linear probing accuracy from the first few conv layers of the network trained on the single/few-image data set is found to be comparable to or better than that of the same model trained on the full reference data set.
SP:8283eb652046558e12c67447dddebcb52ee9de94
This paper explores self-supervised learning in the low-data regime, comparing results to self-supervised learning on larger datasets. BiGAN, RotNet, and DeepCluster serve as the reference self-supervised methods. It argues that early layers of a convolutional neural network can be effectively learned from a single source image, with data augmentation. A performance gap exists for deeper layers, suggesting that larger datasets are required for self-supervised learning of useful filters in deeper network layers.
SP:8283eb652046558e12c67447dddebcb52ee9de94
AutoSlim: Towards One-Shot Architecture Search for Channel Numbers
1 INTRODUCTION.

The channel configuration (a.k.a. filter numbers or channel numbers) of a neural network plays a critical role in its affordability on resource-constrained platforms, such as mobile phones, wearables and Internet of Things (IoT) devices. The most common constraints (Liu et al., 2017b; Huang et al., 2017; Wang et al., 2017; Han et al., 2015a), i.e., latency, FLOPs and runtime memory footprint, are all bound to the number of channels. For example, in a single convolution or fully-connected layer, the FLOPs (number of multiply-adds) increase linearly with the number of output channels. The memory footprint can also be reduced (Sandler et al., 2018) by reducing the number of channels in bottleneck convolutions for most vision applications (Sandler et al., 2018; Howard et al., 2017; Ma et al., 2018; Zhang et al., 2017b).

Despite its importance, the number of channels has been chosen mostly based on heuristics. LeNet5 (LeCun et al., 1998) selected 6 channels in its first convolution layer, which are then projected to 16 channels after sub-sampling. AlexNet (Krizhevsky et al., 2012) adopted five convolutions with channels equal to 96, 256, 384, 384 and 256. A commonly used heuristic, the "half size, double channel" rule, was introduced in VGG nets (Simonyan & Zisserman, 2014), if not earlier: when the spatial size of the feature map is halved, the number of filters is doubled. This heuristic has been more or less followed in subsequent network architecture designs, including ResNets (He et al., 2016; Xie et al., 2017), Inception nets (Szegedy et al., 2015; 2016; 2017), MobileNets (Sandler et al., 2018; Howard et al., 2017) and networks for many vision applications. Other heuristics have also been explored. For example, the pyramidal rule (Han et al., 2017; Zhang et al., 2017a) suggested gradually increasing the channels in all convolutions, layer by layer, regardless of spatial size. Figure 1 visually summarizes these heuristics for setting channel numbers in a neural network.

Beyond the macro-level heuristics across the entire network, recent works (Sandler et al., 2018; He et al., 2016; Zhang et al., 2017a; Tan et al., 2018; Cai et al., 2018) have also dug into the channel configuration of micro-level building blocks (a network building block is usually composed of several 1×1 and 3×3 convolutions). These micro-level heuristics have led to better speed-accuracy trade-offs. The first of its kind, the bottleneck residual block, was introduced in ResNet (He et al., 2016). It is composed of 1×1, 3×3, and 1×1 convolutions, where the 1×1 layers are responsible for reducing and then restoring dimensions, leaving the 3×3 layer a bottleneck (4× reduction). MobileNet v2 (Sandler et al., 2018), however, argued that the bottleneck design is not efficient and proposed the inverted residual block, where 1×1 layers are used for expanding features first (6× expansion) and then projecting back after an intermediate 3×3 depthwise convolution. Furthermore, MNasNet (Tan et al., 2018) and ProxylessNAS nets (Cai et al., 2018) included a 3× expansion version of the inverted residual block in their search space, and achieved even better accuracy under similar runtime latency. Apart from these human-designed heuristics, efforts to automatically optimize the channel configuration have been made explicitly or implicitly.
A recent work (Liu et al., 2018c) suggested that many network pruning methods (Liu et al., 2017b; Li et al., 2016; Luo et al., 2017; He et al., 2017; Huang & Wang, 2018; Han et al., 2015b) can be thought of as performing network architecture search for channel numbers. Liu et al. (2018c) showed that training these pruned architectures from scratch leads to similar or even better performance than fine-tuning and pruning from a large model. More recently, MNasNet (Tan et al., 2018) proposed to directly search network architectures, including filter sizes, using reinforcement learning algorithms (Schulman et al., 2017; Heess et al., 2017). Although the search is performed on a factorized hierarchical search space, massive network samples and computational cost (Tan et al., 2018) are required to find an optimized network architecture.

In this work, we study how to set the channel numbers in a neural network to achieve better accuracy under constrained resources. To start, the first and most brute-force approach that comes to mind is exhaustive search: training all possible channel configurations of a deep neural network for full epochs (e.g., MobileNets (Sandler et al., 2018; Howard et al., 2017) are trained for approximately 480 epochs on ImageNet), and then simply selecting the best performers that satisfy the efficiency constraints. However, this is undoubtedly impractical, since the cost of the brute-force approach is too high. For example, consider an 8-layer convolutional network and a search space limited to 10 candidate channel numbers (e.g., 32, 64, ..., 320) for each layer. This already yields 10^8 candidate network architectures in total.

To address this challenge, we present a simple one-shot solution, AutoSlim. Our main idea lies in training a slimmable network (Yu et al., 2018) to approximate the network accuracy of different channel configurations. Yu et al. (2018); Yu & Huang (2019) introduced slimmable networks that can run at arbitrary width with performance equal to or even better than the same architecture trained individually. Although the original motivation was to provide instant and adaptive accuracy-efficiency trade-offs, we find that slimmable networks are especially suitable as benchmark performance estimators for several reasons: (1) training slimmable models (using the sandwich rule (Yu & Huang, 2019)) is much faster than the brute-force approach; (2) a trained slimmable model can execute at arbitrary width, which can be used to approximate the relative performance of different channel configurations; (3) the same trained slimmable model can be applied to the search for optimal channels under different resource constraints.

In AutoSlim, we first train a slimmable model for a few epochs (e.g., 10% to 20% of the full training epochs) to quickly obtain a benchmark performance estimator. We then iteratively evaluate the trained slimmable model and greedily slim the layer with the minimal accuracy drop on a validation set (for ImageNet, we randomly hold out 50K samples of the training set as the validation set). After this single pass, we obtain the optimized channel configurations under different resource constraints (e.g., network FLOPs limited to 150M, 300M and 600M). Finally, we train these optimized architectures individually or jointly (as a single slimmable network) for full training epochs.
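The greedy pass described above can be summarized in a few lines. In this sketch, `evaluate` (validation accuracy of the trained slimmable model at a given channel configuration) and `flops` (cost of a configuration) are assumed helper functions, and slimming 10% of a layer's channels per step is an illustrative choice rather than the paper's setting.

```python
def autoslim_search(slim_net, val_loader, channels, flops_target,
                    evaluate, flops, step=0.1):
    """Greedy single-pass channel search: repeatedly slim whichever layer
    costs the least validation accuracy, until the FLOPs budget is met."""
    channels = list(channels)
    while flops(channels) > flops_target:
        best_layer, best_acc = None, -1.0
        for layer in range(len(channels)):
            trial = list(channels)
            trial[layer] = max(1, int(trial[layer] * (1 - step)))
            acc = evaluate(slim_net, val_loader, trial)  # no retraining needed
            if acc > best_acc:                           # minimal accuracy drop
                best_layer, best_acc = layer, acc
        channels[best_layer] = max(1, int(channels[best_layer] * (1 - step)))
    return channels  # train this configuration from scratch for full epochs
```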
We experiment with various networks, including MobileNet v1, MobileNet v2, ResNet-50 and the RL-searched MNasNet, on the challenging setting of 1000-class ImageNet classification. AutoSlim achieves better results (with much lower search cost) compared with three baselines: (1) the default channel configurations of these networks, (2) channel pruning methods on the same network architectures (Luo et al., 2017; He et al., 2017; Yang et al., 2018) and (3) reinforcement-learning-based architecture search methods (He et al., 2018; Tan et al., 2018).

2 RELATED WORK.

2.1 ARCHITECTURE SEARCH FOR CHANNEL NUMBERS.

In this part, we mainly discuss previous methods for automatic architecture search over channel numbers. Human-designed heuristics were introduced in Section 1 and are visually summarized in Figure 1.

Channel pruning. Channel pruning (a.k.a. network slimming) methods (Liu et al., 2017b; He et al., 2017; Ye et al., 2018; Huang et al., 2018; Lee et al., 2018) aim at reducing the effective channels of a large neural network to speed up its inference. Training-based, inference-time and initialization-time pruning methods have all been proposed (Liu et al., 2017b; He et al., 2017; Ye et al., 2018; Huang et al., 2018; Lee et al., 2018; Frankle & Carbin, 2018) in the literature. Here we selectively review two methods (Liu et al., 2017b; He et al., 2017). He et al. (2017) proposed an inference-time approach based on an iterative two-step algorithm: LASSO-based channel selection and least-squares feature reconstruction. Liu et al. (2017b), on the other hand, trained neural networks with an ℓ1 regularization on the scaling factors in batch normalization (BN) (Ioffe & Szegedy, 2015). By pushing the factors towards zero, insignificant channels can be identified and removed. In a recent work (Liu et al., 2018c), Liu et al. suggested that many network pruning methods (Liu et al., 2017b; Li et al., 2016; Luo et al., 2017; He et al., 2017; Huang & Wang, 2018; Han et al., 2015b) can be thought of as performing network architecture search for channel numbers. In experiments, Liu et al. (2018c) showed that training these pruned architectures from scratch leads to similar or even better performance than iteratively fine-tuning and pruning a large model. Thus, Liu et al. (2018c) concluded that training a large, over-parameterized model is not necessary to obtain an efficient final model. In our work, we take channel pruning methods (Luo et al., 2017; He et al., 2017; 2018) as one of the baselines.
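For concreteness, the channel-selection rule of network slimming (Liu et al., 2017b) can be sketched as follows: after training with the ℓ1 penalty on the BN scales, channels whose scale factor falls below a global threshold are marked for removal. The keep ratio and the mask-only treatment (no actual surgery on the convolution weights) are simplifications for illustration.

```python
import torch
import torch.nn as nn

def bn_channel_masks(model, keep_ratio=0.7):
    """Rank all channels by |gamma| of their BatchNorm scale factor and keep
    the largest `keep_ratio` fraction (network-slimming style selection)."""
    gammas = torch.cat([m.weight.detach().abs().flatten()
                        for m in model.modules()
                        if isinstance(m, nn.BatchNorm2d)])
    threshold = torch.quantile(gammas, 1.0 - keep_ratio)
    return {name: m.weight.detach().abs() >= threshold   # True = keep channel
            for name, m in model.named_modules()
            if isinstance(m, nn.BatchNorm2d)}
```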
Neural Architecture Search (NAS). Recently there has been a growing interest in automating neural network architecture design (Tan et al., 2018; Cai et al., 2018; Elsken et al., 2018; Bender et al., 2018; Pham et al., 2018; Zoph et al., 2018; Liu et al., 2018a; 2017a; 2018b; Brock et al., 2017). Significant improvements have been achieved by these automatically searched architectures in many vision and language tasks (Zoph et al., 2018; Zoph & Le, 2016). However, most neural architecture search methods (Elsken et al., 2018; Bender et al., 2018; Pham et al., 2018; Zoph et al., 2018; Liu et al., 2018a; 2017a; 2018b; Brock et al., 2017) did not include the channel configuration in their search space, and instead applied human-designed heuristics.

More recently, RL-based search algorithms have also been applied to prune channels (He et al., 2018) or to search for filter numbers (Tan et al., 2018) directly. He et al. (2018) proposed AutoML for Model Compression (AMC), which leveraged reinforcement learning (deep deterministic policy gradient (Lillicrap et al., 2015)) to provide the model compression policy. MNasNet (Tan et al., 2018) proposed to directly search network architectures, including filter sizes, for mobile devices. During the search, each sampled model is trained for 5 epochs using an aggressive learning rate schedule and evaluated on a 50K-image validation set. In total, Tan et al. sampled about 8,000 models during architecture search. Further, ProxylessNAS (Cai et al., 2018) proposed to directly learn architectures for large-scale target tasks and target hardware platforms, based on DARTS (Liu et al., 2018b). For each residual block, ProxylessNAS (Cai et al., 2018) followed the channel configuration of MNasNet (Tan et al., 2018), while inside each block the choices can be the ×3 or ×6 version of the inverted residual block. The memory consumption issue (Cai et al., 2018; Liu et al., 2018b) was addressed by binarizing the architecture parameters and forcing only one path to be active.
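Since all of these methods trade accuracy against a FLOPs budget, it is worth recalling how the multiply-add count of a convolution scales with its channel configuration, as noted in the introduction. The toy calculator below illustrates this; the example layer shapes are assumptions in the style of MobileNet, not figures from the paper.

```python
def conv_macs(c_in, c_out, kernel, h_out, w_out, groups=1):
    """Multiply-adds of one convolution: linear in the output channels,
    which is why channel numbers dominate the FLOPs budget."""
    return (c_in // groups) * c_out * kernel * kernel * h_out * w_out

# First layer of a MobileNet-like model: 3 -> 32 channels, 3x3 conv,
# stride 2 on a 224x224 input (output 112x112).
print(conv_macs(3, 32, 3, 112, 112))               # 10,838,016 multiply-adds
# Depthwise 3x3 on 32 channels at 112x112 (groups == channels):
print(conv_macs(32, 32, 3, 112, 112, groups=32))   # 3,612,672
```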
In this paper, the authors propose a method to perform architecture search over the number of channels in convolutional layers. The proposed method, called AutoSlim, is a one-shot approach based on the previous work on Slimmable Networks [2,3]. The authors test the proposed method on a variety of architectures on the ImageNet dataset.
SP:5abcf6f6bd3c0079e6f942f614949a3f566afed8
This paper proposes a simple, one-shot approach to neural architecture search over the number of channels, in order to achieve better accuracy. Rather than training many network samples, the proposed method trains a single slimmable network to approximate the accuracy of different channel configurations. The experimental results show that the proposed method achieves better performance than the existing baseline methods.
SP:5abcf6f6bd3c0079e6f942f614949a3f566afed8
Generalized Clustering by Learning to Optimize Expected Normalized Cuts
We introduce a novel end-to-end approach for learning to cluster in the absence of labeled examples. Our clustering objective is based on optimizing normalized cuts, a criterion which measures both intra-cluster similarity and inter-cluster dissimilarity. We define a differentiable loss function equivalent to the expected normalized cuts. Unlike much of the work in unsupervised deep learning, our trained model directly outputs final cluster assignments, rather than embeddings that need further processing to be usable. Our approach generalizes to unseen datasets across a wide variety of domains, including text and images. Specifically, we achieve state-of-the-art results on popular unsupervised clustering benchmarks (e.g., MNIST, Reuters, CIFAR-10, and CIFAR-100), outperforming the strongest baselines by up to 10.9%. Our generalization results are superior (by up to 21.9%) to those of the recent top-performing clustering approach with the ability to generalize.

1 INTRODUCTION.

Clustering unlabeled data is an important problem from both a scientific and a practical perspective. As technology plays a larger role in daily life, the volume of available data has exploded. However, labeling this data remains very costly and often requires domain expertise. Therefore, unsupervised clustering methods are one of the few viable approaches to gain insight into the structure of these massive unlabeled datasets.

One of the most popular clustering methods is spectral clustering (Shi & Malik, 2000; Ng et al., 2002; Von Luxburg, 2007), which first embeds the similarity of each pair of data points in the Laplacian's eigenspace and then uses k-means to generate clusters from it. Spectral clustering not only outperforms commonly used clustering methods such as k-means (Von Luxburg, 2007), but also allows us to directly minimize the pairwise distance between data points and solve for the optimal node embeddings analytically. Moreover, it has been shown that the eigenvectors of the normalized Laplacian matrix can be used to find an approximate solution to the well-known normalized cuts problem (Ng et al., 2002; Von Luxburg, 2007).

In this work, we introduce CNC, a framework for Clustering by learning to optimize expected Normalized Cuts. We show that by directly minimizing a continuous relaxation of the normalized cuts problem, CNC enables an end-to-end learning approach that outperforms top-performing clustering approaches. We demonstrate that our approach can indeed produce lower normalized cut values than baseline methods such as SpectralNet, which consequently results in better clustering accuracy.

Let us motivate CNC through a simple example. In Figure 1, we want to cluster 6 images from the CIFAR-10 dataset into two clusters. The affinity graph for these data points is shown in Figure 1(a) (the details of constructing such a graph are discussed in Section 4.2). In this example, it is obvious that the optimal clustering results from cutting the edge connecting the two triangles. Cutting this edge yields the optimal value of the normalized cuts objective. In CNC, we define a new differentiable loss function equivalent to the expected normalized cuts objective. We train a deep learning model to minimize the proposed loss in an unsupervised manner, without the need for any labeled datasets. Our trained model directly returns the probabilities of belonging to each cluster (Figure 1(b)).
In this example, the optimal normalized cut is 0.286 (Equation 1), and as we can see, the CNC loss also converges to this value (Figure 1(c)). We compare the performance of CNC to several learning-based clustering approaches (SpectralNet (Shaham et al., 2018), DEC (Xie et al., 2016), DCN (Yang et al., 2017), VaDE (Jiang et al., 2017), DEPICT (Ghasedi Dizaji et al., 2017), IMSAT (Hu et al., 2017), and IIC (Ji et al., 2019)) on four datasets: MNIST, Reuters, CIFAR-10, and CIFAR-100. Our results show up to 10.9% improvement over the baselines. Moreover, generalizing spectral embeddings to unseen data points, a task commonly referred to as out-of-sample extension (OOSE), is non-trivial (Bengio et al., 2003; Belkin et al., 2006; Mendoza Quispe et al., 2016). Our results confirm that CNC generalizes to unseen data. Our generalization results are superior (by up to 21.9%) to those of SpectralNet (Shaham et al., 2018), the recent top-performing clustering approach with the ability to generalize.

2 RELATED WORK.

Recent deep learning approaches to clustering attempt to embed the input data into a form that is amenable to clustering by k-means or Gaussian mixture models. Yang et al. (2017) and Xie et al. (2016) focused on learning representations for clustering. To find clustering-friendly latent representations and to better cluster the data, DCN (Yang et al., 2017) proposed a joint dimensionality reduction (DR) and k-means clustering approach in which DR is accomplished by learning a deep neural network. DEC (Xie et al., 2016) simultaneously learns cluster assignments and the underlying feature representation by iteratively updating a target distribution to sharpen cluster associations. Several other approaches rely on a variational autoencoder that utilizes a Gaussian mixture prior (Jiang et al., 2017; Dilokthanakul et al., 2016; Hu et al., 2017; Ji et al., 2019; Ben-Yosef & Weinshall, 2018). These approaches are mainly based on data augmentation, where the network is trained to maximize the mutual information between inputs and predicted clusters, while regularizing the network so that the cluster assignment of the data points is consistent with the assignment of the augmented points.

Different clustering objectives, such as self-balanced k-means and balanced min-cut, have also been exhaustively studied (Liu et al., 2017; Chen et al., 2017; Chang et al., 2014). One of the most effective techniques is spectral clustering, which first generates node embeddings in the eigenspace of the graph Laplacian, and then applies k-means clustering to these vectors (Shi & Malik, 2000; Ng et al., 2002; Von Luxburg, 2007). To address the fact that clusters with the lowest graph conductance tend to have few nodes (Leskovec, 2009; Zhang & Rohe, 2018), Zhang & Rohe (2018) proposed regularized spectral clustering to encourage more balanced clusters.

Generalizing clustering to unseen nodes and graphs is non-trivial (Bengio et al., 2003; Belkin et al., 2006; Mendoza Quispe et al., 2016). A recent work, SpectralNet (Shaham et al., 2018), takes a deep learning approach to spectral clustering that generalizes to unseen data points. This approach first learns embeddings of the similarity of each pair of data points in the Laplacian's eigenspace and then applies k-means to those embeddings to generate clusters.
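For reference, the classical pipeline that SpectralNet emulates, and that CNC replaces with a direct loss, can be sketched as follows. This is the textbook normalized spectral clustering recipe (Ng et al., 2002), not code from either paper.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def spectral_clustering(W, g):
    """Normalized spectral clustering: embed nodes with the bottom
    eigenvectors of the normalized Laplacian, then run k-means."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L_sym = np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt
    _, vecs = eigh(L_sym)                   # eigenvalues in ascending order
    U = vecs[:, :g]                         # g smallest eigenvectors
    U = U / np.linalg.norm(U, axis=1, keepdims=True)   # row-normalize
    return KMeans(n_clusters=g, n_init=10).fit_predict(U)
```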
Unlike SpectralNet, we propose an end-to-end learning approach with a differentiable loss that directly minimizes the normalized cuts. We show that our approach can indeed produce lower normalized cut values than baseline methods such as SpectralNet, which consequently results in better clustering accuracy. Our evaluation results show that CNC improves generalization accuracy on unseen data points by up to 21.9%.

3 PRELIMINARIES.

Since the CNC objective is based on optimizing normalized cuts, in this section we briefly review the formal definition of this metric.

3.1 FORMAL DEFINITION OF NORMALIZED CUTS.

Let $G = (V, E, W)$ be a graph, where $V = \{v_i\}$ and $E = \{e(v_i, v_j) \mid v_i \in V, v_j \in V\}$ are the sets of nodes and edges in the graph and $w_{ij} \in W$ is the weight of edge $e(v_i, v_j)$. Let $n$ be the number of nodes. A graph $G$ can be clustered into $g$ disjoint sets $S_1, S_2, \ldots, S_g$, where the union of the nodes in those sets is $V$ ($\bigcup_{k=1}^{g} S_k = V$) and each node belongs to only one set ($S_i \cap S_j = \emptyset$ for $i \neq j$), by simply removing the edges connecting those sets. For example, in Figure 1(a), removing one edge forms two disjoint clusters.

Normalized cuts (Ncuts), which is defined based on the graph conductance, has been studied by Shi & Malik (2000) and Zhang & Rohe (2018); the cost of a cut that forms disjoint sets $S_1, S_2, \ldots, S_g$ is computed as:

$$\mathrm{Ncuts}(S_1, S_2, \ldots, S_g) = \sum_{k=1}^{g} \frac{\mathrm{cut}(S_k, \bar{S}_k)}{\mathrm{vol}(S_k, V)} \qquad (1)$$

where $\bar{S}_k$ represents the complement of $S_k$, i.e., $\bar{S}_k = \bigcup_{i \neq k} S_i$. Here $\mathrm{cut}(S_k, \bar{S}_k)$ is the total weight of the edges that are removed from $G$ in order to form the disjoint sets $S_k$ and $\bar{S}_k$, and $\mathrm{vol}(S_k, V)$ is the total weight of the edges whose end points ($v_i$ or $v_j$) belong to $S_k$:

$$\mathrm{cut}(S_k, \bar{S}_k) = \sum_{v_i \in S_k,\, v_j \in \bar{S}_k} w_{ij}, \qquad \mathrm{vol}(S_k, V) = \sum_{v_i \in S_k} \sum_{v_j \in V} w_{ij} \qquad (2)$$

Note that in Equation 2, $S_k$ and $\bar{S}_k$ are disjoint, i.e., $S_k \cap \bar{S}_k = \emptyset$, while in vol, $S_k \subset V$. In the running example (Figure 1), since the edge weights are one, $\mathrm{cut}(S_1, \bar{S}_1) = \mathrm{cut}(S_2, \bar{S}_2) = 1$, and $\mathrm{vol}(S_1, V) = \mathrm{vol}(S_2, V) = 2 + 2 + 3 = 7$. Thus $\mathrm{Ncuts}(S_1, S_2) = \frac{1}{7} + \frac{1}{7} = 0.286$. In this example one can see that such a clustering attains the minimum value of the normalized cuts. CNC aims to find a cut for which the normalized cuts value (Equation 1) is minimized.

4 CNC FRAMEWORK.

Finding the cluster assignments that minimize the normalized cuts is NP-complete, and an approximation to this problem based on the eigenvectors of the normalized graph Laplacian has been studied in (Shi & Malik, 2000; Zhang & Rohe, 2018). CNC, on the other hand, is a neural network framework for learning to cluster in the absence of labeled examples by directly minimizing a continuous relaxation of the normalized cuts. As shown in Algorithm 1, end-to-end training of CNC consists of two steps: (i) data point embedding (line 3) and (ii) clustering (lines 4-9). In the embedding step, the goal is to learn embeddings that capture the affinity of the data points, while the clustering step uses those embeddings to learn the CNC model and outputs the cluster assignments. Next, we first focus on the clustering step and introduce our new differentiable loss function for training the CNC model. Later, in Section 4.2, we discuss the details of the embedding step.
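A small sketch tying Equations 1 and 2 to the running example: the first function computes the normalized cut of a hard partition and recovers the optimal value 2/7 ≈ 0.286 on the two-triangle graph. The second is a differentiable relaxation with soft assignments, given in the spirit of the CNC loss, though the paper's exact formulation may differ.

```python
import numpy as np

def ncut(W, labels, g):
    """Normalized cuts of a hard partition (Equations 1 and 2)."""
    total = 0.0
    for k in range(g):
        in_k = labels == k
        cut_k = W[in_k][:, ~in_k].sum()   # weight of edges leaving S_k
        vol_k = W[in_k].sum()             # total edge weight incident to S_k
        total += cut_k / vol_k
    return total

def expected_ncut(Y, W):
    """A relaxation with soft assignments Y (n x g); replaces the hard
    indicator with cluster probabilities (not necessarily CNC's exact loss)."""
    d = W.sum(axis=1)                                 # node degrees
    exp_cut = ((Y.T @ W) * (1 - Y).T).sum(axis=1)     # E[cut] per cluster
    exp_vol = Y.T @ d                                 # E[vol] per cluster
    return (exp_cut / exp_vol).sum()

# Running example: two unit-weight triangles joined by a single edge.
W = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    W[i, j] = W[j, i] = 1.0
print(ncut(W, np.array([0, 0, 0, 1, 1, 1]), g=2))     # 0.2857... = 2/7
```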
This paper presents an end-to-end approach for clustering. The proposed model, called CNC, simultaneously learns a data embedding that preserves data affinity using Siamese networks, and clusters the data in the embedding space. The model is trained by minimizing a differentiable loss function derived from normalized cuts. As such, the embedding phase renders the data points amenable to spectral clustering.
SP:6c5368ae026fc1aaf92bdc208d90e4eec999575a
Generalized Clustering by Learning to Optimize Expected Normalized Cuts
We introduce a novel end-to-end approach for learning to cluster in the absence of labeled examples . Our clustering objective is based on optimizing normalized cuts , a criterion which measures both intra-cluster similarity as well as inter-cluster dissimilarity . We define a differentiable loss function equivalent to the expected normalized cuts . Unlike much of the work in unsupervised deep learning , our trained model directly outputs final cluster assignments , rather than embeddings that need further processing to be usable . Our approach generalizes to unseen datasets across a wide variety of domains , including text , and image . Specifically , we achieve state-of-the-art results on popular unsupervised clustering benchmarks ( e.g. , MNIST , Reuters , CIFAR-10 , and CIFAR-100 ) , outperforming the strongest baselines by up to 10.9 % . Our generalization results are superior ( by up to 21.9 % ) to the recent top-performing clustering approach with the ability to generalize . 1 INTRODUCTION . Clustering unlabeled data is an important problem from both a scientific and practical perspective . As technology plays a larger role in daily life , the volume of available data has exploded . However , labeling this data remains very costly and often requires domain expertise . Therefore , unsupervised clustering methods are one of the few viable approaches to gain insight into the structure of these massive unlabeled datasets . One of the most popular clustering methods is spectral clustering ( Shi & Malik , 2000 ; Ng et al. , 2002 ; Von Luxburg , 2007 ) , which first embeds the similarity of each pair of data points in the Laplacian ’ s eigenspace and then uses k-means to generate clusters from it . Spectral clustering not only outperforms commonly used clustering methods , such as k-means ( Von Luxburg , 2007 ) , but also allows us to directly minimize the pairwise distance between data points and solve for the optimal node embeddings analytically . Moreover , it is shown that the eigenvector of the normalized Laplacian matrix can be used to find the approximate solution to the well known normalized cuts problem ( Ng et al. , 2002 ; Von Luxburg , 2007 ) . In this work , we introduce CNC , a framework for Clustering by learning to optimize expected Normalized Cuts . We show that by directly minimizing a continuous relaxation of the normalized cuts problem , CNC enables end-to-end learning approach that outperforms top-performing clustering approaches . We demonstrate that our approach indeed can produce lower normalized cut values than the baseline methods such as SpectralNet , which consequently results in better clustering accuracy . Let us motivate CNC through a simple example . In Figure 1 , we want to cluster 6 images from CIFAR-10 dataset into two clusters . The affinity graph for these data points is shown in Figure 1 ( a ) ( details of constructing such graph is discussed in Section 4.2 ) . In this example , it is obvious that the optimal clustering is the result of cutting the edge connecting the two triangles . Cutting this edge will result in the optimal value for the normalized cuts objective . In CNC , we define a new differentiable loss function equivalent to the expected normalized cuts objective . We train a deep learning model to minimize the proposed loss in an unsupervised manner without the need for any labeled datasets . Our trained model directly returns the probabilities of belonging to each cluster ( Figure 1 ( b ) ) . 
In this example, the optimal normalized cuts value is 0.286 (Equation 1), and as we can see, the CNC loss also converges to this value (Figure 1(c)). We compare the performance of CNC to several learning-based clustering approaches (SpectralNet (Shaham et al., 2018), DEC (Xie et al., 2016), DCN (Yang et al., 2017), VaDE (Jiang et al., 2017), DEPICT (Ghasedi Dizaji et al., 2017), IMSAT (Hu et al., 2017), and IIC (Ji et al., 2019)) on four datasets: MNIST, Reuters, CIFAR-10, and CIFAR-100. Our results show up to 10.9% improvement over the baselines. Moreover, generalizing spectral embeddings to unseen data points, a task commonly referred to as out-of-sample extension (OOSE), is non-trivial (Bengio et al., 2003; Belkin et al., 2006; Mendoza Quispe et al., 2016). Our results confirm that CNC generalizes to unseen data: our generalization results are superior (by up to 21.9%) to those of SpectralNet (Shaham et al., 2018), the recent top-performing clustering approach with the ability to generalize.

2 RELATED WORK

Recent deep learning approaches to clustering attempt to embed the input data into a form that is amenable to clustering by k-means or Gaussian mixture models. DCN (Yang et al., 2017) and DEC (Xie et al., 2016) focus on learning representations for clustering. To find clustering-friendly latent representations and to better cluster the data, DCN proposes a joint dimensionality reduction (DR) and k-means clustering approach in which DR is accomplished by learning a deep neural network. DEC simultaneously learns cluster assignments and the underlying feature representation by iteratively updating a target distribution to sharpen cluster associations. Several other approaches rely on a variational autoencoder with a Gaussian mixture prior (Jiang et al., 2017; Dilokthanakul et al., 2016; Ben-Yosef & Weinshall, 2018). Others (Hu et al., 2017; Ji et al., 2019) are based on data augmentation, where the network is trained to maximize the mutual information between inputs and predicted clusters while being regularized so that the cluster assignment of each data point is consistent with the assignment of its augmented versions. Different clustering objectives, such as self-balanced k-means and balanced min-cut, have also been extensively studied (Liu et al., 2017; Chen et al., 2017; Chang et al., 2014).

One of the most effective techniques is spectral clustering, which first generates node embeddings in the eigenspace of the graph Laplacian and then applies k-means clustering to these vectors (Shi & Malik, 2000; Ng et al., 2002; Von Luxburg, 2007). To address the fact that clusters with the lowest graph conductance tend to have few nodes (Leskovec, 2009; Zhang & Rohe, 2018), Zhang & Rohe (2018) proposed regularized spectral clustering to encourage more balanced clusters. Generalizing clustering to unseen nodes and graphs is non-trivial (Bengio et al., 2003; Belkin et al., 2006; Mendoza Quispe et al., 2016). A recent work, SpectralNet (Shaham et al., 2018), takes a deep learning approach to spectral clustering that generalizes to unseen data points: it first learns embeddings of the similarity of each pair of data points in the Laplacian's eigenspace and then applies k-means to those embeddings to generate clusters.
Unlike SpectralNet, we propose an end-to-end learning approach with a differentiable loss that directly minimizes the normalized cuts. We show that our approach can indeed produce lower normalized cut values than baseline methods such as SpectralNet, which consequently results in better clustering accuracy. Our evaluation results show that CNC improves generalization accuracy on unseen data points by up to 21.9%.

3 PRELIMINARIES

Since the CNC objective is based on optimizing normalized cuts, in this section we briefly review the formal definition of this metric.

3.1 FORMAL DEFINITION OF NORMALIZED CUTS

Let $G = (V, E, W)$ be a graph, where $V = \{v_i\}$ is the set of nodes, $E = \{e(v_i, v_j) \mid v_i \in V, v_j \in V\}$ is the set of edges, and $w_{ij} \in W$ is the weight of edge $e(v_i, v_j)$. Let $n$ be the number of nodes. A graph $G$ can be clustered into $g$ disjoint sets $S_1, S_2, \ldots, S_g$, where the union of the nodes in those sets is $V$ ($\bigcup_{k=1}^{g} S_k = V$) and each node belongs to exactly one set ($S_i \cap S_j = \emptyset$ for $i \neq j$), by simply removing the edges connecting those sets. For example, in Figure 1(a), removing one edge forms two disjoint clusters.

Normalized cuts (Ncuts), which is defined based on the graph conductance, has been studied by Shi & Malik (2000) and Zhang & Rohe (2018); the cost of a cut that forms disjoint sets $S_1, S_2, \ldots, S_g$ is computed as:

$$\mathrm{Ncuts}(S_1, S_2, \ldots, S_g) = \sum_{k=1}^{g} \frac{\mathrm{cut}(S_k, \bar{S}_k)}{\mathrm{vol}(S_k, V)} \qquad (1)$$

where $\bar{S}_k$ represents the complement of $S_k$, i.e., $\bar{S}_k = \bigcup_{i \neq k} S_i$. $\mathrm{cut}(S_k, \bar{S}_k)$ is the total weight of the edges that must be removed from $G$ in order to separate $S_k$ from $\bar{S}_k$, and $\mathrm{vol}(S_k, V)$ is the total weight of the edges incident to nodes in $S_k$:

$$\mathrm{cut}(S_k, \bar{S}_k) = \sum_{v_i \in S_k,\, v_j \in \bar{S}_k} w_{ij}, \qquad \mathrm{vol}(S_k, V) = \sum_{v_i \in S_k} \sum_{v_j \in V} w_{ij} \qquad (2)$$

Note that in Equation 2, $S_k$ and $\bar{S}_k$ are disjoint, i.e., $S_k \cap \bar{S}_k = \emptyset$, while in $\mathrm{vol}$, $S_k \subset V$. In the running example (Figure 1), since all edge weights are one, $\mathrm{cut}(S_1, \bar{S}_1) = \mathrm{cut}(S_2, \bar{S}_2) = 1$ and $\mathrm{vol}(S_1, V) = \mathrm{vol}(S_2, V) = 2 + 2 + 3 = 7$. Thus $\mathrm{Ncuts}(S_1, S_2) = \frac{1}{7} + \frac{1}{7} = 0.286$. In this example one can see that this clustering yields the minimum value of the normalized cuts. CNC aims to find a cut for which the normalized cuts value (Equation 1) is minimized.

4 CNC FRAMEWORK

Finding the cluster assignments that minimize the normalized cuts is NP-complete; an approximation to this problem based on the eigenvectors of the normalized graph Laplacian has been studied in (Shi & Malik, 2000; Zhang & Rohe, 2018). CNC, on the other hand, is a neural network framework for learning to cluster in the absence of labeled examples by directly minimizing a continuous relaxation of the normalized cuts. As shown in Algorithm 1, end-to-end training of CNC consists of two steps: (i) data point embedding (line 3) and (ii) clustering (lines 4-9). In the embedding step, the goal is to learn embeddings that capture the affinity of the data points, while the clustering step uses those embeddings to learn the CNC model and outputs the cluster assignments. Below, we first focus on the clustering step and introduce our new differentiable loss function for training the CNC model; the details of the embedding step are discussed in Section 4.2.
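To make the running example concrete, here is a minimal numpy sketch that evaluates Equations 1 and 2 for a hard clustering of a small graph. The 6-node adjacency matrix below (two unit-weight triangles joined by a single bridge edge) is our assumption of the Figure 1 topology, chosen so that the numbers reproduce the $\frac{1}{7} + \frac{1}{7} = 0.286$ computation above.

```python
import numpy as np

# Assumed Figure 1 topology: two unit-weight triangles (nodes 0-2 and 3-5)
# joined by a single bridge edge (2, 3).
W = np.zeros((6, 6))
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
for i, j in edges:
    W[i, j] = W[j, i] = 1.0

def normalized_cuts(W, assignment, num_clusters):
    """Exact Ncuts of Equation 1 for a hard cluster assignment vector."""
    total = 0.0
    for k in range(num_clusters):
        in_k = assignment == k
        # cut(S_k, complement): weight of edges leaving cluster k (Equation 2).
        cut = W[in_k][:, ~in_k].sum()
        # vol(S_k, V): total weight of edges incident to nodes in cluster k.
        vol = W[in_k].sum()
        total += cut / vol
    return total

# Cutting the bridge edge assigns each triangle to its own cluster.
assignment = np.array([0, 0, 0, 1, 1, 1])
print(normalized_cuts(W, assignment, 2))  # 1/7 + 1/7 ~ 0.286
```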
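The clustering step itself can be sketched in the same spirit. The relaxation described above replaces the hard indicator of node $v_i$ belonging to $S_k$ in Equation 2 with a predicted probability $Y_{ik}$; taking expectations of the cut and volume under the resulting soft assignments yields a loss that is differentiable in the network's parameters. The PyTorch sketch below is our reading of that construction, not the authors' exact implementation: the two-layer head, optimizer settings, and full-batch training are illustrative assumptions, `embeddings` stands for the (float tensor) output of the embedding step of Algorithm 1, and `W` for the affinity matrix.

```python
import torch

def expected_normalized_cuts(Y, W, eps=1e-8):
    """Differentiable relaxation of Equation 1 (a sketch).

    Y : (n, g) soft assignments; Y[i, k] is the predicted probability that
        node i belongs to cluster k. With one-hot rows this reduces to the
        exact Ncuts computed above.
    W : (n, n) symmetric affinity matrix.
    """
    d = W.sum(dim=1)                                   # node degrees
    # E[cut(S_k, complement)] = sum_ij w_ij * Y_ik * (1 - Y_jk)
    exp_cut = torch.einsum('ik,ij,jk->k', Y, W, 1.0 - Y)
    # E[vol(S_k, V)] = sum_i d_i * Y_ik
    exp_vol = Y.t() @ d
    return (exp_cut / (exp_vol + eps)).sum()

def train_cnc_head(embeddings, W, num_clusters, steps=500, lr=1e-3):
    """Clustering step of Algorithm 1 with illustrative hyperparameters."""
    head = torch.nn.Sequential(
        torch.nn.Linear(embeddings.shape[1], 64),
        torch.nn.ReLU(),
        torch.nn.Linear(64, num_clusters),
    )
    opt = torch.optim.Adam(head.parameters(), lr=lr)
    for _ in range(steps):
        Y = torch.softmax(head(embeddings), dim=1)     # (n, g) soft assignments
        loss = expected_normalized_cuts(Y, W)
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():                              # final hard assignments
        return torch.softmax(head(embeddings), dim=1).argmax(dim=1)
```

We show the full-batch form only for clarity; in practice one would presumably train on minibatches or sampled subgraphs, with the affinity matrix restricted to the sampled nodes, so the method scales to datasets whose full affinity matrix does not fit in memory.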
The paper suggests a differentiable objective that can be used to train a network to output cluster probabilities for a given data point, given a fixed number of clusters and embeddings of the data points to be clustered. In particular, this objective can be seen as a relaxation of the normalized cuts objective, where indicator variables in the original formulation are replaced with their expectations under the trained model. The authors experiment with a number of clustering datasets where the number of clusters is known beforehand (and where, for evaluation purposes, the ground truth is known), and find that their method generally improves over the clustering performance of SpectralNet (Shaham et al., 2018) in terms of accuracy and normalized mutual information, and that it finds solutions with lower normalized cut values.