Gradient Descent can Learn Less Over-parameterized Two-layer Neural Networks on Classification Problems
1 INTRODUCTION. In recent years, many studies have been devoted to explaining the great success of over-parameterized neural networks, where the number of parameters is much larger than that needed to fit a given training dataset. This study also treats over-parameterized two-layer neural networks with smooth activation functions and analyzes the convergence and generalization abilities of the gradient descent method for optimizing this type of network. For over-parameterized two-layer neural networks, Du et al. (2019); Arora et al. (2019); Chizat & Bach (2018a); Mei et al. (2018) showed the global convergence of gradient descent. These studies are mainly divided into two groups depending on the scaling factor of the network output, for which the global convergence property has been demonstrated using different types of proofs. For the scaling factor 1/m (m: the number of hidden units), Chizat & Bach (2018a); Mei et al. (2018) showed convergence to the global minimum over probability measures as m → ∞ by utilizing the Wasserstein gradient flow perspective (Nitanda & Suzuki, 2017) on gradient descent. For the scaling factor 1/m^β (β < 1), Du et al. (2019) essentially demonstrated that the kernel smoothing of functional gradients by the neural tangent kernel (Jacot et al., 2018; Chizat & Bach, 2018b) has performance comparable to the functional gradient as m → ∞, under a positivity assumption on the Gram matrix of this kernel, resulting in the global convergence property. In addition, Arora et al. (2019) provided a generalization bound via a fine-grained analysis of gradient descent. These studies provide the first steps toward understanding the role of over-parameterization of neural networks and gradient descent on regression problems with the squared loss function. For classification problems with the logistic loss, a few studies (Allen-Zhu et al., 2018a; Cao & Gu, 2019a;b) investigated the convergence and generalization abilities of gradient descent under a separability assumption with a suitable model instead of the positivity of the neural tangent kernel. In this study, we further develop this line of research on binary classification problems. Our contributions. We provide fine-grained global convergence and generalization analyses of gradient descent for two-layer neural networks with smooth activations under a separability assumption, with a sufficient margin, using a neural tangent model, which is a non-linear model with feature extraction through a neural tangent. We demonstrate that a separability assumption is more suitable than the positivity condition of the neural tangent kernel because (i) a positive neural tangent kernel leads only to weak separability and, conversely, (ii) separability leads to positivity of the neural tangent kernel only on a cone spanned by the labels, which is very restrictive compared with the whole space. Therefore, the separability condition is rather weak in this sense, but it is enough to ensure global convergence for classification problems. Thus, significantly better convergence and generalization analyses with respect to the network width can be obtained because the positivity of the neural tangent kernel is not required.
Consequently, our theory provides a generalization guarantee for less over-parameterized two-layer networks trained by gradient descent, while most existing results relying on a positive neural tangent kernel essentially require very high over-parameterization. To the best of our knowledge, there are no successful studies for our problem setting (i.e., less over-parameterized two-layer neural networks with smooth activation functions for classification problems with the logistic loss) in the literature. Most studies have focused on highly over-parameterized neural networks with ReLU activation, and less over-parameterized settings have been considered difficult for showing the global convergence property of gradient descent, even in the few studies using a separability condition (Allen-Zhu et al., 2018a; Cao & Gu, 2019a;b). However, we note that these studies provided global convergence and generalization analyses of (stochastic) gradient descent for challenging settings (i.e., deep ReLU networks) under assumptions similar to, but different from, ours. Thus, neither set of results subsumes the other, owing to differences in network structure (i.e., network depth and activation type) and assumptions. We here describe the main result informally. A neural tangent model is an infinite-dimensional non-linear model using the transformed features $(\partial_\theta \sigma(\theta^{(0)\top} x))_{\theta^{(0)} \sim \mu_0}$, where $\sigma$ is a smooth activation and $\mu_0$ is the distribution used to initialize the parameters of the input layer of the two-layer neural network. Theorem 1 states that gradient descent can find an $\epsilon$-accurate solution in terms of the expected classification error for a wide class of over-parameterized two-layer neural networks under a separability assumption using a neural tangent model. Theorem 1 (Informal). Suppose that a given data distribution is separable by a neural tangent model with a sufficient margin under an $L_\infty$-constraint. If, for any $\epsilon > 0$, the hyperparameters satisfy one of the following: (i) $\beta \in [0, 1)$, $m = \Omega(\epsilon^{-\frac{1}{1-\beta}})$, $T = \Theta(\epsilon^{-2})$, $\eta = \Theta(m^{2\beta-1})$, $n = \tilde{\Omega}(\epsilon^{-4})$; (ii) $\beta = 0$, $m = \tilde{\Theta}(\epsilon^{-3/2})$, $T = \tilde{\Theta}(\epsilon^{-1})$, $\eta = \Theta(m^{-1})$, $n = \tilde{\Omega}(\epsilon^{-2})$; then, with high probability over the random initialization and the choice of samples of size $n$, gradient descent with learning rate $\eta$ achieves an expected $\epsilon$-classification error within $T$ iterations. Related work. A few recent studies (Allen-Zhu et al., 2018a; Cao & Gu, 2019a;b) are closely related to our work because they also treated the logistic loss function. As stated above, the problem settings in our and these studies are somewhat different, but we compare our result with those specialized to two-layer networks to show a better property of our problem setting and analyses. Separability assumptions were made on an infinite-width two-layer ReLU network in Cao & Gu (2019a;b) and on a smooth target function in Allen-Zhu et al. (2018a). For generalization analyses, our result exhibits much better dependency on the network width owing to a better problem setting combined with a fine-grained analysis. Table 1 provides a comparison of the hyperparameter settings of the networks and gradient descent in related studies required to achieve an expected $\epsilon$-classification error. As evident in Table 1, for a more comprehensive range of two-layer network widths, our theory ensures the same generalization ability as those of Allen-Zhu et al. (2018a); Cao & Gu (2019a;b).
In fact, network widths of $\Omega(\epsilon^{-1})$ and $\Omega(\epsilon^{-3/2})$ are sufficient in our setting. For stochastic gradient descent on two-layer networks, Brutzkus et al. (2018); Li & Liang (2018) provided generalization analyses. Brutzkus et al. (2018) assumed that the datasets are linearly separable, and this restrictive assumption was relaxed to mixtures of well-separated data distributions in Li & Liang (2018). However, the analysis in Li & Liang (2018) is also tailored only to highly over-parameterized settings. Concretely, a very large width $m = \tilde{\Omega}(\epsilon^{-24})$ and a number of samples (iterations) $n = \Theta(T) = \tilde{O}(\epsilon^{-12})$ are required to achieve an expected $\epsilon$-classification error in Li & Liang (2018). In addition, it should be noted that global convergence analyses (Allen-Zhu et al., 2018b; Zou et al., 2018) in terms of optimization alone, without a specification of the network size, yield loose generalization bounds because the complexities of the neural networks cannot be specified. Apart from the abovementioned studies, there are many other studies (Brutzkus & Globerson, 2017; Zhong et al., 2017; Tian, 2017; Soltanolkotabi, 2017; Du et al., 2019; Zhang et al., 2018; Arora et al., 2019; Oymak & Soltanolkotabi, 2019; Zhang et al., 2019; Wu et al., 2019) that focus on regression problems, whereas our study focuses on classification problems and demonstrates a better property of gradient descent for over-parameterized networks by utilizing the problem structure of binary classification. In particular, we show that a separability assumption is preferable to the positivity condition of the neural tangent kernel (Du et al., 2019; Arora et al., 2019; Zhang et al., 2019; Wu et al., 2019) and that the required network width can be significantly reduced. Concretely, the required network widths in those works are at least $\Omega(n^6)$ (Du et al., 2019; Wu et al., 2019), $\Omega(n^7 \epsilon^{-2})$ (Arora et al., 2019), and $\Omega(n^4)$ (Zhang et al., 2019). These widths are very large compared with our results, because sample complexities are generally at least $n = \Omega(\epsilon^{-2})$. In addition, the proof techniques differ between the squared loss and the logistic loss because the latter lacks strong convexity. Thus, we cannot utilize the linear convergence property for the logistic loss, and the parameters will diverge, which also makes it difficult to show better generalization ability without a fine-grained analysis. 2 PRELIMINARY. Here, we describe the problem setting for binary logistic regression and discuss functional gradients to provide a clear theoretical view of the gradient methods for two-layer neural networks. 2.1 PROBLEM SETTING. Let $\mathcal{X} = \mathbb{R}^d$ and $\mathcal{Y}$ be a feature space and the set of binary labels $\{-1, 1\}$, respectively. We denote by $\nu$ the true probability measure on $\mathcal{X} \times \mathcal{Y}$ and by $\nu_n$ the empirical probability measure deduced from observations $(x_i, y_i)_{i=1}^n$ independently drawn from $\nu$, i.e., $d\nu_n(X, Y) = \frac{1}{n}\sum_{i=1}^n \delta_{(x_i, y_i)}(X, Y)\, dX\, dY$, where $\delta$ is the Dirac delta function. The marginal distributions of $\nu$ and $\nu_n$ on $\mathcal{X}$ are denoted by $\nu_X$ and $\nu_{X,n}$, respectively. For $\zeta \in \mathbb{R}$ and $y \in \mathcal{Y}$, let $l(\zeta, y)$ be the logistic loss $\log(1 + \exp(-y\zeta))$. Then, the objective function to be minimized is formalized as follows: $$L(\Theta) \overset{\mathrm{def}}{=} \mathbb{E}_{(X, Y) \sim \nu_n}\left[l(f_\Theta(X), Y)\right] = \frac{1}{n}\sum_{i=1}^n l(f_\Theta(x_i), y_i),$$ where $f_\Theta : \mathcal{X} \to \mathbb{R}$ is a two-layer neural network equipped with parameters $\Theta = (\theta_r)_{r=1}^m$.
When we consider the function $f_\Theta$ as the variable of the objective function, we write $L(f_\Theta) \overset{\mathrm{def}}{=} L(\Theta)$. The two-layer neural network treated in this study is formalized as follows. For parameters $\Theta = (\theta_r)_{r=1}^m$ ($\theta_r \in \mathbb{R}^d$) and fixed constants $(a_r)_{r=1}^m \in \{-1, 1\}^m$: $$f_\Theta(x) = \frac{1}{m^\beta} \sum_{r=1}^m a_r \sigma(\theta_r^\top x), \qquad (1)$$ where $m$ is the number of hidden units, $\beta$ is the order of the scaling factor, and $\sigma : \mathbb{R} \to \mathbb{R}$ is a smooth activation function such as sigmoid, tanh, swish (Ramachandran et al.), or another smooth approximation of ReLU. In the training procedure, the parameters $\Theta = (\theta_r)_{r=1}^m$ of the input layer are optimized. This setting is the same as those in Du et al. (2019); Arora et al. (2019); Zhang et al. (2019); Wu et al. (2019), except for the types of activation functions, the scaling factor, and the loss function.
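To make the problem setting concrete, the following is a minimal Python/NumPy sketch of the model in Eq. (1), the empirical logistic objective $L(\Theta)$, and a finite-width neural tangent feature map of the kind underlying the separability assumption. It is only an illustration, not the authors' code: tanh as the smooth activation, a standard Gaussian initialization distribution $\mu_0$, and the helper names (`f_Theta`, `empirical_loss`, `tangent_margin`) are assumptions made here.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, beta = 10, 512, 0.5                      # input dim, width, scaling order beta in [0, 1)
Theta = rng.normal(size=(m, d))                # trainable input-layer parameters theta_r
a = rng.choice([-1.0, 1.0], size=m)            # fixed output-layer signs a_r

def f_Theta(x, Theta):
    """Two-layer network of Eq. (1): m^{-beta} * sum_r a_r * sigma(theta_r^T x), with sigma = tanh."""
    return (a * np.tanh(Theta @ x)).sum() / m**beta

def empirical_loss(Theta, xs, ys):
    """L(Theta) = (1/n) * sum_i log(1 + exp(-y_i * f_Theta(x_i)))."""
    margins = np.array([y * f_Theta(x, Theta) for x, y in zip(xs, ys)])
    return np.mean(np.log1p(np.exp(-margins)))

def tangent_margin(xs, ys, v, Theta0):
    """Margin of a finite-width neural tangent model built from the features
    d/d(theta_r) sigma(theta_r^T x) = sigma'(theta_r^T x) * x at initialization Theta0.
    The separability assumption asks min_i y_i * <v, features(x_i)> to exceed a positive constant
    for some bounded v (here v has shape (m, d))."""
    def ntm(x):
        sig_prime = 1.0 - np.tanh(Theta0 @ x) ** 2    # derivative of tanh
        return np.mean(sig_prime * (v @ x))           # Monte Carlo average over theta_r ~ mu_0
    return min(y * ntm(x) for x, y in zip(xs, ys))
```

Note that gradient descent in the paper updates only $\Theta$ (the input layer), with $(a_r)$ and $\beta$ held fixed, which is exactly what this sketch parameterizes.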
The authors study the problem of binary logistic regression with a two-layer network using a smooth activation function. They introduce a separability assumption on the dataset via the neural tangent model. This separability assumption is weaker than the more standard Neural Tangent Kernel assumption that has been extensively studied in the regression literature, where a certain Gram matrix must be positive. In the current work, the authors observe that the structure of the logistic loss in the binary classification problem restricts the functional gradients to lie in a particular space, so that positivity of the Gram matrix is only needed on a subspace (a cone spanned by the labels). This is the underlying theoretical reason why they can improve over those methods in the setting they study. Under the separability assumption, the authors prove convergence of gradient descent and generalization of the resulting network, while assuming the two-layer networks are less over-parameterized than what would have been required under the Gram-matrix perspective.
SP:e4f6ad5fdfa9a438b8b3be4cbe856ea2ab5d68e6
Gradient Descent can Learn Less Over-parameterized Two-layer Neural Networks on Classification Problems
This paper studies the training of over-parameterized two-layer neural networks with smooth activation functions. In particular, it establishes convergence guarantees as well as generalization error bounds under the assumption that the data can be separated by a neural tangent model. The authors also show that the network width requirement in this paper is milder than in the existing results for ReLU networks.
SP:e4f6ad5fdfa9a438b8b3be4cbe856ea2ab5d68e6
A SPIKING SEQUENTIAL MODEL: RECURRENT LEAKY INTEGRATE-AND-FIRE
1 INTRODUCTION. Deep learning and the corresponding artificial neural network (ANN) derivatives have dominated computer science and hold the current state-of-the-art performance across a wide range of machine learning application scenarios, such as computer vision (Simonyan & Zisserman, 2014), natural language processing (Collobert & Weston, 2008), speech/audio recognition (Hinton et al., 2012), and video understanding (Ye et al., 2015), since the first appearance of AlexNet (Krizhevsky et al., 2012); some of them have even surpassed human cognitive performance on certain tasks. However, ANNs fail to take advantage of neuronal dynamics, which manifests as high power consumption, relatively slow responses, etc. Spiking Neural Networks (SNNs) (Maass, 1997), inspired by the signal propagation of cortical neurons (Perrett et al., 1982; Tuckwell, 1988), have received continuous attention as a new, power-efficient, and hardware-friendly technology. In contrast to ANNs, which process only spatial information and rely on costly floating-point computation, SNNs utilize spatio-temporal dynamics to mimic the biological behavior of neurons, and their computation is binary-valued: the electrical impulses fed through the network (i.e., spikes) belong to the binary set {0, 1}. Benefiting from this ability to process binary spiking signals and the resulting efficiency, SNNs offer a promising direction for the further development of machine learning and neuromorphic applications, and they have long been deployed on neuromorphic hardware including SpiNNaker (Furber et al., 2014), TrueNorth (Akopyan et al., 2015), and Loihi (Davies et al., 2018). ANNs benefit from a well-developed training methodology built around backpropagation (BP) (LeCun et al., 1998) and its derivatives, which enable convergence, as well as diverse frameworks (e.g., TensorFlow, PyTorch) that make it straightforward to train deeper networks. In contrast, for one thing, there are few theoretically supported or powerful procedures for training SNNs, which prevents SNNs from going deeper; consequently, SNNs can hardly handle complex real-world tasks such as video-based recognition/detection or natural language processing. For another, there are no practical auxiliary frameworks capable of promoting mature SNN structures, which leads to few applications and little forward progress for SNNs. Various efforts have been made to improve the training, depth, and applications of SNNs, but many obstacles still block their development. As for training, there are many ways to strengthen the accuracy of SNNs beyond neuromorphic methodologies such as spike-timing-dependent plasticity (STDP) (Serrano-Gotarredona et al., 2013) and winner-take-all (WTA) (Makhzani & Frey, 2015). In the first alternative scheme, an ANN is trained first and then converted into an SNN whose network structure is the same as that of the ANN and whose neurons mimic the behavior of the ANN neurons (Diehl et al., 2015).
The other is direct supervised learning, i.e., gradient descent, which is a prevalent and effective optimization method for this learning procedure. To address the non-differentiability of spikes, Lee et al. (2016) proposed treating membrane potentials as differentiable signals and directly applying the BP algorithm to train deep SNNs. To behave more biologically, Ponulak & Kasiński (2010) introduced a remote supervised STDP-like rule capable of learning sequential output spikes. In addition, Urbanczik & Senn (2009) proposed a novel learning rule in which information is embedded into the spatio-temporal structure of the spike signals during learning. Nevertheless, most of the learning methods presented above engage only a single aspect of either spatial or temporal information. Applications began to spring up with the arrival of event-based cameras built on Dynamic Vision Sensors (DVS) (Shi et al., 2018). The mechanism of a DVS can be outlined as a simulation of the visual pathway structures and functionalities of biological visual systems, whose neurons asynchronously communicate and encode visual information from the environment as spatio-temporally sparse light-intensity changes in the form of spikes. Building on event-based cameras, diverse event-based datasets have been acquired, such as Poker-DVS, MNIST-DVS (Serrano-Gotarredona & Linares-Barranco, 2015), and CIFAR10-DVS (Wu et al., 2019). Using event-based cameras and their derived datasets, a variety of works have demonstrated different methodologies intended to make the application of such components plausible. Peng et al. (2016) proposed an event-based classification method based on static learning, named Bag of Events (BOE), which represents the events corresponding to activated DVS pixels as a joint probability distribution. The method was tested on multiple datasets such as N-MNIST, MNIST-DVS, and Poker-DVS, and it shows that BOE can achieve competitive results in real time in terms of feature extraction, implementation time, and classification accuracy. Neil & Liu (2016) proposed a deep CNN to pre-process spiking data from a DVS, which can be used in various deep network architectures and achieves an accuracy of 97.40% on the N-MNIST dataset, despite its complicated pre-processing approach. In terms of SNNs, Indiveri et al. (2015) proposed an SNN architecture, named Feedforward SNN, based on spike-based and temporal learning, which achieves 87.41% accuracy on the MNIST-DVS dataset. Stromatias et al. (2015) proposed a composite system, including a convolutional SNN, a non-spiking fully connected classifier, and a spiking output layer, with an accuracy of 97.95%. Beyond improving the performance and convergence rate of SNNs, the question remains whether a method can absorb the advantages of both ANNs and SNNs. To this end, we propose RLIF, with both low computational complexity and biological plausibility, and explore its usage in real-world tasks.
In summary, the major contributions of this paper can be listed as follows: • We propose RLIF, which absorbs biological traits from SNNs, follows the unrolled structure of RNNs, and can be seamlessly inserted into any sequential model in common deep learning frameworks. • High throughput can be achieved through the transmission of binary information between an RLIF layer and other sequential layers, which meets the basic principle that emitted neuron spike trains are binary-valued. Furthermore, RLIF can easily be deployed on neuromorphic chips owing to its hardware-friendly nature. • Experiments on standard DVS-based datasets (MNIST-DVS, CIFAR10-DVS) and Chinese text summarization (LCSTS-2.0) show that our RLIF is capable of capturing key information through time and has fewer parameters than its counterparts. 2 PREMISE OF UNDERSTANDING RLIF. As mentioned before, the core idea of our architecture is how to absorb the biological traits of SNNs into RNNs. To this end, the learning algorithms for SNNs are introduced first, followed by a brief analysis of the basic LIF neuron model, which aims to highlight the parts most relevant to our RLIF. 2.1 LEARNING ALGORITHMS FOR SNN. To the best of our knowledge, learning algorithms for SNNs can be divided into two categories: i) unsupervised learning algorithms represented by spike-timing-dependent plasticity (STDP), and ii) direct supervised learning algorithms represented by gradient-based backpropagation. Classical STDP and its reward-modulated variants (Legenstein et al., 2008; Frémaux & Gerstner, 2016), the typical SNN learning methods that use only local information to update the model weights, struggle to converge for models with many layers on complex datasets (Masquelier & Thorpe, 2007; Diehl & Cook, 2015; Tavanaei & Maida, 2016). Inspired by the huge success of backpropagation in ANNs, researchers have started to explore how backpropagation can be used to train SNNs under the end-to-end paradigm. Lee et al. (2016) and Jin et al. (2018) introduced spatial backpropagation methods for training SNNs, based mainly on conventional backpropagation. To imitate the temporal characteristics of SNNs, Wu et al. (2018) pioneered the use of backpropagation in both spatial and temporal domains to train SNNs directly, achieving state-of-the-art accuracy on the MNIST and N-MNIST datasets. Huh & Sejnowski (2018) introduced a differentiable formulation of spiking dynamics and derived an exact gradient calculation, and Neftci et al. (2019) used surrogate gradient methods to overcome the difficulties associated with the discontinuous nonlinearity. As a further step to increase training speed, Wu et al. (2019) converted the leaky integrate-and-fire (LIF) model into an explicitly iterative version so as to train deep SNNs with tens of times speedup under backpropagation through time (BPTT). 2.2 LIF NEURON MODEL. The leaky integrate-and-fire (LIF) model is the most common and simplest model that can capture neuron operations and some basic dynamic traits effectively at low computational cost.
In general, we describe an LIF neuron (layer $l$, index $i$) in differential form as $$\tau_{mem} \frac{dU_i^l}{dt} = -(U_i^l - U_{rest}) + R I_i^l, \qquad (1)$$ where $U_i$ refers to the membrane potential, $U_{rest}$ is the resting potential, $\tau_{mem}$ is the membrane time constant, $R$ is the input resistance, and $I_i$ is the input current (Gerstner et al., 2014). When the membrane voltage of a neuron reaches its firing threshold $\vartheta$, a spike is released to communicate its output to other neurons. After each spike, $U_i$ is reset to the resting potential $U_{rest}$. Since the input current is typically generated by synaptic currents triggered by the arrival of presynaptic spikes $S_j^l$, Neftci et al. (2019) model the dynamics, approximating the time course as an exponentially decaying current following each presynaptic spike, by $$\frac{dI_i^l}{dt} = \underbrace{-\frac{I_i^l}{\tau_{syn}}}_{\text{decay}} + \underbrace{\sum_j W_{ij}^l \cdot S_j^{l-1}}_{\text{feed-forward}} + \underbrace{\sum_j V_{ij}^l \cdot S_j^l}_{\text{recurrent}}. \qquad (2)$$ Based on this, the simulation of a single LIF neuron can be decomposed into solving two linear differential equations. An RNN, which accepts both the current input $x_t$ and the previous hidden state $h_{t-1}$ and updates the current state via a non-linear activation function $\sigma(\cdot)$, has the basic form $$y_t = \sigma(W_x \cdot x_t + W_h \cdot h_{t-1} + b). \qquad (3)$$ Clearly, Equation 2 has a structure similar to that of a basic RNN, which provides the insight for rephrasing the LIF model in a recurrent paradigm.
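As a rough illustration of why Eqs. (1)-(3) suggest a recurrent formulation, the following sketch discretizes the LIF dynamics with a forward-Euler step and exposes the same (input, previous state) → (new state, output) interface as an RNN cell. This is a generic LIF discretization under assumed constants, not the RLIF layer proposed in the paper; the function name `lif_step` and the parameter values are hypothetical.

```python
import numpy as np

def lif_step(S_prev_layer, S_rec, U, I, W, V,
             tau_mem=10.0, tau_syn=5.0, dt=1.0,
             U_rest=0.0, R=1.0, threshold=1.0):
    """One Euler step of the LIF dynamics in Eqs. (1)-(2).

    S_prev_layer : binary spikes from layer l-1 (feed-forward input)
    S_rec        : binary spikes of this layer at the previous step (recurrent input)
    U, I         : membrane potentials and synaptic currents of this layer
    W, V         : feed-forward and recurrent weight matrices
    """
    # Eq. (2): synaptic current = leaky decay + feed-forward drive + recurrent drive
    dI = -I / tau_syn + W @ S_prev_layer + V @ S_rec
    I = I + dt * dI
    # Eq. (1): leaky integration of the membrane potential
    dU = (-(U - U_rest) + R * I) / tau_mem
    U = U + dt * dU
    # Fire when the threshold is crossed; emit binary spikes
    S = (U >= threshold).astype(np.float32)
    # Reset fired neurons to the resting potential
    U = np.where(S > 0, U_rest, U)
    return S, U, I
```

Unrolling `lif_step` over time steps yields exactly the recurrent structure that BPTT (with a surrogate gradient for the threshold nonlinearity) operates on.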
This paper proposes a brain-inspired recurrent neural network architecture, named Recurrent Leaky Integrate-and-Fire (RLIF). Computationally, the model is designed to mimic how biological neurons behave, e.g., producing binary values. The hope is that this will allow such computational models to be easily implemented on neuromorphic chips and that the solution will be more energy-efficient. On neuromorphic MNIST and CIFAR, the proposed model achieves higher classification accuracy than the other listed methods. Evaluated with ROUGE on LCSTS, a Chinese text summarization benchmark, the proposed model achieves competitive performance.
SP:ef7cf9c0569adc304bdc8601229ac7579178a871
A SPIKING SEQUENTIAL MODEL: RECURRENT LEAKY INTEGRATE-AND-FIRE
Recently, it has been shown that spiking neural networks (SNN) can be trained efficiently, in a supervised manner, using backpropagation through time. Indeed, the most commonly used spiking neuron model, the leaky integrate-and-fire neuron (LIF), obeys a differential equation which can be approximated using discrete time steps, leading to a recurrent relation for the potential. The firing threshold causes a non-differentiability issue, but it can be overcome using a surrogate gradient. In practice, it means that SNNs can be trained on GPUs using standard deep learning frameworks such as PyTorch or TensorFlow.
SP:ef7cf9c0569adc304bdc8601229ac7579178a871
Efficient Probabilistic Logic Reasoning with Graph Neural Networks
1 INTRODUCTION. Knowledge graphs collect and organize relations and attributes about entities, and they are playing an increasingly important role in many applications, including question answering and information retrieval. Since knowledge graphs may contain incorrect, incomplete, or duplicated records, additional processing such as link prediction, attribute classification, and record de-duplication is typically needed to improve the quality of knowledge graphs and derive new facts. Markov Logic Networks (MLNs) were proposed to combine hard logic rules and probabilistic graphical models, and they can be applied to various tasks on knowledge graphs (Richardson & Domingos, 2006). The logic rules incorporate prior knowledge and allow MLNs to generalize in tasks with a small amount of labeled data, while the graphical model formalism provides a principled framework for dealing with uncertainty in data. However, inference in MLNs is computationally intensive, typically exponential in the number of entities, limiting the real-world application of MLNs. Also, logic rules can only cover a small part of the possible combinations of knowledge graph relations, hence limiting the application of models that are purely based on logic rules. Graph neural networks (GNNs) have recently gained increasing popularity for addressing many graph-related problems effectively (Dai et al., 2016; Li et al., 2016; Kipf & Welling, 2017; Schlichtkrull et al., 2018). GNN-based methods typically require sufficient labeled instances on specific end tasks to achieve good performance; however, knowledge graphs have a long-tail nature (Xiong et al., 2018), i.e., a large portion of the relations are covered by only a few triples. Such data scarcity among long-tail relations poses a tough challenge for purely data-driven methods. In this paper, we explore the combination of the best of both worlds, aiming for a method which is data-driven yet can still exploit the prior knowledge encoded in logic rules. To this end, we design a simple variant of graph neural networks, named ExpressGNN, which can be efficiently trained in the variational EM framework for MLNs. An overview of our method is illustrated in Fig. 1. (Figure 1: Overview of our method for combining MLN and GNN using the variational EM framework.) ExpressGNN and the corresponding reasoning framework lead to the following desiderata: • Efficient inference and learning: ExpressGNN can be viewed as the inference network for MLN, which scales up MLN inference to much larger knowledge graph problems. • Combining logic rules and data supervision: ExpressGNN can leverage the prior knowledge encoded in logic rules, as well as the supervision from graph-structured data. • Compact and expressive model: ExpressGNN may have a small number of parameters, yet it is sufficient to represent mean-field distributions in MLN. • Capability of zero-shot learning: ExpressGNN can deal with the zero-shot learning problem where the target predicate has few or zero labeled instances. 2 RELATED WORK. Statistical relational learning. There is an extensive literature on the topic of logic reasoning. Here we focus only on the approaches that are most relevant to statistical relational learning on knowledge graphs. Logic rules can compactly encode domain knowledge and complex dependencies.
Thus, hard logic rules were widely used for reasoning in earlier attempts, such as expert systems (Ignizio, 1991) and inductive logic programming (Muggleton & De Raedt, 1994). However, hard logic is very brittle and has difficulty coping with uncertainty in both the logic rules and the facts in knowledge graphs. Later studies explored introducing probabilistic graphical models into logic reasoning, seeking to combine the advantages of relational and probabilistic approaches. Representative works, including Relational Markov Networks (RMNs; Taskar et al. (2007)) and Markov Logic Networks (MLNs; Richardson & Domingos (2006)), were proposed against this background. Markov Logic Networks. MLNs have been widely studied owing to their principled probabilistic model and effectiveness in a variety of reasoning tasks, including entity resolution (Singla & Domingos, 2006a), social networks (Zhang et al., 2014), information extraction (Poon & Domingos, 2007), etc. MLNs elegantly handle the noise in both logic rules and knowledge graphs. However, inference and learning in MLNs are computationally expensive due to the exponential cost of constructing the ground Markov network and the NP-complete optimization problem. This hinders MLNs from being applied to industry-scale applications. Many works in the literature improve the original MLNs in both accuracy (Singla & Domingos, 2005; Mihalkova & Mooney, 2007) and efficiency (Singla & Domingos, 2006b; 2008; Poon & Domingos, 2006; Khot et al., 2011; Bach et al., 2015). Nevertheless, to date, MLNs still struggle to handle large-scale knowledge bases in practice. Our framework ExpressGNN overcomes the scalability challenge of MLNs through an efficient stochastic training algorithm and a compact posterior parameterization with graph neural networks. Graph neural networks. Graph neural networks (GNNs; Dai et al. (2016); Kipf & Welling (2017)) can learn effective representations of nodes by encoding local graph structures and node attributes. Due to the compactness of the model and the capability of inductive learning, GNNs are widely used in modeling relational data (Schlichtkrull et al., 2018; Battaglia et al., 2018). Recently, Qu et al. (2019) proposed Graph Markov Neural Networks (GMNNs), which employ GNNs together with conditional random fields to learn object representations. These existing works are purely data-driven and unable to leverage the domain knowledge or human priors encoded in logic rules. To the best of our knowledge, ExpressGNN is the first work that connects GNNs with first-order logic rules to combine the advantages of both worlds. Knowledge graph embedding. Another line of research on knowledge graph reasoning is the family of knowledge graph embedding methods, such as TransE (Bordes et al., 2013), NTN (Socher et al., 2013), DistMult (Kadlec et al., 2017), ComplEx (Trouillon et al., 2016), and RotatE (Sun et al., 2019). These methods design various scoring functions to model relational patterns for knowledge graph reasoning, and they are very effective in learning transductive embeddings of both entities and relations. However, these methods are not able to leverage logic rules, which can be crucial in some relational learning tasks, and they have no consistent probabilistic model. Compared to these methods, ExpressGNN has a consistent probabilistic model built into the framework and can incorporate knowledge from logic rules.
A recent concurrent work, Qu & Tang (2019), proposed the probabilistic Logic Neural Network (pLogicNet), which integrates knowledge graph embedding methods with MLNs in an EM framework. Compared to pLogicNet, which uses a flattened embedding table as the entity representation, our work explicitly captures the structural knowledge encoded in the knowledge graph with GNNs and supplements it with the knowledge from logic formulae for the prediction task. 3 PRELIMINARY. Knowledge Graph. A knowledge graph is a tuple $\mathcal{K} = (\mathcal{C}, \mathcal{R}, \mathcal{O})$ consisting of a set $\mathcal{C} = \{c_1, \ldots, c_M\}$ of $M$ entities, a set $\mathcal{R} = \{r_1, \ldots, r_N\}$ of $N$ relations, and a collection $\mathcal{O} = \{o_1, \ldots, o_L\}$ of $L$ observed facts. In the language of first-order logic, entities are also called constants. For instance, a constant can be a person or an object. Relations are also called predicates. Each predicate is a logic function defined over $\mathcal{C}$, i.e., $r(\cdot) : \mathcal{C} \times \ldots \times \mathcal{C} \mapsto \{0, 1\}$. In general, the arguments of predicates are asymmetric. For instance, for the predicate $r(c, c') := \mathrm{L}(c, c')$ (L for Like), which checks whether $c$ likes $c'$, the arguments $c$ and $c'$ are not exchangeable. With a particular set of entities assigned to the arguments, the predicate is called a ground predicate, and each ground predicate corresponds to a binary random variable, which will be used to define the MLN. For a $d$-ary predicate, there are $M^d$ ways to ground it. We denote an assignment as $a_r$. For instance, with $a_r = (c, c')$, we can simply write a ground predicate $r(c, c')$ as $r(a_r)$. Each observed fact in the knowledge base is a truth value in $\{0, 1\}$ assigned to a ground predicate. For instance, a fact $o$ can be $[\mathrm{L}(c, c') = 1]$. The number of observed facts is typically much smaller than that of unobserved facts. We adopt the open-world paradigm and treat these unobserved facts as latent variables. As a clearer representation, we express a knowledge base $\mathcal{K}$ by a bipartite graph $\mathcal{G}_{\mathcal{K}} = (\mathcal{C}, \mathcal{O}, \mathcal{E})$, where nodes on one side of the graph correspond to constants $\mathcal{C}$ and nodes on the other side correspond to observed facts $\mathcal{O}$, which are called factors in this case. The set of $T$ edges, $\mathcal{E} = \{e_1, \ldots, e_T\}$, connects constants and observed facts. More specifically, an edge $e = (c, o, i)$ between node $c$ and node $o$ exists if the ground predicate associated with $o$ uses $c$ as its $i$-th argument (Fig. 2). Markov Logic Networks. MLNs use logic formulae to define potential functions in undirected graphical models. A logic formula $f(\cdot) : \mathcal{C} \times \ldots \times \mathcal{C} \mapsto \{0, 1\}$ is a binary function defined via the composition of a few predicates. For instance, a logic formula $f(c, c')$ can be $$\mathrm{Smoke}(c) \wedge \mathrm{Friend}(c, c') \Rightarrow \mathrm{Smoke}(c') \iff \neg \mathrm{Smoke}(c) \vee \neg \mathrm{Friend}(c, c') \vee \mathrm{Smoke}(c'),$$ where $\neg$ is negation and the equivalence is established by De Morgan's law. Similar to predicates, we denote an assignment of constants to the arguments of a formula $f$ as $a_f$, and the entire collection of consistent assignments of constants as $A_f = \{a_f^1, a_f^2, \ldots\}$. A formula with constants assigned to all of its arguments is called a ground formula. Given these logic representations, the MLN can be defined as a joint distribution over all observed facts $\mathcal{O}$ and unobserved facts $\mathcal{H}$: $$P_w(\mathcal{O}, \mathcal{H}) := \frac{1}{Z(w)} \exp\Big( \sum_{f \in \mathcal{F}} w_f \sum_{a_f \in A_f} \phi_f(a_f) \Big), \qquad (1)$$ where $Z(w)$ is the partition function summing over all ground predicates and $\phi_f(\cdot)$ is the potential function defined by a formula $f$, as illustrated in Fig. 2.
One form of φf(·) can simply be the truth value of the logic formula f. For instance, if the formula is f(c, c′) := ¬S(c) ∨ ¬F(c, c′) ∨ S(c′), then φf(c, c′) can simply take value 1 when f(c, c′) is true and 0 otherwise. Other, more sophisticated φf can also be designed, which have the potential to take into account complex entities, such as images or texts, but these will not be the focus of this paper. The weight wf can be viewed as the confidence score of the formula f: the higher the weight, the more accurate the formula is. Difference between KG and MLN. We note that the graph topologies of knowledge graphs and MLNs can be very different, although the MLN is defined on top of a knowledge graph. Knowledge graphs are typically very sparse, where the number of edges (observed relations) is typically linear in the number of entities. However, the graphs associated with MLNs are much denser, where the number of nodes can be quadratic or more in the number of entities, and the number of edges (dependencies between variables) is a high-order polynomial in the number of entities.
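To make the grounding and potential-function definitions above concrete, here is a minimal sketch in Python. It assumes a toy knowledge base with two illustrative predicates, S (Smoke) and F (Friend), and takes the potential φf to be the 0/1 truth value of the clause ¬S(c) ∨ ¬F(c, c′) ∨ S(c′); the constants, the tiny fact table, and the weight are hypothetical and only meant to illustrate the unnormalized score inside Eq. (1), not the actual ExpressGNN implementation.

```python
from itertools import product

# Toy world: constants and a hypothetical truth assignment for ground predicates.
constants = ["A", "B", "C"]
smoke = {"A": 1, "B": 0, "C": 1}                        # S(c)
friend = {("A", "B"): 1, ("A", "C"): 1, ("B", "C"): 0}  # F(c, c'); missing pairs default to 0

def S(c):
    return smoke.get(c, 0)

def F(c, cp):
    return friend.get((c, cp), 0)

def phi_f(c, cp):
    """Potential of the ground formula ¬S(c) ∨ ¬F(c, c') ∨ S(c'): its 0/1 truth value."""
    return int((not S(c)) or (not F(c, cp)) or S(cp))

def unnormalized_log_score(w_f):
    """Inner term of Eq. (1): w_f times the sum of phi_f over all groundings a_f."""
    return w_f * sum(phi_f(c, cp) for c, cp in product(constants, constants) if c != cp)

print(unnormalized_log_score(w_f=1.5))  # larger when more groundings satisfy the rule
```

A full MLN would sum such terms over all formulae and divide by the partition function Z(w), which is exactly the part that becomes intractable as the number of groundings grows.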
This paper proposes a framework for solving the probabilistic logic reasoning problem by integrating Markov logic networks and graph neural networks, combining their individual strengths into a more expressive and scalable framework. Graph neural networks are used for learning representations of knowledge graphs and are quite scalable when it comes to probabilistic inference, but no prior rules can be incorporated and they require a significant number of examples per target in order to converge. On the other hand, MLNs are quite powerful for logical reasoning and for dealing with noisy data, but their inference process is computationally intensive and does not scale. Combining these two frameworks seems to result in a powerful framework which generalizes well to new knowledge graphs, performs inference, and is able to scale to large numbers of entities.
SP:18ddfe1194f8f8c208068c83157cb7935b962c7a
Efficient Probabilistic Logic Reasoning with Graph Neural Networks
1 INTRODUCTION. Knowledge graphs collect and organize relations and attributes about entities, and are playing an increasingly important role in many applications, including question answering and information retrieval. Since knowledge graphs may contain incorrect, incomplete or duplicated records, additional processing such as link prediction, attribute classification, and record de-duplication is typically needed to improve the quality of knowledge graphs and derive new facts. Markov Logic Networks (MLNs) were proposed to combine hard logic rules and probabilistic graphical models, and can be applied to various tasks on knowledge graphs (Richardson & Domingos, 2006). The logic rules incorporate prior knowledge and allow MLNs to generalize in tasks with a small amount of labeled data, while the graphical model formalism provides a principled framework for dealing with uncertainty in data. However, inference in MLNs is computationally intensive, typically exponential in the number of entities, limiting the real-world application of MLNs. Also, logic rules can only cover a small part of the possible combinations of knowledge graph relations, hence limiting the application of models that are purely based on logic rules. Graph neural networks (GNNs) have recently gained increasing popularity for addressing many graph-related problems effectively (Dai et al., 2016; Li et al., 2016; Kipf & Welling, 2017; Schlichtkrull et al., 2018). GNN-based methods typically require sufficient labeled instances on specific end tasks to achieve good performance; however, knowledge graphs have a long-tail nature (Xiong et al., 2018), i.e., a large portion of the relations occur in only a few triples. Such a data scarcity problem among long-tail relations poses a tough challenge for purely data-driven methods. In this paper, we explore the combination of the best of both worlds, aiming for a method which is data-driven yet can still exploit the prior knowledge encoded in logic rules. To this end, we design a simple variant of graph neural networks, named ExpressGNN, which can be efficiently trained in the variational EM framework for MLN. An overview of our method is illustrated in Fig. 1. (Figure 1: Overview of our method for combining MLN and GNN using the variational EM framework.) ExpressGNN and the corresponding reasoning framework lead to the following desiderata: • Efficient inference and learning: ExpressGNN can be viewed as the inference network for MLN, which scales up MLN inference to much larger knowledge graph problems. • Combining logic rules and data supervision: ExpressGNN can leverage the prior knowledge encoded in logic rules, as well as the supervision from graph-structured data. • Compact and expressive model: ExpressGNN may have a small number of parameters, yet is sufficient to represent mean-field distributions in MLN. • Capability of zero-shot learning: ExpressGNN can deal with the zero-shot learning problem where the target predicate has few or zero labeled instances. 2 RELATED WORK. Statistical relational learning. There is an extensive literature related to the topic of logic reasoning. Here we only focus on the approaches that are most relevant to statistical relational learning on knowledge graphs. Logic rules can compactly encode domain knowledge and complex dependencies.
Thus , hard logic rules are widely used for reasoning in earlier attempts , such as expert systems ( Ignizio , 1991 ) and inductive logic programming ( Muggleton & De Raedt , 1994 ) . However , hard logic is very brittle and has difficulty in coping with uncertainty in both the logic rules and the facts in knowledge graphs . Later studies have explored to introduce probabilistic graphical model in logic reasoning , seeking to combine the advantages of relational and probabilistic approaches . Representative works including Relational Markov Networks ( RMNs ; Taskar et al . ( 2007 ) ) and Markov Logic Networks ( MLNs ; Richardson & Domingos ( 2006 ) ) were proposed in this background . Markov Logic Networks . MLNs have been widely studied due to the principled probabilistic model and effectiveness in a variety of reasoning tasks , including entity resolution ( Singla & Domingos , 2006a ) , social networks ( Zhang et al. , 2014 ) , information extraction ( Poon & Domingos , 2007 ) , etc . MLNs elegantly handle the noise in both logic rules and knowledge graphs . However , the inference and learning in MLNs is computationally expensive due to the exponential cost of constructing the ground Markov network and the NP-complete optimization problem . This hinders MLNs to be applied to industry-scale applications . Many works appear in the literature to improve the original MLNs in both accuracy ( Singla & Domingos , 2005 ; Mihalkova & Mooney , 2007 ) and efficiency ( Singla & Domingos , 2006b ; 2008 ; Poon & Domingos , 2006 ; Khot et al. , 2011 ; Bach et al. , 2015 ) . Nevertheless , to date , MLNs still struggle to handle large-scale knowledge bases in practice . Our framework ExpressGNN overcomes the scalability challenge of MLNs by efficient stochastic training algorithm and compact posterior parameterization with graph neural networks . Graph neural networks . Graph neural networks ( GNNs ; Dai et al . ( 2016 ) ; Kipf & Welling ( 2017 ) ) can learn effective representations of nodes by encoding local graph structures and node attributes . Due to the compactness of model and the capability of inductive learning , GNNs are widely used in modeling relational data ( Schlichtkrull et al. , 2018 ; Battaglia et al. , 2018 ) . Recently , Qu et al . ( 2019 ) proposed Graph Markov Neural Networks ( GMNNs ) , which employs GNNs together with conditional random fields to learn object representations . These existing works are simply data-driven , and not able to leverage the domain knowledge or human prior encoded in logic rules . To the best of our knowledge , ExpressGNN is the first work that connects GNNs with first-order logic rules to combine the advantages of both worlds . Knowledge graph embedding . Another line of research for knowledge graph reasoning is in the family of knowledge graph embedding methods , such as TransE ( Bordes et al. , 2013 ) , NTN ( Socher et al. , 2013 ) , DistMult ( Kadlec et al. , 2017 ) , ComplEx ( Trouillon et al. , 2016 ) , and RotatE ( Sun et al. , 2019 ) . These methods design various scoring functions to model relational patterns for knowledge graph reasoning , which are very effective in learning the transductive embeddings of both entities and relations . However , these methods are not able to leverage logic rules , which can be crucial in some relational learning tasks , and have no consistent probabilistic model . Compared to these methods , ExpressGNN has consistent probabilistic model built in the framework , and can incorporate knowledge from logic rules . 
A recent concurrent work Qu & Tang ( 2019 ) has proposed probabilistic Logic Neural Network ( pLogicNet ) , which integrates knowledge graph embedding methods with MLNs with EM framework . Compared to pLogicNet which uses a flattened embedding table as the entity representation , our work explicitly captures the structure knowledge encoded in the knowledge graph with GNNs and supplement the knowledge from logic formulae for the prediction task . 3 PRELIMINARY Knowledge Graph . A knowledge graph is a tuple K = ( C , R , O ) consisting of a set C = { c1 , . . . , cM } of M entities , a set R = { r1 , . . . , rN } of N relations , and a collection O = { o1 , . . . , oL } of L observed facts . In the language of first-order logic , entities are also called constants . For instance , a constant can be a person or an object . Relations are also called predicates . Each predicate is a logic function defined over C , i.e. , r ( · ) : C × . . .× C 7→ { 0 , 1 } . In general , the arguments of predicates are asymmetric . For instance , for the predicate r ( c , c′ ) : = L ( c , c′ ) ( L for Like ) which checks whether c likes c′ , the arguments c and c′ are not exchangeable . With a particular set of entities assigned to the arguments , the predicate is called a ground predicate , and each ground predicate ≡ a binary random variable , which will be used to define MLN . For a d-ary predicate , there are Md ways to ground it . We denote an assignment as ar . For instance , with ar = ( c , c′ ) , we can simply write a ground predicate r ( c , c′ ) as r ( ar ) . Each observed fact in knowledge bases is a truth value { 0 , 1 } assigned to a ground predicate . For instance , a fact o can be [ L ( c , c′ ) = 1 ] . The number of observed facts is typically much smaller than that of unobserved facts . We adopt the open-world paradigm and treat these unobserved facts ≡ latent variables . As a clearer representation , we express a knowledge base K by a bipartite graph GK = ( C , O , E ) , where nodes on one side of the graph correspond to constants C and nodes on the other side correspond to observed factsO , which is called factor in this case . The set of T edges , E = { e1 , . . . , eT } , connect constants and the observed facts . More specifically , an edge e = ( c , o , i ) between node c and o exists , if the ground predicate associated with o uses c as an argument in its i-th argument position ( Fig . 2 ) . Markov Logic Networks . MLNs use logic formulae to define potential functions in undirected graphical models . A logic formula f ( · ) : C × . . .× C 7→ { 0 , 1 } is a binary function defined via the composition of a few predicates . For instance , a logic formula f ( c , c′ ) can be Smoke ( c ) ∧ Friend ( c , c′ ) ⇒ Smoke ( c′ ) ⇐⇒ ¬Smoke ( c ) ∨ ¬Friend ( c , c′ ) ∨ Smoke ( c′ ) , where ¬ is negation and the equivalence is established by De Morgan ’ s law . Similar to predicates , we denote an assignment of constants to the arguments of a formula f as af , and the entire collection of consistent assignments of constants as Af = { a1f , a2f , . . . } . A formula with constants assigned to all of its arguments is called a ground formula . Given these logic representations , MLN can be defined as a joint distribution over all observed facts O and unobserved factsH as Pw ( O , H ) : = 1Z ( w ) exp ( ∑ f∈F wf ∑ af∈Af φf ( af ) ) , ( 1 ) where Z ( w ) is the partition function summing over all ground predicates and φf ( · ) is the potential function defined by a formula f as illustrated in Fig . 2 . 
One form of φf ( · ) can simply be the truth value of the logic formula f . For instance , if the formula is f ( c , c′ ) : = ¬S ( c ) ∨¬F ( c , c′ ) ∨S ( c′ ) , then φf ( c , c ′ ) can simply take value 1 when f ( c , c′ ) is true and 0 otherwise . Other more sophisticated φf can also be designed , which have the potential to take into account complex entities , such as images or texts , but will not be the focus of this paper . The weight wf can be viewed as the confidence score of the formula f : the higher the weight , the more accurate the formula is . Difference between KG and MLN . We note that the graph topology of knowledge graphs and MLN can are very different , although MLN is defined on top of knowledge graphs . Knowledge graphs are typically very sparse , where the number of edges ( observed relations ) is typically linear in the number of entities . However , the graphs associated with MLN are much denser , where the number of nodes can be quadratic or more in the number of entities , and the number of edges ( dependency between variables ) is also high-order polynomials in the number of entities .
The paper proposes to use graph neural networks (GNN) for inference in MLN. The main motivation seems to be that inference in traditional MLN is computationally inefficient. The paper is cryptic about precisely why this is the case. There is some allusion in the introduction as to grounding being exponential in the number of entities and the exponent being related to the number of variables in the clauses of the MLN but this should be more clearly stated (e.g., does inference being exponential in the number of entities hold for lifted BP?). In an effort to speed up inference, the authors propose to use GNN instead. Since GNN expressivity is limited, the authors propose to use entity specific embeddings to increase expressivity. The final ingredient is a mean-field approximation that helps break up the likelihood expression. Experiments are conducted on standard MLN benchmarks (UW-CSE, Kinship, Cora) and link prediction tasks. ExpressGNN achieves a 5-10X speedup compared to HL-MRF. On Cora HL-MRF seems to have run out of memory. On link prediction tasks, ExpressGNN seems to achieve better accuracy but this result is a bit difficult to appreciate since the ExpressGNN can't learn rules and the authors used NeuralLP to learn the rules followed by using ExpressGNN to learn parameters and inference.
SP:18ddfe1194f8f8c208068c83157cb7935b962c7a
Disentangled Cumulants Help Successor Representations Transfer to New Tasks
1 INTRODUCTION . Natural intelligence is able to solve many diverse tasks by transferring knowledge and skills from one task to another . For example , by knowing about objects and how to move them in 3D space , it is possible to learn how to sort them by shape or colour faster . However , many of the current state-of-the-art artificial reinforcement learning ( RL ) agents often struggle with such basic skill transfer . They are able to solve single tasks well , often beyond the ability of any natural intelligence ( Silver et al. , 2016 ; Mnih et al. , 2015 ; Jaderberg et al. , 2017 ) , however even small deviations from the task that the agent was trained on can result in catastrophic failures ( Lake et al. , 2016 ; Rusu et al. , 2016 ) . Although improving transfer in RL agents is an active area of research ( Higgins et al. , 2017a ; Rusu et al. , 2016 ; Nair et al. , 2018 ; Barreto et al. , 2018 ; Wulfmeier et al. , 2019 ; Torrey & Shavlik , 2010 ; Taylor & Stone , 2009 ; Thrun & Pratt , 2012 ; Caruana , 1997 ; Jaderberg et al. , 2017 ; Riedmiller et al. , 2018 ) , most typical deep RL agents start learning every task from scratch . This means that each time they have to re-learn how to perceive the world ( the mapping from a high-dimensional observation to state ) , and also how to act ( the mapping from state to action ) , with the majority of time arguably spent on the former . The optimisation procedure naturally discards information that is irrelevant to the task , which means that the learnt state representation is often unsuitable for new tasks . Biological intelligence appears to operate differently . A lot of knowledge tends to be discovered and learnt without explicit supervision ( Tolman , 1948 ; Clark , 2013 ; Friston , 2010 ) . This basic knowledge can then form the behavioural basis that can be used to solve new tasks faster . In this paper we argue that such transferable knowledge and skills should be acquired in artificial agents too . In particular , we want to start by building agents that have the ability to discover stable entities that make up the world and to learn basic skills to manipulate these entities . Compositional re-use of such skills enables biological intelligence to find reasonable solutions to many naturally occurring tasks , from goal-directed movement ( controlling your own position ) , to food gathering ( controlling the position of fruit and and nuts ) , or building a simple defence system ( re-positioning multiple stones into a fence or digging a ditch ) . In this paper we concentrate on goal-based natural tasks that can be expressed in natural language , and that do not require a specific execution order of actions . To this end , we propose a principled way to learn a small set of policies which can be re-used by the agents to quickly produce reasonable performance on an exponentially large set of goal-driven tasks within an environment . We propose a method on how to discover these policies in the absence of external supervision , where the agent accumulates a transferable set of basic skills through intrinsically motivated interactions with the environment . This first stage of free play builds the foundation to later solve many diverse extrinsically specified downstream tasks . 
We suggest formalising such a two-stage pipeline as the endogenous reinforcement learning (ERL) setting, in order to provide a consistent evaluation framework for some of the existing and future work on building RL agents with intrinsic learning signals (Gregor et al., 2017; Eysenbach et al., 2019; Hansen et al., 2019; Nair et al., 2018; Laversanne-Finot et al., 2018). We propose a disjoint two-step research pipeline, where the agent is allowed unlimited access to the environment in the ERL stage, in which no extrinsic rewards are provided and the agent is supposed to learn as much as it can through endogenously (intrinsically) driven interactions with the environment. This is followed by a standard RL stage, where the success of the previous step is evaluated in terms of the data-efficiency of learning on multiple diverse extrinsically (exogenously) specified downstream tasks in the same environment. We hope that by working in this extreme two-stage setting, where the agents have to learn useful knowledge with no access to task rewards, we can develop algorithms that learn more robust and transferable policies even in the traditional RL setting. In this paper we propose to use the ERL stage to discover disentangled features through task-free interactions with the environment, and then to solve a number of goal-driven self-generated tasks specified in the learnt disentangled feature space. In particular, we suggest learning k disentangled features, discretising them into m bins each, and learning km feature control policies that achieve the respective bin value of the given feature. We then re-combine the feature control policies learnt in the ERL stage to solve downstream tasks in the RL stage in a few-shot manner using Generalized Policy Improvement (GPI) (Barreto et al., 2018). We demonstrate theoretically and empirically that our proposed set of basis policies that learn to control disentangled features produces significantly better generalisation over a large number of downstream tasks. Intuitively, disentangled representations consist of the smallest set of features that represent those aspects of the world state that are independently affected by natural transformations and together explain most of the variance observed in an environment (Higgins et al., 2018) (see Fig. 1). Disentangled features, therefore, carve the world at its joints and provide a parsimonious representation of the world state that also points to which aspects of the world are stable, and which can in principle be transformed independently of each other. We conjecture that disentangled features align well with the idealised state space in which natural tasks are defined. Hence, by learning a set of policies that can control these features, an agent will acquire a set of basis policies which spans a large set of natural tasks defined in such an environment. Note that both disentangled features and their respective control policies can be learnt without an externally specified task, purely in the ERL setting. We provide both a theoretical justification for this setup and experimental illustrations of the benefit of disentangled representations in a large set of tasks of varying difficulty. Hence, the main contribution of this work is a theoretical result that extends the GPI framework to guarantee achievability on a large set of natural goal-driven tasks given a small set of basis policies that control disentangled features.
In particular, we demonstrate that given k disentangled features discretised into m bins, we can guarantee achievability with a deterministic policy on at least (m+1)^k downstream tasks by using GPI to recombine km feature control policies discovered and learnt purely through intrinsically driven interactions with the environment in the total absence of environment rewards. Our result holds for any tasks that can be specified in natural language and do not require a particular ordering of the actions to be solved. For example, our approach would be able to solve a task that requires sorting objects in space based on their colour or shape, or tidying up a messy playroom by putting all the toys in a box, but it will not be able to solve a task like cooking a meal, where the execution order of the different stages in the recipe matters. Related work. Past work on supervised and reinforcement learning has demonstrated how multitask learning, transfer and adaptation can provide strong performance gains across various domains (Caruana, 1997; Thrun & Pratt, 2012; Yosinski et al., 2014; Girshick et al., 2014; Jaderberg et al., 2017; Riedmiller et al., 2018; Wulfmeier et al., 2019). Typically these approaches use hand-crafted auxiliary tasks to boost learning of the downstream tasks of interest, which is not scalable and comes with no guarantees on which set of auxiliary tasks is optimal for boosting performance on a large number of downstream tasks. A number of other past approaches shared our motivation of replacing the hand-crafted auxiliary task specification with an automatic way of discovering a diverse and useful set of policies in the absence of externally specified tasks. The predominant approach so far has been to optimise an objective that encourages behaviours that are both diverse and distinguishable from each other (Gregor et al., 2017; Eysenbach et al., 2019; Hansen et al., 2019), or to learn how to solve intrinsic tasks sampled from a learnt representation space (Nair et al., 2018; Laversanne-Finot et al., 2018). While these approaches have been shown to be successful at transferring the learnt policies to solve certain downstream tasks, none of them provided theoretical guarantees on the downstream task coverage of the basis set of policies. Such guarantees were, however, provided by van Niekerk et al. (2018) and Barreto et al. (2017; 2018). These papers calculated how well a given set of policies can be transferred to solve a wide range of downstream tasks. However, they left open the question of how to discover such a set of basis policies. Hence, our work provides a unique perspective by addressing both the question of what makes a good basis set of policies to get certain guarantees on final task coverage, and the question of how these policies may be learnt in the ERL setting. Other related literature worth noting is the work by Higgins et al. (2017b), who showed that learning a downstream task policy over disentangled representations improved its robustness to visual changes in the environment. Another piece of work (Machado et al., 2018) demonstrated the usefulness of discovering reward-agnostic options through successor feature learning for improving data efficiency in downstream task learning. The benefits of these options, however, were primarily through improving exploration. No guarantees were given in terms of downstream task coverage. 2 BACKGROUND. Basic Reinforcement Learning (RL) formalism.
An RL agent interacts with its environment through a sequence of actions in such a way as to maximise the expected cumulative discounted rewards (Sutton & Barto, 1998). The RL problem is typically expressed using the formalism of Markov Decision Processes (MDPs) (Puterman, 1994). An MDP is a tuple M = (S, A, P, R, γ), where S and A are the sets of states and actions, P is the transition probability that predicts the distribution over next states given the current state and action, s′ ∼ P(·|s, a), R is the distribution of rewards r ∼ R(s, a, s′) received for making the transition from s to s′ under action a, and γ ∈ [0, 1) is the discount factor used to make future rewards progressively less valuable. Given an MDP, the goal of the agent is to maximise the expected return $G_t = \sum_{i=0}^{\infty} \gamma^i r_{t+i}$. This is done by learning a policy π(a|s) that selects the optimal action a ∈ A in each state s ∈ S. A typical RL problem attempts to find the optimal policy $\pi^* = \arg\max_{\pi} \mathbb{E}\big[\sum_{t \geq 0} \gamma^t r \mid \pi\big]$, where the expectation is taken over all possible interaction sequences of the agent's policy with the environment. The optimal policy is learnt with respect to a particular task operationalised through the choice of the reward function R(s, a, s′). Successor Features. The successor feature (SF) representation is a way of decoupling the dynamics of an environment from its reward function. This is done by representing an environment reward as $r(s, a, s') = \phi(s, a, s')^\top \mathbf{w}$, where φ(s, a, s′) is a vector of environment features. Notably, this representation does not decrease the expressivity of r since no assumptions are made on the form of φ. Moreover, this representation allows for decomposing value functions as follows: $Q^\pi(s, a) = \mathbb{E}_\pi\big[\sum_{k=0}^{\infty} \gamma^k \phi(s_k, a_k, s_{k+1}) \mid s_0 = s, a_0 = a\big]^\top \mathbf{w}_j = \psi^\pi(s, a)^\top \mathbf{w}_j$, (1) where $\psi^\pi(s, a)$ is a vector of reward-independent successor features. GPI & GPE. Generalised Policy Improvement (GPI) and Generalised Policy Evaluation (GPE) (Barreto et al., 2017) can be used to transfer a set of existing policies to solve new tasks. The framework is specified for a set of MDPs: $\mathcal{M}^\phi(S, A, P, \gamma) = \{ M(S, A, P, r, \gamma) \mid r(s, a, s') = \phi(s, a, s')^\top \mathbf{w} \}$, (2) induced by all possible choices of weights w that specify all possible rewards r, given a state space S, action space A, transition probabilities P, discount factor γ and features φ(s, a, s′). Note that the features are meant to be the same for all MDPs $M \in \mathcal{M}^\phi$. Given a policy $\pi_i$ learnt to solve task i specified by $\mathbf{w}_i$, we can evaluate its value under a different reward $r_j = \phi(s, a, s')^\top \mathbf{w}_j$ using GPE: $Q_j^{\pi_i}(s, a) = \psi^{\pi_i}(s, a)^\top \mathbf{w}_j$, (3) using our definition of successor features in (1). Hence, given a set of policies $\pi_1, \pi_2, \ldots, \pi_i$ induced by rewards $r_1, r_2, \ldots, r_i$ over a subset of the MDPs $M' \subset \mathcal{M}^\phi$, we can get a new policy $\pi_j$ for a new task induced by $r_j$ (note that $M_j \in \mathcal{M}^\phi$, $M_j \cap M' = \emptyset$) according to: $\pi_j(s) = \arg\max_a \max_i \psi^{\pi_i}(s, a)^\top \mathbf{w}_j$. (4)
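As a concrete illustration of GPE and GPI (Eqs. (3)–(4)), the following sketch assumes the successor features ψ of a few basis policies are already available as a tabular NumPy array; the array shapes and random placeholder values are purely illustrative, and in practice ψ would be learnt (e.g., by temporal-difference learning) rather than given.

```python
import numpy as np

rng = np.random.default_rng(0)
n_policies, n_states, n_actions, n_features = 4, 10, 5, 3

# psi[i, s, a] = successor features of basis policy pi_i at (s, a); placeholder values here.
psi = rng.random((n_policies, n_states, n_actions, n_features))

# Task weights w_j defining the new reward r_j(s, a, s') = phi(s, a, s')^T w_j.
w_j = np.array([1.0, -0.5, 0.0])

# GPE (Eq. 3): evaluate every basis policy on the new task.
q = psi @ w_j                          # shape: (n_policies, n_states, n_actions)

# GPI (Eq. 4): in each state, act greedily w.r.t. the best basis policy's value.
pi_j = q.max(axis=0).argmax(axis=1)    # shape: (n_states,), one action per state

print(pi_j)
```

The new policy never has to be trained from scratch: it is assembled on the fly from the successor features of the basis policies, which is what makes the recombination few-shot.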
The paper addresses the problem of policy transfer in reinforcement learning, which is an extremely relevant open problem in RL, and is being actively studied by the community. 
The authors propose a framework for discovering a set of policies without external supervision which can then be used to produce reasonable performance on extrinsic tasks. 
The work exhibits originality in that it shows that disentangled representations, learned by intrinsic rewards, can lead to learning behaviours that are transferable to novel situations. 

SP:31a497e4a1c74532ad3357b19e9fa4000db61115
Disentangled Cumulants Help Successor Representations Transfer to New Tasks
1 INTRODUCTION . Natural intelligence is able to solve many diverse tasks by transferring knowledge and skills from one task to another . For example , by knowing about objects and how to move them in 3D space , it is possible to learn how to sort them by shape or colour faster . However , many of the current state-of-the-art artificial reinforcement learning ( RL ) agents often struggle with such basic skill transfer . They are able to solve single tasks well , often beyond the ability of any natural intelligence ( Silver et al. , 2016 ; Mnih et al. , 2015 ; Jaderberg et al. , 2017 ) , however even small deviations from the task that the agent was trained on can result in catastrophic failures ( Lake et al. , 2016 ; Rusu et al. , 2016 ) . Although improving transfer in RL agents is an active area of research ( Higgins et al. , 2017a ; Rusu et al. , 2016 ; Nair et al. , 2018 ; Barreto et al. , 2018 ; Wulfmeier et al. , 2019 ; Torrey & Shavlik , 2010 ; Taylor & Stone , 2009 ; Thrun & Pratt , 2012 ; Caruana , 1997 ; Jaderberg et al. , 2017 ; Riedmiller et al. , 2018 ) , most typical deep RL agents start learning every task from scratch . This means that each time they have to re-learn how to perceive the world ( the mapping from a high-dimensional observation to state ) , and also how to act ( the mapping from state to action ) , with the majority of time arguably spent on the former . The optimisation procedure naturally discards information that is irrelevant to the task , which means that the learnt state representation is often unsuitable for new tasks . Biological intelligence appears to operate differently . A lot of knowledge tends to be discovered and learnt without explicit supervision ( Tolman , 1948 ; Clark , 2013 ; Friston , 2010 ) . This basic knowledge can then form the behavioural basis that can be used to solve new tasks faster . In this paper we argue that such transferable knowledge and skills should be acquired in artificial agents too . In particular , we want to start by building agents that have the ability to discover stable entities that make up the world and to learn basic skills to manipulate these entities . Compositional re-use of such skills enables biological intelligence to find reasonable solutions to many naturally occurring tasks , from goal-directed movement ( controlling your own position ) , to food gathering ( controlling the position of fruit and and nuts ) , or building a simple defence system ( re-positioning multiple stones into a fence or digging a ditch ) . In this paper we concentrate on goal-based natural tasks that can be expressed in natural language , and that do not require a specific execution order of actions . To this end , we propose a principled way to learn a small set of policies which can be re-used by the agents to quickly produce reasonable performance on an exponentially large set of goal-driven tasks within an environment . We propose a method on how to discover these policies in the absence of external supervision , where the agent accumulates a transferable set of basic skills through intrinsically motivated interactions with the environment . This first stage of free play builds the foundation to later solve many diverse extrinsically specified downstream tasks . 
We suggest formalising such a two-stage pipeline as the endogenous reinforcement learning ( ERL ) setting , in order to provide a consistent evaluation framework for some of the existing and future work on building RL agents with intrinsic learning signals ( Gregor et al. , 2017 ; Eysenbach et al. , 2019 ; Hansen et al. , 2019 ; Nair et al. , 2018 ; Laversanne-Finot et al. , 2018 ) . We propose a disjoint two step research pipeline , where the agent is allowed unlimited access to the environment in the ERL stage , where no extrinsic rewards are provided and the agent is supposed to learn as much as it can through endogenously ( intrinsically ) driven interactions with the environment . This is followed by a standard RL stage where the success of the previous step is evaluation in terms of data-efficiency of learning on multiple diverse extrinsically ( exogenously ) specified downstream tasks in the same environment . We hope that by working in this extreme two stage setting , where the agents have to learn useful knowledge with no access to task rewards , we can develop algorithms that learn more robust and transferable policies even in the traditional RL setting . In this paper we propose to use the ERL stage to discover disentangled features through task-free interactions with the environment , and then solving a number of goal-driven self-generated tasks specified in the learnt disentangled feature space . In particular , we suggest learning k disentangled features , discretising them intom bins each , and learning km feature control policies that achieve the respective bin value of the given feature . We then re-combine the feature control policies learnt in the ERL stage to solve downstream tasks in the RL stage in a few-shot manner using Generalized Policy Improvement ( GPI ) ( Barreto et al. , 2018 ) . We demonstrate theoretically and empirically that our proposed set of basis policies that learn to control disentangled features produce significantly better generalisation over a large number of downstream tasks . Intuitively , disentangled representations consist of the smallest set of features that represent those aspects of the world state that are independently affected by natural transformations and together explain the most of the variance observed in an environment ( Higgins et al. , 2018 ) ( see Fig . 1 ) . Disentangled features , therefore , carve the world at its joints and provide a parsimonious representation of the world state that also points to which aspects of the world are stable , and which can in principle be transformed independently of each other . We conjecture that disentangled features align well with the idealised state space in which natural tasks are defined . Hence , by learning a set of policies that can control these features an agent will acquire a set of basis policies which spans a large set of natural tasks defined in such an environment . Note that both disentangled features and their respective control policies can be learnt without an externally specified task , purely in the ERL setting . We provide both theoretical justification for this setup , as well as experimental illustrations of the benefit of disentangled representations in a large set of tasks of varying difficulty . Hence , the main contribution of this work is a theoretical result that extends the GPI framework to guarantee achievability on a large set of natural goal-driven tasks given a small set of basis policies that control disentangled features . 
In particular , we demonstrate that given k disentangled features discretised intom bins , we can guarantee achievability with a deterministic policy on at least ( m+1 ) k downstream tasks by using GPI to recombine km feature control policies discovered and learnt purely through intrinsically driven interactions with the environment in the total absence of environment rewards . Our result holds for any tasks that can be specified in natural language and do not require a particular ordering of the actions to be solved . For example , our approach would be able to solve a task that requires sorting objects in space based on their colour or shape , or tidying up a messy playroom by putting all the toys in a box , but it will not be able to solve a task like cooking a meal , where the execution order of the different stages in the recipe matters . Related work Past work on supervised and reinforcement learning has demonstrated how multitask learning , transfer and adaptation can provide strong performance gains across various domains ( Caruana , 1997 ; Thrun & Pratt , 2012 ; Yosinski et al. , 2014 ; Girshick et al. , 2014 ; Jaderberg et al. , 2017 ; Riedmiller et al. , 2018 ; Wulfmeier et al. , 2019 ) . Typically these approaches use hand-crafted auxiliary tasks to boost learning of the downstream tasks of interest , which is not scalable and comes with no guarantees on which set of auxiliary tasks is optimal for boosting performance on a large number of downstream tasks . A number of other past approaches shared our motivation of replacing the hand-crafted auxiliary task specification by an automatic way of discovering a diverse and useful set of policies in the absence of externally specified tasks . The predominant approach so far has been to optimise an objective that encourages behaviours that are both diverse and distinguishable from each other ( Gregor et al. , 2017 ; Eysenbach et al. , 2019 ; Hansen et al. , 2019 ) , or to learn how to solve intrinsic tasks sampled from a learnt representation space ( Nair et al. , 2018 ; Laversanne-Finot et al. , 2018 ) . While these approaches have been shown to be successful on transferring the learnt policies to solve certain downstream tasks , none of them provided theoretical guarantees on the downstream task coverage by the basis set of policies . Such guarantees were however provided by van Niekerk et al . ( 2018 ) and Barreto et al . ( 2017 ; 2018 ) . These papers calculated how well a given set of policies can be transferred to solve a wide range of downstream tasks . However , they left the question of how to discover such a set of basis policies open . Hence , our work provides a unique perspective by addressing both the questions of what makes a good basis set of policies to get certain guarantees on final task coverage , and how these policies may be learnt in the ERL setting . Other related literature worth noting is the work by Higgins et al . ( 2017b ) , who showed that learning a downstream task policy over disentangled representations improved its robustness to visual changes in the environment . Another piece of work ( Machado et al. , 2018 ) demonstrated the usefulness of discovering reward-agnostic options through successor feature learning for improving data efficiency in downstream task learning . The benefits of these options , however , were primarily through improving exploration . No guarantees were given in terms of downstream task coverage . 2 BACKGROUND . Basic Reinforcement Learning ( RL ) formalism . 
An RL agent interacts with its environment through a sequence of actions in such a way as to maximise the expected cumulative discounted rewards ( Sutton & Barto , 1998 ) . The RL problem is typically expressed using the formalism of Markov Decision Processes ( MDPs ) ( Puterman , 1994 ) . An MDP is a tuple M = ( S , A , P , R , γ ) , where S and A are the sets of states and actions , P is the transition probability that predicts the distribution over next states given the current state and action s′ ∼P ( ·|s , a ) , R is the distribution of rewards r∼R ( s , a , s′ ) received for making the transition s a7→ s′ , and γ ∈ [ 0,1 ) is the discount factor used to make future rewards progressively less valuable . Given an MDP , the goal of the agent is to maximise the expected returnGt= ∑∞ i=0γ irt+i . This is done by learning a policy π ( a|s ) that selects the optimal action a∈A in each state s∈S . A typical RL problem attempts to find the optimal policy π∗= argmax π E [ ∑ t≥0γ tr|π ] , where the expectation is taken over all possible interaction sequences of the agent ’ s policy with the environment . The optimal policy is learnt with respect to a particular task operationalised through the choice of the reward functionR ( s , a , s′ ) . Successor Features The successor feature ( SF ) representation is a way of decoupling the dynamics of an environment from its reward function . This is done by representing an environment reward as r ( s , a , s′ ) = φ ( s , a , s′ ) > w where φ ( s , a , s′ ) is a vector of environment features . Notably , this representation does not decrease the expressivity of r since no assumptions are made on the form of φ . Moreover , this representation allows for decomposing value functions as follows : Qπ ( s , a ) =Eπ [ ∞∑ k=0 γkφ ( st , at , st+1 ) |s0 =s , at=a ] > wj=ψ ( s , a ) π > wj . ( 1 ) where ψ ( s , a ) π is a vector of reward-independent successor features . GPI & GPE Generalised Policy Improvement ( GPI ) and Generalised Policy Evaluation ( GPE ) ( Barreto et al. , 2017 ) can be used to transfer a set of existing policies to solve new tasks . The framework is specified for a set of MDPs : Mφ ( S , A , P , γ ) = { Mφ ( S , A , P , r , γ ) | r ( s , a , s′ ) =φ ( s , a , s′ ) > w } ( 2 ) induced by all possible choices of weights w that specify all possible rewards r , given a state space S , action spaceA , transition probabilities P , discount factor γ and features φ ( s , a , s′ ) . Note that the features are meant to be the same for all MDPs M ∈Mφ . Given a policy πi learnt to solve task i specified bywi , we can evaluate its value under a different reward rj=φ ( s , a , s′ ) > wj using GPE : Qπij ( s , a ) =ψ ( s , a ) πi > wj ( 3 ) using our definition of successor features defined in ( 1 ) . Hence , given a set of policies π1 , π2 , ... , πi induced by rewards r1 , r2 , ... , ri over a subset of the MDPs M ′ ⊂Mφ , we can get a new policy πj for a new task induced by rj ( note that Mj ∈Mφ , Mj∩M ′=∅ ) according to : πj ( s ) =argmax a max i ψ ( s , a ) πi > wj . ( 4 )
This paper proposes to pre-train policies on some goal-reaching tasks, and then leverage the associated successor features to improve the learning of a new task. The method heavily draws from the Generalized Policy Evaluation/Improvement framework without adding much to it. The only relevant point would be showing (as the title indicates) how to obtain disentangled cumulants, and whether they help transfer to new tasks. Nevertheless, both the definition, the full method, and the claimed benefits are quite ambiguous.
SP:31a497e4a1c74532ad3357b19e9fa4000db61115
VariBAD: A Very Good Method for Bayes-Adaptive Deep RL via Meta-Learning
1 INTRODUCTION. Reinforcement learning (RL) is typically concerned with finding an optimal policy that maximises expected return for a given Markov decision process (MDP) with an unknown reward and transition function. If these were known, the optimal policy could in theory be computed without environment interactions. By contrast, learning in an unknown environment usually requires trading off exploration (learning about the environment) and exploitation (taking promising actions). Balancing this trade-off is key to maximising expected return during learning, which is desirable in many settings, particularly in high-stakes real-world applications like healthcare and education (Liu et al., 2014; Yauney & Shah, 2018). A Bayes-optimal policy, which does this trade-off optimally, conditions actions not only on the environment state but on the agent's own uncertainty about the current MDP. In principle, a Bayes-optimal policy can be computed using the framework of Bayes-adaptive Markov decision processes (BAMDPs) (Martin, 1967; Duff & Barto, 2002), in which the agent maintains a belief distribution over possible environments. Augmenting the state space of the underlying MDP with this belief yields a BAMDP, a special case of a belief MDP (Kaelbling et al., 1998). A Bayes-optimal agent maximises expected return in the BAMDP by systematically seeking out the data needed to quickly reduce uncertainty, but only insofar as doing so helps maximise expected return. Its performance is bounded from above by the optimal policy for the given MDP, which does not need to take exploratory actions but requires prior knowledge about the MDP to compute. Unfortunately, planning in a BAMDP, i.e., computing a Bayes-optimal policy that conditions on the augmented state, is intractable for all but the smallest tasks. A common shortcut is to rely instead on posterior sampling (Thompson, 1933; Strens, 2000; Osband et al., 2013). Here, the agent periodically samples a single hypothesis MDP (e.g., at the beginning of an episode) from its posterior, and the policy that is optimal for the sampled MDP is followed until the next sample is drawn. Planning is far more tractable since it is done on a regular MDP, not a BAMDP. However, posterior sampling's exploration can be highly inefficient and far from Bayes-optimal. Consider the example of a gridworld in Figure 1, where the agent must navigate to an unknown goal located in the grey area (1a). To maintain a posterior, the agent can uniformly assign non-zero probability to cells where the goal could be, and zero to all other cells. A Bayes-optimal strategy strategically searches the set of goal positions that the posterior considers possible, until the goal is found (1b). Posterior sampling, by contrast, samples a possible goal position, takes the shortest route there, and then resamples a different goal position from the updated posterior (1c). Doing so is much less efficient since the agent's uncertainty is not reduced optimally (e.g., states are revisited). As this example illustrates, Bayes-optimal policies can explore much more efficiently than posterior sampling. A key challenge is to learn approximately Bayes-optimal policies while retaining the tractability of posterior sampling. In addition, the inference involved in maintaining a posterior belief, needed even for posterior sampling, may itself be intractable.
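The gridworld example can be made concrete with a small sketch of the goal posterior. The grid size and the update rule below are assumptions chosen purely for illustration: the belief starts uniform over candidate goal cells, visiting a cell that does not contain the goal zeroes out that cell's probability and renormalises, and posterior sampling then simply draws one candidate goal from the current belief.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 5x5 grid; candidate goal cells form a 3x3 "grey area" in the top-right corner.
candidate_cells = [(r, c) for r in range(3) for c in range(2, 5)]
belief = {cell: 1.0 / len(candidate_cells) for cell in candidate_cells}

def update_belief(belief, visited_cell, goal_found):
    """Bayes update for a deterministic goal observation: rule out visited non-goal cells."""
    if goal_found:
        return {cell: float(cell == visited_cell) for cell in belief}
    post = {cell: (0.0 if cell == visited_cell else p) for cell, p in belief.items()}
    z = sum(post.values())
    return {cell: p / z for cell, p in post.items()}

def posterior_sample(belief):
    """Posterior sampling: commit to one hypothesised goal drawn from the current belief."""
    cells = list(belief)
    probs = np.array([belief[c] for c in cells])
    return cells[rng.choice(len(cells), p=probs)]

belief = update_belief(belief, visited_cell=(0, 2), goal_found=False)
print(posterior_sample(belief))  # a single goal hypothesis; a Bayes-optimal agent would
                                 # instead sweep the remaining candidates systematically
```

The inefficiency discussed above comes from the commitment step: posterior sampling follows the shortest path to one sampled hypothesis, whereas the Bayes-optimal strategy plans against the whole belief.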
In this paper , we combine ideas from Bayesian RL , approximate variational inference , and metalearning to tackle these challenges , and equip an agent with the ability to strategically explore unseen ( but related ) environments for a given distribution , in order to maximise its expected online return . More specifically , we propose variational Bayes-Adaptive Deep RL ( variBAD ) , a way to meta-learn to perform approximate inference on an unknown task,1 and incorporate task uncertainty directly during action selection . Given a distribution over MDPs p ( M ) , we represent a single MDPM using a learned , low-dimensional stochastic latent variable m and jointly meta-train : 1 . A variational auto-encoder that can infer the posterior distribution over m in a new task , given the agent ’ s experience , while interacting with the environment , and 2 . A policy that conditions on this posterior belief over MDP embeddings , and thus learns how to trade off exploration and exploitation when selecting actions under task uncertainty . Figure 1e shows the performance of our method versus the hard-coded optimal ( with privileged goal information ) , Bayes-optimal , and posterior sampling exploration strategies . VariBAD ’ s performance closely matches the Bayes-optimal one , matching optimal performance from the third rollout . 1We use the terms environment , task , and MDP , interchangeably . Previous approaches to BAMDPs are only tractable in environments with small action and state spaces or rely on privileged information about the task during training . VariBAD offers a tractable and flexible approach for learning Bayes-adaptive policies tailored to the training task distribution , with the only assumption that such a distribution is available for meta-training . We evaluate our approach on the gridworld shown above and on MuJoCo domains that are widely used in meta-RL , and show that variBAD exhibits superior exploratory behaviour at test time compared to existing meta-learning methods , achieving higher returns during learning . As such , variBAD opens a path to tractable approximate Bayes-optimal exploration for deep reinforcement learning . 2 BACKGROUND . We define a Markov decision process ( MDP ) as a tuple M = ( S , A , R , T , T0 , γ , H ) with S a set of states , A a set of actions , R ( rt+1|st , at , st+1 ) a reward function , T ( st+1|st , at ) a transition function , T0 ( s0 ) an initial state distribution , γ a discount factor , andH the horizon . In the standard RL setting , we want to learn a policy π that maximises J ( π ) = ET0 , T , π [ ∑H−1 t=0 γ tR ( rt+1|st , at , st+1 ) ] , the expected return . Here , we consider a multi-task meta-learning setting , which we introduce next . 2.1 TRAINING SETUP . We adopt the standard meta-learning setting where we have a distribution p ( M ) over MDPs from which we can sample during meta-training , with an MDP Mi ∼ p ( M ) defined by a tuple Mi = ( S , A , Ri , Ti , Ti,0 , γ , H ) . Across tasks , the reward and transition functions vary but share some structure . The index i represents an unknown task description ( e.g. , a goal position or natural language instruction ) or task ID . Sampling an MDP from p ( M ) is typically done by sampling a reward and transition function from a distribution p ( R , T ) . During meta-training , batches of tasks are repeatedly sampled , and a small training procedure is performed on each of them , with the goal of learning to learn ( for an overview of existing methods see Sec 4 ) . 
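For completeness, the expected-return objective and the meta-training loop of Section 2.1 can be sketched as follows. The helpers `sample_task`, `adapt`, and `rollout_rewards` are placeholders standing in for the task distribution p(M), the small per-task training procedure, and an environment rollout; none of these are specified at this point in the paper, so this is only a schematic of the setup, not the authors' training code.

```python
def discounted_return(rewards, gamma=0.99):
    """Monte-Carlo estimate of the return G = sum_t gamma^t * r_t for one rollout."""
    return sum(gamma ** t * r for t, r in enumerate(rewards))

def meta_train_step(sample_task, adapt, rollout_rewards, agent, batch_size=8):
    """One meta-training step: sample tasks M_i ~ p(M), run a small training procedure
    on each, and score the agent by its average discounted return while learning."""
    returns = []
    for _ in range(batch_size):
        task = sample_task()                # M_i ~ p(M)
        adapted = adapt(agent, task)        # small per-task training procedure
        returns.append(discounted_return(rollout_rewards(adapted, task)))
    return sum(returns) / len(returns)
```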
At meta-test time, the agent is evaluated based on the average return it achieves during learning, for tasks drawn from p. Doing this well requires at least two things: (1) incorporating prior knowledge obtained in related tasks, and (2) reasoning about task uncertainty when selecting actions to trade off exploration and exploitation. In the following, we combine ideas from meta-learning and Bayesian RL to tackle these challenges. 2.2 BAYESIAN REINFORCEMENT LEARNING. When the MDP is unknown, optimal decision making has to trade off exploration and exploitation when selecting actions. In principle, this can be done by taking a Bayesian approach to reinforcement learning formalised as a Bayes-Adaptive MDP (BAMDP), the solution to which is a Bayes-optimal policy (Bellman, 1956; Duff & Barto, 2002; Ghavamzadeh et al., 2015). In the Bayesian formulation of RL, we assume that the transition and reward functions are distributed according to a prior $b_0 = p(R, T)$. Since the agent does not have access to the true reward and transition function, it can maintain a belief $b_t(R, T) = p(R, T \mid \tau_{:t})$, which is the posterior over the MDP given the agent's experience $\tau_{:t} = \{s_0, a_0, r_1, s_1, a_1, \ldots, s_t\}$ up until the current timestep. This is often done by maintaining a distribution over the model parameters. To allow the agent to incorporate the task uncertainty into its decision-making, this belief can be augmented to the state, resulting in hyper-states $s^+_t \in S^+ = S \times B$, where $B$ is the belief space. These transition according to $T^+(s^+_{t+1} \mid s^+_t, a_t, r_t) = T^+(s_{t+1}, b_{t+1} \mid s_t, a_t, r_t, b_t) = T^+(s_{t+1} \mid s_t, a_t, b_t)\, T^+(b_{t+1} \mid s_t, a_t, r_t, b_t, s_{t+1}) = \mathbb{E}_{b_t}[T(s_{t+1} \mid s_t, a_t)]\, \delta(b_{t+1} = p(R, T \mid \tau_{:t+1}))$, (1) i.e., the new environment state $s_{t+1}$ is the expected new state w.r.t. the current posterior distribution over the transition function, and the belief is updated deterministically according to Bayes rule. The reward function on hyper-states is defined as the expected reward under the current posterior (after the state transition) over reward functions, $R^+(s^+_t, a_t, s^+_{t+1}) = R^+(s_t, b_t, a_t, s_{t+1}, b_{t+1}) = \mathbb{E}_{b_{t+1}}[R(s_t, a_t, s_{t+1})]$. (2) This results in a BAMDP $M^+ = (S^+, A, R^+, T^+, T^+_0, \gamma, H^+)$ (Duff & Barto, 2002), which is a special case of a belief MDP, i.e., the MDP formed by taking the posterior beliefs maintained by an agent in a partially observable MDP and reinterpreting them as Markov states (Cassandra et al., 1994). In an arbitrary belief MDP, the belief is over a hidden state that can change over time. In a BAMDP, the belief is over the transition and reward functions, which are constant for a given task. The agent's objective is now to maximise the expected return in the BAMDP, $J^+(\pi) = \mathbb{E}_{b_0, T^+_0, T^+, \pi}\big[\sum_{t=0}^{H^+-1} \gamma^t R^+(r_{t+1} \mid s^+_t, a_t, s^+_{t+1})\big]$, (3) i.e., maximise the expected return in an initially unknown environment, while learning, within the horizon $H^+$. Note the distinction between the MDP horizon $H$ and the BAMDP horizon $H^+$. Although they often coincide, we might instead want the agent to act Bayes-optimally within the first N MDP episodes, so $H^+ = N \times H$. Trading off exploration and exploitation optimally depends heavily on how much time the agent has left (e.g., to decide whether information-seeking actions are worth it).
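The hyper-state construction in Eqs. (1)–(2) can be illustrated with a finite hypothesis space. In the sketch below, the belief is a categorical distribution over a handful of candidate reward functions, the belief update follows Bayes' rule for an observed reward, and R+ is the expected reward under the updated posterior; the candidate reward functions and the Gaussian observation model are assumptions made only to keep the example small, not part of the paper's formulation.

```python
import numpy as np

# Hypothetical finite hypothesis space: each candidate reward maps (s, a) to a mean reward.
candidate_rewards = [
    lambda s, a: 1.0 if s == "goal" else 0.0,
    lambda s, a: 1.0 if a == "pick" else 0.0,
    lambda s, a: 0.5,
]
belief = np.ones(len(candidate_rewards)) / len(candidate_rewards)  # b_0: uniform prior

def likelihood(observed_r, predicted_r, sigma=0.1):
    """Assumed Gaussian observation model for rewards (illustration only)."""
    return np.exp(-0.5 * ((observed_r - predicted_r) / sigma) ** 2)

def belief_update(belief, s, a, observed_r):
    """Deterministic Bayes-rule update of the posterior over reward hypotheses."""
    post = belief * np.array([likelihood(observed_r, R(s, a)) for R in candidate_rewards])
    return post / post.sum()

def expected_reward(belief, s, a):
    """R+ as in Eq. (2): the reward on hyper-states is the posterior-expected reward."""
    return float(np.dot(belief, [R(s, a) for R in candidate_rewards]))

belief = belief_update(belief, s="goal", a="move", observed_r=1.0)
print(belief, expected_reward(belief, "goal", "move"))
```

With a parametric model class instead of a finite list, this exact update is what becomes intractable, which motivates the approximate inference introduced next.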
The objective in ( 3 ) is maximised by the Bayes-optimal policy , which automatically trades off exploration and exploitation : it takes exploratory actions to reduce its task uncertainty only insofar as it helps to maximise the expected return within the horizon . The BAMDP framework is powerful because it provides a principled way of formulating Bayes-optimal behaviour . However , solving the BAMDP is hopelessly intractable for most interesting problems . The main challenges are as follows . • We typically do not know the parameterisation of the true reward and/or transition model , • The belief update ( computing the posterior p ( R , T |τ : t ) ) is often intractable , and • Even with the correct posterior , planning in belief space is typically intractable . In the following , we propose a method that simultaneously meta-learns the reward and transition functions , how to perform inference in an unknown MDP , and how to use the belief to maximise expected online return . Since the Bayes-adaptive policy is learned end-to-end with the inference framework , no planning is necessary at test time . We make minimal assumptions ( no privileged task information is required during training ) , resulting in a highly flexible and scalable approach to Bayes-adaptive Deep RL .
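A minimal sketch of the two jointly trained components described above, assuming a PyTorch implementation: a recurrent encoder that maps the interaction history to a Gaussian posterior over the latent task variable m, and a policy whose input is the state concatenated with that posterior's mean and log-variance. The layer sizes and the specific architecture here are assumptions for illustration; the paper's actual model (decoder, ELBO terms, and RL loss) is specified in its method section.

```python
import torch
import torch.nn as nn

class BeliefEncoder(nn.Module):
    """Maps the trajectory tau_{:t} = (s, a, r, ...) to an approximate posterior q(m | tau_{:t})."""
    def __init__(self, transition_dim, latent_dim=5, hidden_dim=64):
        super().__init__()
        self.gru = nn.GRU(transition_dim, hidden_dim, batch_first=True)
        self.mu = nn.Linear(hidden_dim, latent_dim)
        self.logvar = nn.Linear(hidden_dim, latent_dim)

    def forward(self, trajectory):             # trajectory: (batch, time, transition_dim)
        _, h = self.gru(trajectory)
        h = h.squeeze(0)
        return self.mu(h), self.logvar(h)      # parameters of the Gaussian belief over m

class BeliefConditionedPolicy(nn.Module):
    """pi(a | s, q(m | tau_{:t})): the policy sees the state and the belief statistics."""
    def __init__(self, state_dim, latent_dim, n_actions, hidden_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + 2 * latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, n_actions),
        )

    def forward(self, state, mu, logvar):
        return self.net(torch.cat([state, mu, logvar], dim=-1))  # action logits

# Usage sketch with made-up dimensions.
enc = BeliefEncoder(transition_dim=8)
pol = BeliefConditionedPolicy(state_dim=4, latent_dim=5, n_actions=3)
mu, logvar = enc(torch.randn(2, 7, 8))         # 2 trajectories of length 7
logits = pol(torch.randn(2, 4), mu, logvar)    # action logits for the current states
```

Conditioning on the belief statistics (rather than a single sampled m) is what lets the learnt policy behave Bayes-adaptively at test time without any explicit planning.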
This paper considers a version of the reinforcement learning problem where an unknown prior distribution over Markov decision processes is assumed and the learner can sample from it. After sampling an MDP, standard reinforcement learning is performed. The paper then investigates the Bayes-optimal strategy for such a meta-learning setting. The experiments are done on artificial maze-solving tasks.
SP:daf8080733b61b118faad8dca6f09691ecaa3005
VariBAD: A Very Good Method for Bayes-Adaptive Deep RL via Meta-Learning
1 INTRODUCTION . Reinforcement learning ( RL ) is typically concerned with finding an optimal policy that maximises expected return for a given Markov decision process ( MDP ) with an unknown reward and transition function . If these were known , the optimal policy could in theory be computed without environment interactions . By contrast , learning in an unknown environment usually requires trading off exploration ( learning about the environment ) and exploitation ( taking promising actions ) . Balancing this trade-off is key to maximising expected return during learning , which is desirable in many settings , particularly in high-stakes real-world applications like healthcare and education ( Liu et al. , 2014 ; Yauney & Shah , 2018 ) . A Bayes-optimal policy , which does this trade-off optimally , conditions actions not only on the environment state but on the agent ’ s own uncertainty about the current MDP . In principle , a Bayes-optimal policy can be computed using the framework of Bayes-adaptive Markov decision processes ( BAMDPs ) ( Martin , 1967 ; Duff & Barto , 2002 ) , in which the agent maintains a belief distribution over possible environments . Augmenting the state space of the underlying MDP with this belief yields a BAMDP , a special case of a belief MDP ( Kaelbling et al. , 1998 ) . A Bayes-optimal agent maximises expected return in the BAMDP by systematically seeking out the data needed to quickly reduce uncertainty , but only insofar as doing so helps maximise expected return . Its performance is bounded from above by the optimal policy for the given MDP , which does not need to take exploratory actions but requires prior knowledge about the MDP to compute . Unfortunately , planning in a BAMDP , i.e. , computing a Bayes-optimal policy that conditions on the augmented state , is intractable for all but the smallest tasks . A common shortcut is to rely instead on posterior sampling ( Thompson , 1933 ; Strens , 2000 ; Osband et al. , 2013 ) . Here , the agent periodically samples a single hypothesis MDP ( e.g. , at the beginning of an episode ) from its posterior , and the policy that is optimal for the sampled MDP is followed until the next sample is drawn . Planning is far more tractable since it is done on a regular MDP , not a BAMDP . However , posterior sampling ’ s exploration can be highly inefficient and far from Bayes-optimal . ∗luisa.zintgraf @ cs.ox.ac.uk Consider the example of a gridworld in Figure 1 , where the agent must navigate to an unknown goal located in the grey area ( 1a ) . To maintain a posterior , the agent can uniformly assign non-zero probability to cells where the goal could be , and zero to all other cells . A Bayes-optimal strategy strategically searches the set of goal positions that the posterior considers possible , until the goal is found ( 1b ) . Posterior sampling by contrast samples a possible goal position , takes the shortest route there , and then resamples a different goal position from the updated posterior ( 1c ) . Doing so is much less efficient since the agent ’ s uncertainty is not reduced optimally ( e.g. , states are revisited ) . As this example illustrates , Bayes-optimal policies can explore much more efficiently than posterior sampling . A key challenge is to learn approximately Bayes-optimal policies while retaining the tractability of posterior sampling . In addition , the inference involved in maintaining a posterior belief , needed even for posterior sampling , may itself be intractable . 
In this paper , we combine ideas from Bayesian RL , approximate variational inference , and meta-learning to tackle these challenges , and equip an agent with the ability to strategically explore unseen ( but related ) environments drawn from a given distribution , in order to maximise its expected online return . More specifically , we propose variational Bayes-Adaptive Deep RL ( variBAD ) , a way to meta-learn to perform approximate inference on an unknown task ( we use the terms environment , task , and MDP interchangeably ) , and incorporate task uncertainty directly during action selection . Given a distribution over MDPs p ( M ) , we represent a single MDP M using a learned , low-dimensional stochastic latent variable m and jointly meta-train : 1 . A variational auto-encoder that can infer the posterior distribution over m in a new task , given the agent ’ s experience , while interacting with the environment , and 2 . A policy that conditions on this posterior belief over MDP embeddings , and thus learns how to trade off exploration and exploitation when selecting actions under task uncertainty . Figure 1e shows the performance of our method versus the hard-coded optimal ( with privileged goal information ) , Bayes-optimal , and posterior sampling exploration strategies . VariBAD ’ s performance closely matches the Bayes-optimal one , reaching optimal performance from the third rollout . Previous approaches to BAMDPs are only tractable in environments with small action and state spaces or rely on privileged information about the task during training . VariBAD offers a tractable and flexible approach for learning Bayes-adaptive policies tailored to the training task distribution , with the only assumption that such a distribution is available for meta-training . We evaluate our approach on the gridworld shown above and on MuJoCo domains that are widely used in meta-RL , and show that variBAD exhibits superior exploratory behaviour at test time compared to existing meta-learning methods , achieving higher returns during learning . As such , variBAD opens a path to tractable approximate Bayes-optimal exploration for deep reinforcement learning . 2 BACKGROUND . We define a Markov decision process ( MDP ) as a tuple M = ( S , A , R , T , T0 , γ , H ) with S a set of states , A a set of actions , R ( rt+1|st , at , st+1 ) a reward function , T ( st+1|st , at ) a transition function , T0 ( s0 ) an initial state distribution , γ a discount factor , and H the horizon . In the standard RL setting , we want to learn a policy π that maximises $J ( \pi ) = \mathbb{E}_{T_0 , T , \pi} \left[ \sum_{t=0}^{H-1} \gamma^t R ( r_{t+1} \mid s_t , a_t , s_{t+1} ) \right]$ , the expected return . Here , we consider a multi-task meta-learning setting , which we introduce next . 2.1 TRAINING SETUP . We adopt the standard meta-learning setting where we have a distribution p ( M ) over MDPs from which we can sample during meta-training , with an MDP Mi ∼ p ( M ) defined by a tuple Mi = ( S , A , Ri , Ti , Ti,0 , γ , H ) . Across tasks , the reward and transition functions vary but share some structure . The index i represents an unknown task description ( e.g. , a goal position or natural language instruction ) or task ID . Sampling an MDP from p ( M ) is typically done by sampling a reward and transition function from a distribution p ( R , T ) . During meta-training , batches of tasks are repeatedly sampled , and a small training procedure is performed on each of them , with the goal of learning to learn ( for an overview of existing methods see Sec 4 ) .
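As a rough illustration of how a policy can condition on the posterior belief over MDP embeddings ( item 2 above ) , the PyTorch sketch below feeds the environment state together with the parameters of the approximate posterior over the latent m into the policy network . The network sizes , categorical action space , and the use of mean / log-variance as belief features are all illustrative assumptions , not the paper's exact architecture .

```python
import torch
import torch.nn as nn

class BeliefConditionedPolicy(nn.Module):
    """Sketch: the policy sees the environment state together with the
    parameters (mean, log-variance) of the posterior q(m | tau_{:t})
    produced by a trajectory encoder. Dimensions are illustrative."""
    def __init__(self, state_dim, latent_dim, n_actions, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + 2 * latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state, belief_mean, belief_logvar):
        # Concatenate state and belief parameters into a single policy input.
        x = torch.cat([state, belief_mean, belief_logvar], dim=-1)
        return torch.distributions.Categorical(logits=self.net(x))
```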
At meta-test time , the agent is evaluated based on the average return it achieves during learning , for tasks drawn from p. Doing this well requires at least two things : ( 1 ) incorporating prior knowledge obtained in related tasks , and ( 2 ) reasoning about task uncertainty when selecting actions to trade off exploration and exploitation . In the following , we combine ideas from meta-learning and Bayesian RL to tackle these challenges . 2.2 BAYESIAN REINFORCEMENT LEARNING . When the MDP is unknown , optimal decision making has to trade off exploration and exploitation when selecting actions . In principle , this can be done by taking a Bayesian approach to reinforcement learning formalised as a Bayes-Adaptive MDP ( BAMDP ) , the solution to which is a Bayes-optimal policy ( Bellman , 1956 ; Duff & Barto , 2002 ; Ghavamzadeh et al. , 2015 ) . In the Bayesian formulation of RL , we assume that the transition and reward functions are distributed according to a prior $b_0 = p ( R , T )$ . Since the agent does not have access to the true reward and transition function , it can maintain a belief $b_t ( R , T ) = p ( R , T \mid \tau_{:t} )$ , which is the posterior over the MDP given the agent ’ s experience $\tau_{:t} = \{ s_0 , a_0 , r_1 , s_1 , a_1 , \ldots , s_t \}$ up until the current timestep . This is often done by maintaining a distribution over the model parameters . To allow the agent to incorporate the task uncertainty into its decision-making , this belief can be augmented to the state , resulting in hyper-states $s^+_t \in S^+ = S \times B$ , where $B$ is the belief space . These transition according to
$$T^+ ( s^+_{t+1} \mid s^+_t , a_t , r_t ) = T^+ ( s_{t+1} , b_{t+1} \mid s_t , a_t , r_t , b_t ) = T^+ ( s_{t+1} \mid s_t , a_t , b_t )\, T^+ ( b_{t+1} \mid s_t , a_t , r_t , b_t , s_{t+1} ) = \mathbb{E}_{b_t} [ T ( s_{t+1} \mid s_t , a_t ) ]\ \delta ( b_{t+1} = p ( R , T \mid \tau_{:t+1} ) ) , \quad ( 1 )$$
i.e. , the new environment state $s_{t+1}$ is the expected new state w.r.t . the current posterior distribution of the transition function , and the belief is updated deterministically according to Bayes rule . The reward function on hyper-states is defined as the expected reward under the current posterior ( after the state transition ) over reward functions ,
$$R^+ ( s^+_t , a_t , s^+_{t+1} ) = R^+ ( s_t , b_t , a_t , s_{t+1} , b_{t+1} ) = \mathbb{E}_{b_{t+1}} [ R ( s_t , a_t , s_{t+1} ) ] . \quad ( 2 )$$
This results in a BAMDP $M^+ = ( S^+ , A , R^+ , T^+ , T^+_0 , \gamma , H^+ )$ ( Duff & Barto , 2002 ) , which is a special case of a belief MDP , i.e. , the MDP formed by taking the posterior beliefs maintained by an agent in a partially observable MDP and reinterpreting them as Markov states ( Cassandra et al. , 1994 ) . In an arbitrary belief MDP , the belief is over a hidden state that can change over time . In a BAMDP , the belief is over the transition and reward functions , which are constant for a given task . The agent ’ s objective is now to maximise the expected return in the BAMDP ,
$$J^+ ( \pi ) = \mathbb{E}_{b_0 , T^+_0 , T^+ , \pi} \left[ \sum_{t=0}^{H^+ - 1} \gamma^t R^+ ( r_{t+1} \mid s^+_t , a_t , s^+_{t+1} ) \right] , \quad ( 3 )$$
i.e. , maximise the expected return in an initially unknown environment , while learning , within the horizon $H^+$ . Note the distinction between the MDP horizon $H$ and the BAMDP horizon $H^+$ . Although they often coincide , we might instead want the agent to act Bayes-optimally within the first $N$ MDP episodes , so $H^+ = N \times H$ . Trading off exploration and exploitation optimally depends heavily on how much time the agent has left ( e.g. , to decide whether information-seeking actions are worth it ) .
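For intuition , the deterministic belief update $\delta ( b_{t+1} = p ( R , T \mid \tau_{:t+1} ) )$ in Eq . ( 1 ) takes a particularly simple form in the gridworld example from the introduction , where the belief is a distribution over candidate goal cells . The sketch below is illustrative only .

```python
def update_goal_belief(belief, visited_cell, reward):
    """Sketch of the deterministic Bayes update in Eq. (1) for the gridworld
    example: `belief` maps candidate goal cells to probabilities. Observing
    zero reward at a visited cell rules that cell out; observing a reward
    collapses the posterior onto the visited cell."""
    if reward > 0:                                   # goal found
        return {visited_cell: 1.0}
    new_belief = {c: p for c, p in belief.items() if c != visited_cell}
    total = sum(new_belief.values()) or 1.0
    return {c: p / total for c, p in new_belief.items()}   # renormalise
```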
The objective in ( 3 ) is maximised by the Bayes-optimal policy , which automatically trades off exploration and exploitation : it takes exploratory actions to reduce its task uncertainty only insofar as it helps to maximise the expected return within the horizon . The BAMDP framework is powerful because it provides a principled way of formulating Bayes-optimal behaviour . However , solving the BAMDP is hopelessly intractable for most interesting problems . The main challenges are as follows . • We typically do not know the parameterisation of the true reward and/or transition model , • The belief update ( computing the posterior p ( R , T |τ : t ) ) is often intractable , and • Even with the correct posterior , planning in belief space is typically intractable . In the following , we propose a method that simultaneously meta-learns the reward and transition functions , how to perform inference in an unknown MDP , and how to use the belief to maximise expected online return . Since the Bayes-adaptive policy is learned end-to-end with the inference framework , no planning is necessary at test time . We make minimal assumptions ( no privileged task information is required during training ) , resulting in a highly flexible and scalable approach to Bayes-adaptive Deep RL .
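Since no planning happens at test time , training reduces to jointly minimising an RL loss and a variational lower-bound loss . The decomposition below , a reconstruction term for rewards / transitions decoded from the latent plus a KL term with an assumed weighting , is a hedged sketch of that idea rather than the paper's exact objective .

```python
import torch

def joint_loss(rl_loss, recon_loss, kl_divergence, kl_weight=0.1):
    """Hedged sketch: the Bayes-adaptive policy and the inference network are
    trained end-to-end by adding the policy's RL loss to a variational lower
    bound made of a reconstruction term (decoding rewards/transitions from the
    latent m) and a KL term against the prior. kl_weight is an illustrative
    assumption, not a value from the paper."""
    return rl_loss + recon_loss + kl_weight * kl_divergence

# Dummy usage with placeholder scalars:
total = joint_loss(torch.tensor(1.2), torch.tensor(0.8), torch.tensor(0.05))
```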
This paper presents a new deep reinforcement learning method that can efficiently trade off exploration and exploitation. An optimal policy for this trade-off can in principle be computed under the Bayes-adaptive MDP framework, but in practice the computation is often intractable. To address this challenge and approximate a Bayes-optimal policy, the proposed method VariBAD combines meta-learning, variational inference, and Bayesian RL. Specifically, the algorithm learns low-dimensional latent task embeddings and performs approximate inference over them by optimizing a tractable lower bound of the objective.
SP:daf8080733b61b118faad8dca6f09691ecaa3005
DSReg: Using Distant Supervision as a Regularizer
1 INTRODUCTION . Consider the following sentences in a text-classification task , in which we want to identify text describing hotels with good service/staff ( framed as aspect-level sentiment classification in Tang et al . ( 2015 ) ; Li et al . ( 2016 ) ; Lei et al . ( 2016 ) ) : • S1 : the staff are great . ( positive ) • S2 : the location is great but the staff are surly and unhelpful . ( hard-negative ) • S3 : the staff are surly and unhelpful . ( easy-negative ) S1 is a positive example since it describes a hotel with good staff . Both S2 and S3 are negative because the staff are unhelpful . However , since S2 is lexically and semantically similar to S1 , standard models can be easily confused . As another example , in reading comprehension tasks like NarrativeQA Kočiskỳ et al . ( 2018 ) , ground-truth answers are human-generated and might not have exact matches in the original passage . A commonly adopted strategy is to first locate similar sequences from the original passage using a pre-defined threshold ( using metrics like ROUGE-L ) and then treat them as positive training examples . Sequences that are semantically similar but fall right below this specific threshold will be treated as negative examples and will thus inevitably introduce massive labeling noise into training . This problem is ubiquitous in a wide range of NLP tasks , i.e. , some of the negative examples are highly similar to positive examples . We refer to these negative examples as hard-negative examples for the rest of this paper . Similarly , the negative examples that are not similar to the positive examples are referred to as easy-negative examples . Hard-negative examples can cause serious trouble in model training , because the nuance between positive examples and hard-negative examples can confuse a model trained from scratch . To make things worse , when there is a class-imbalance problem where the number of negative examples is overwhelmingly larger than that of positive examples ( which is true in many real-world use cases ) , the model will be at a loss because positive features are buried in the sea of negative features . To tackle this issue , we propose using the idea of distant supervision ( e.g. , Mintz et al . ( 2009 ) ; Riedel et al . ( 2010 ) ) to regularize training . We first harvest hard-negative examples using distant supervision . This process can be done by a method as simple as using word-overlap metrics ( e.g. , ROUGE , BLEU , or whether a sentence contains certain keywords ) . With the harvested hard-negative examples , we transform the original binary classification setting into a multi-task learning setting , in which we jointly optimize the original target objective of distinguishing positive examples from negative examples along with an auxiliary objective of distinguishing softened positive examples ( comprised of positive examples and hard-negative examples ) from easy-negative examples . For a neural network model , this goal can be easily achieved by using different output layers to read out the final-layer representations . In this way , the features that are shared between positive examples and hard-negative examples can be captured by the model , and the model can more easily tell which features best distinguish positive examples . Using this remarkably simple strategy , we improve performance on a range of different NLP tasks , including text classification , sequence labeling and reading comprehension .
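The harvesting step can be as simple as the keyword / overlap heuristic mentioned above . The snippet below is a minimal sketch ; the keyword set and threshold are illustrative assumptions , and ROUGE or BLEU could replace the overlap count .

```python
def harvest_hard_negatives(negative_examples, positive_keywords, min_overlap=1):
    """Sketch of the distant-supervision harvesting step: a negative example
    that shares enough surface evidence with the positive class (here, simple
    keyword overlap) is treated as hard-negative; the rest are easy-negative."""
    hard, easy = [], []
    for text in negative_examples:
        tokens = set(text.lower().split())
        overlap = len(tokens & positive_keywords)
        (hard if overlap >= min_overlap else easy).append(text)
    return hard, easy

# S2 shares "great" with the positive class and becomes hard-negative;
# S3 shares nothing and stays easy-negative.
hard_neg, easy_neg = harvest_hard_negatives(
    ["the location is great but the staff are surly and unhelpful .",
     "the staff are surly and unhelpful ."],
    positive_keywords={"great", "friendly", "helpful"})
```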
The key contributions of this work can be summarized as follows : • We study a general situation in NLP , where a subset of the negative examples are highly similar to the positive examples . We analyze why it is a problem and how to deal with it . • We propose a general strategy that utilizes the idea of distant supervision to harvest hard-negative training examples , and transforms the original task into a multi-task learning problem . The strategy is widely applicable to a variety of tasks . • Using this remarkably simple strategy , we obtain significant improvements on the tasks of text classification , sequence labeling and reading comprehension . 2 RELATED WORK . Distant Supervision Distant supervision ( Mintz et al . ( 2009 ) ; Riedel et al . ( 2010 ) ; Hoffmann et al . ( 2011 ) ; Surdeanu et al . ( 2012 ) ) was proposed to address the data sparsity issue in relation extraction . Suppose that we wish to extract sentences expressing the ISCAPITAL relation : distant supervision augments the positive training set by first aligning an unlabeled text corpus with all entity pairs between which the ISCAPITAL relation holds and then treating all aligned texts as positive training examples . The idea has been extended to other domains such as sentiment analysis Go et al . ( 2009 ) , computer security events Ritter et al . ( 2015 ) , life event extraction Li et al . ( 2014 ) and image classification Chen and Gupta ( 2015 ) . Deep learning techniques have significantly improved the results of distant supervision for relation extraction Zeng et al . ( 2017 ) ; Luo et al . ( 2017 ) ; Lin et al . ( 2017 ) ; Toutanova et al . ( 2015 ) . Multi-Task Learning ( MTL ) The idea of using data harvested via distant supervision as auxiliary supervision signals is inspired by recent progress on multi-task learning : models for auxiliary tasks share hidden states or parameters with the model for the main task and act as regularizers . In addition , neural models often enjoy a performance boost when jointly trained for multiple tasks Collobert et al . ( 2011 ) ; Chen et al . ( 2017 ) ; Hashimoto et al . ( 2017 ) ; FitzGerald et al . ( 2015 ) . For instance , Luong et al . ( 2015 ) use a sequence-to-sequence model to jointly train machine translation , parsing and image caption generation models . Dong et al . ( 2015 ) adopt an alternating training approach for different language pairs , i.e. , they optimize each task objective for a fixed number of parameter updates ( or mini-batches ) before switching to a different language pair . Swayamdipta et al . ( 2018 ) propose using syntactic tasks to regularize semantic tasks like semantic role labeling . Hashimoto et al . ( 2017 ) improve universal syntactic dependency parsing using a multi-task learning approach . 3 MODELS . In this section , we discuss the details of the proposed model . We focus on two different types of NLP tasks , text classification and sequence labeling . 3.1 TEXT CLASSIFICATION . Suppose that we have text-label pairs $D = \{ x_i , y_i \}$ . Each text $x_i$ consists of a sequence of tokens $x_i = \{ w_{i,1} , w_{i,2} , \ldots , w_{i,n_i} \}$ , where $n_i$ denotes the number of tokens in $x_i$ , and is paired with a binary label $y_i \in \{ 0 , 1 \}$ . The training set can be divided into a positive set $D^+$ and a negative set $D^-$ . Let $\hat{y}_i$ denote the model prediction .
The standard training objective can be given as follows :
$$L_1 = - \sum_{ ( x_i , y_i ) \in D } \log P ( \hat{y}_i = y_i \mid x_i ) = - \sum_{ ( x_i , y_i ) \in D^+ } \log P ( \hat{y}_i = 1 \mid x_i ) - \sum_{ ( x_i , y_i ) \in D^- } \log P ( \hat{y}_i = 0 \mid x_i ) \quad ( 1 )$$
Let $D_{\text{hard-neg}}$ denote the hard-negative examples retrieved using distant supervision . Here we introduce a new label $z$ : $z = 1$ for instances in $D_{\text{hard-neg}} \cup D^+$ , and $z = 0$ for instances in $D^- - D_{\text{hard-neg}}$ . We regularize $L_1$ using an additional objective $L_2$ :
$$L_2 = - \sum_{ ( x_i , z_i ) \in D^+ \cup D_{\text{hard-neg}} } \log P ( \hat{z}_i = 1 \mid x_i ) - \sum_{ ( x_i , z_i ) \in D^- - D_{\text{hard-neg}} } \log P ( \hat{z}_i = 0 \mid x_i ) \quad ( 2 )$$
$L_2$ can be thought of as an objective that captures the features shared between positive examples and hard-negative examples . Eq . 2 can also be extended to a symmetric form , distinguishing between $D_{\text{hard-pos}} \cup D^-$ ( i.e. , the union of the negative examples and the positive examples that are similar to negative ones ) and $D^+ - D_{\text{hard-pos}}$ . Empirically , we also find that adding one more three-class classification objective $L_3$ , which separates positive vs. hard-negative vs. easy-negative examples , introduces an additional performance boost . We suggest that this three-class classification additionally highlights the difference between hard-negative examples and easy-negative examples for the model . Its label is denoted by $l$ , where $l = 0$ for easy-negative examples , $l = 1$ for positive examples and $l = 2$ for hard-negative examples . This leads to the final training objective :
$$L = \lambda_1 L_1 + \lambda_2 L_2 + \lambda_3 L_3 \quad ( 3 )$$
where $\lambda_1 + \lambda_2 + \lambda_3 = 1$ and the weights control the relative importance of each loss . For a neural classification model , $p ( z \mid x )$ , $p ( y \mid x )$ and $p ( l \mid x )$ share the same model structure . The input text $x$ is first mapped to a $d$-dimensional vector representation $h_x$ using a suitable contextualization strategy , such as LSTMs Hochreiter and Schmidhuber ( 1997 ) , CNNs Kim ( 2014 ) or transformers Vaswani et al . ( 2017 ) . Then $h_x$ is fed to three fully connected layers with a softmax activation function to compute $p ( y \mid x )$ , $p ( z \mid x )$ and $p ( l \mid x )$ respectively :
$$p ( y \mid x ) = \mathrm{softmax} ( W_y h_x ) , \quad p ( z \mid x ) = \mathrm{softmax} ( W_z h_x ) , \quad p ( l \mid x ) = \mathrm{softmax} ( W_l h_x ) \quad ( 4 )$$
where $W_y , W_z \in \mathbb{R}^{2 \times d}$ and $W_l \in \mathbb{R}^{3 \times d}$ . 3.2 SEQUENCE LABELING . In sequence labeling tasks Lafferty et al . ( 2001 ) ; Ratinov and Roth ( 2009 ) ; Collobert et al . ( 2011 ) ; Huang et al . ( 2015 ) ; Ma and Hovy ( 2016 ) ; Chiu and Nichols ( 2016 ) , a model is trained to assign a label to each of the tokens in a text sequence . Suppose that we are to assign labels to all tokens in a chunk of text $D = \{ x_1 , x_2 , \ldots , x_{n_D} \}$ , where $n_D$ denotes the number of tokens in $D$ . Let us consider a simple case where we only have one type of tag and use the standard IOB ( short for inside , outside , beginning ) sequence labeling format . In this case , it is a three-class classification problem , assigning $y_i \in \{ B , I , O \}$ to each token . We treat tokens with label B and I as $D^+$ and tokens with label O as $D^-$ . The objective function for the vanilla sequence labeling task is given as follows :
$$L_1 = - \log P ( y_{1:n_D} \mid x_{1:n_D} ) \quad ( 5 )$$
$P ( y_{1:n_D} \mid x_{1:n_D} )$ can be computed using standard sequence tagging models such as CRFs Lafferty et al . ( 2001 ) , hybrid CRF+neural models Huang et al . ( 2015 ) ; Ma and Hovy ( 2016 ) ; Chiu and Nichols ( 2016 ) ; Ye and Ling ( 2018 ) or purely neural models Collobert et al . ( 2011 ) ; Devlin et al . ( 2018 ) . To take into account negative examples that are highly similar to positive ones , we use the idea of distant supervision to first retrieve the hard-negative set $D_{\text{hard-neg}}$ .
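Before turning to the sequence-labeling details , the following PyTorch sketch makes the Section 3.1 objective concrete : a shared encoder produces $h_x$ , three readout layers implement Eq . ( 4 ) , and the losses are combined as in Eq . ( 3 ) . The encoder choice and hyperparameters are illustrative assumptions .

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DSRegClassifier(nn.Module):
    """Sketch of the Section 3.1 classifier: a shared encoder computes h_x,
    which feeds three readout layers for y (2-way), z (2-way) and l (3-way)."""
    def __init__(self, vocab_size, dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.LSTM(dim, dim, batch_first=True)
        self.W_y = nn.Linear(dim, 2)   # positive vs negative
        self.W_z = nn.Linear(dim, 2)   # {positive, hard-neg} vs easy-neg
        self.W_l = nn.Linear(dim, 3)   # positive vs hard-neg vs easy-neg

    def forward(self, token_ids):
        _, (h, _) = self.encoder(self.embed(token_ids))
        h_x = h[-1]                    # final hidden state as h_x
        return self.W_y(h_x), self.W_z(h_x), self.W_l(h_x)

def dsreg_loss(logits_y, logits_z, logits_l, y, z, l, lambdas=(0.6, 0.2, 0.2)):
    """L = lambda1*L1 + lambda2*L2 + lambda3*L3 (Eq. 3); weights illustrative."""
    l1 = F.cross_entropy(logits_y, y)
    l2 = F.cross_entropy(logits_z, z)
    l3 = F.cross_entropy(logits_l, l)
    return lambdas[0] * l1 + lambdas[1] * l2 + lambdas[2] * l3
```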
Akin to the text classification task , we introduce a new label $z_i \in \{ B , I , O \}$ , indicating whether the current token belongs to $D_{\text{hard-neg}}$ . To incorporate the collected hard-negative examples into the model , we again introduce an auxiliary objective of assigning correct $z$ labels to the tokens :
$$L_2 = - \log P ( z_{1:n_D} \mid x_{1:n_D} ) \quad ( 6 )$$
Similar to the text classification task , we also want to separate hard-negative examples , easy-negative examples and positive examples , so we associate each token with a label $l$ . We thus have distinct “ beginning ” and “ inside ” labels for positive examples and hard-negative examples , i.e. , $l \in \{ B_{\text{pos}} , I_{\text{pos}} , B_{\text{hard-neg}} , I_{\text{hard-neg}} , O \}$ . Labels for the different categories regarding $y$ and $z$ are shown in Table 2 . The final objective function is thus as follows :
$$L = \lambda_1 L_1 + \lambda_2 L_2 + \lambda_3 L_3 \quad ( 7 )$$
Again $\lambda_1 + \lambda_2 + \lambda_3 = 1$ . At training time , the objectives are optimized simultaneously . At test time , we only use $P ( y_i \mid x_{1:n_D} )$ to predict $y_i$ as the final decision . For CRF-based models Huang et al . ( 2015 ) ; Ma and Hovy ( 2016 ) ; Chiu and Nichols ( 2016 ) , neural representations are fed to the CRF layer and used as features for decision making . As in Ma and Hovy ( 2016 ) , the neural representation $h_{x_i}$ is computed for each token/position using LSTMs and CNNs , and then forwarded to the CRF layer . The key issue with CRF-based models is that the CRF model is only able to output one single label . This rules out the possibility of directly feeding $h_x$ to three readout layers to simultaneously predict $y$ , $z$ and $l$ . We propose the following solution : we use three separate CRFs to predict $y$ ( for $L_1$ ) , $z$ ( for $L_2$ ) and $l$ ( for $L_3$ ) , denoted by $\text{CRF}_y$ , $\text{CRF}_z$ and $\text{CRF}_l$ . The three CRFs use the same hidden representation $h_x$ obtained from the same neural model as input , but independently learn their own feature weights . We iteratively perform gradient descent on the three CRFs , and the errors from the three CRFs are back-propagated to the shared neural model .
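A minimal sketch of the sequence-labeling variant is given below ; for brevity the three CRF layers described above are replaced with plain per-token softmax heads over a shared BiLSTM encoder , so this illustrates the multi-head structure rather than the paper's exact CRF formulation . Sizes and tag inventories are illustrative .

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DSRegTagger(nn.Module):
    """Sketch of Section 3.2: a shared BiLSTM encoder with three per-token
    heads for y in {B, I, O}, z in {B, I, O} and
    l in {B_pos, I_pos, B_hard-neg, I_hard-neg, O}. The paper places a
    separate CRF on top of each head; softmax heads are used here for brevity."""
    def __init__(self, vocab_size, dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.LSTM(dim, dim // 2, bidirectional=True, batch_first=True)
        self.head_y = nn.Linear(dim, 3)
        self.head_z = nn.Linear(dim, 3)
        self.head_l = nn.Linear(dim, 5)

    def forward(self, token_ids):
        h, _ = self.encoder(self.embed(token_ids))      # (batch, seq, dim)
        return self.head_y(h), self.head_z(h), self.head_l(h)

def tagging_loss(logits, tags):
    """Per-token cross entropy, e.g. for L1 = -log P(y_{1:n} | x_{1:n})."""
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)), tags.reshape(-1))
```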
This paper proposes to improve performance of NLP tasks by focusing on negative examples that are similar to positive examples (e.g. hard negatives). This is achieved by regularizing the model using extra output classifiers trained to classify examples into up to three classes: positive, negative-easy, and negative-hard. Since those labels are not provided in the original data, examples are classified using heuristics (e.g. negative examples that contain a lot of features predictive of a positive class will be considered as hard-negative examples), which are used to provide distant supervision. This general approach is evaluated on phrase classification tasks, one information extraction task, and one MRCQA task.
SP:08350406f571056b5652cff2e4b0c5ed7ccb13c8
DSReg: Using Distant Supervision as a Regularizer
This paper is aimed at tackling a general issue in NLP: hard-negative training data (negative but very similar to positive examples) can easily confuse standard NLP models. To solve this problem, the authors first apply a distant supervision technique to harvest hard-negative training examples and then transform the original task into a multi-task learning problem by splitting the original labels into positive, hard-negative, and easy-negative examples. The authors consider using 3 different objective functions: L1, the original cross-entropy loss; L2, capturing the shared features in positive and hard-negative examples as a regularizer of L1 by introducing a new label z; and L3, a three-class classification objective using softmax.
SP:08350406f571056b5652cff2e4b0c5ed7ccb13c8
Fooling Detection Alone is Not Enough: Adversarial Attack against Multiple Object Tracking
1 INTRODUCTION . Since the first Adversarial Example ( AE ) against traffic sign image classification discovered by Eykholt et al . ( Eykholt et al. , 2018 ) , several research efforts in adversarial machine learning ( Eykholt et al. , 2017 ; Xie et al. , 2017 ; Lu et al. , 2017a ; b ; Zhao et al. , 2018b ; Chen et al. , 2018 ; Cao et al. , 2019 ) started to focus on the context of visual perception in autonomous driving , and studied AEs on object detection models . For example , Eykholt et al . ( Eykholt et al. , 2017 ) and Zhong et al . ( Zhong et al. , 2018 ) studied AEs in the form of adversarial stickers on stop signs or the back of front cars against YOLO object detectors ( Redmon & Farhadi , 2017 ) , and performed indoor experiments to demonstrate the attack feasibility in the real world . Building upon these works , most recently Zhao et al . ( Zhao et al. , 2018b ) leveraged image transformation techniques to improve the robustness of such adversarial sticker attacks in outdoor settings , and were able to achieve a 72 % attack success rate with a car running at a constant speed of 30 km/h on real roads . While these results from prior work are alarming , object detection is in fact only the first half of the visual perception pipeline in autonomous driving , or in robotic systems in general . In the second half , the detected objects must also be tracked , in a process called Multiple Object Tracking ( MOT ) , to build the moving trajectories , called trackers , of surrounding obstacles . This is required for the subsequent driving decision making process , which needs the built trajectories to predict future moving trajectories for these obstacles and then plan a driving path accordingly to avoid collisions with them . To ensure high tracking accuracy and robustness against errors in object detection , in MOT only the detection results with sufficient consistency and stability across multiple frames can be included in the tracking results and actually influence the driving decisions . Thus , MOT in the visual perception of autonomous driving poses a general challenge to existing attack techniques that blindly target object detection . For example , as shown by our analysis later in §4 , an attack on object detection needs to succeed consecutively for at least 60 frames to fool a representative MOT process , which requires at least a 98 % attack success rate ( §4 ) . To the best of our knowledge , no existing attack on object detection can achieve such a high success rate ( Eykholt et al. , 2017 ; Xie et al. , 2017 ; Lu et al. , 2017a ; b ; Zhao et al. , 2018b ; Chen et al. , 2018 ) . In this paper , we are the first to study adversarial machine learning attacks considering the complete visual perception pipeline in autonomous driving , i.e. , both object detection and object tracking , and discover a novel attack technique , called tracker hijacking , that can effectively fool the MOT process using AEs on object detection . Our key insight is that although it is highly difficult to directly create a tracker for fake objects or delete a tracker for existing objects , we can carefully design AEs to attack the tracking error reduction process in MOT so as to deviate the tracking results of existing objects towards an attacker-desired moving direction . This process is designed to increase the robustness and accuracy of the tracking results , but ironically , we find that it can be exploited by attackers to substantially alter the tracking results .
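A toy one-dimensional example helps to see why poisoning the error reduction step is so effective : a constant-velocity tracker that sees a single detection shifted toward an attacker-desired direction extrapolates that shift into the following frames . The numbers below are purely illustrative .

```python
def predicted_positions(first_pos, second_pos, horizon=5):
    """Toy 1-D illustration of the hijacking effect: a constant-velocity
    tracker estimates velocity from two consecutive detections and
    extrapolates. If the attacker replaces the detection in one frame with
    a box shifted toward a desired direction, the poisoned velocity makes
    the tracker drift away from the true object in later frames."""
    velocity = second_pos - first_pos
    return [second_pos + velocity * k for k in range(1, horizon + 1)]

benign = predicted_positions(first_pos=10.0, second_pos=10.5)     # nearly static object
hijacked = predicted_positions(first_pos=10.0, second_pos=14.0)   # shifted AE detection
print(benign)    # [11.0, 11.5, ...] stays near the object
print(hijacked)  # [18.0, 22.0, ...] tracker deviates in the attacker's direction
```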
Leveraging this attack technique , successful AEs on as few as one single frame are enough to move an existing object into or out of the headway of an autonomous vehicle and thus may cause potential safety hazards . We select 20 out of 100 randomly sampled video clips from the Berkeley Deep Drive dataset for evaluation . Under recommended MOT configurations in practice ( Zhu et al. , 2018 ) and normal measurement noise levels , we find that our attack can succeed with successful AEs on as few as one frame , and 2 to 3 consecutive frames on average . We reproduce and compare with previous attacks that blindly target object detection , and find that when attacking 3 consecutive frames , our attack has a nearly 100 % success rate while attacks that blindly target object detection only have up to 25 % . Contributions . In summary , this paper makes the following contributions : • We are the first to study adversarial machine learning attacks considering the complete visual perception pipeline in autonomous driving , i.e. , both object detection and MOT . We find that without considering MOT , an attack blindly targeting object detection needs at least a success rate of 98 % to actually affect the complete visual perception pipeline in autonomous driving , which is a requirement that no existing attack technique can satisfy . • We discover a novel attack technique , tracker hijacking , that can effectively fool MOT using AEs on object detection . This technique exploits the tracking error reduction process in MOT , and can enable successful AEs on as few as one single frame to move an existing object into or out of the headway of an autonomous vehicle to cause potential safety hazards . • The attack evaluation using the Berkeley Deep Drive dataset shows that our attack can succeed with successful AEs on as few as one frame , and only 2 to 3 consecutive frames on average , and when 3 consecutive frames are attacked , our attack has a nearly 100 % success rate while attacks that blindly target object detection only have up to 25 % . • Code and evaluation data are available at GitHub ( Github ) . 2 BACKGROUND AND RELATED WORK . Adversarial examples for object detection . Since the first physical adversarial examples against traffic sign classifiers demonstrated by Eykholt et al . ( Eykholt et al. , 2018 ) , several works in adversarial machine learning ( Eykholt et al. , 2017 ; Xie et al. , 2017 ; Lu et al. , 2017a ; b ; Zhao et al. , 2018b ; Chen et al. , 2018 ) have focused on the visual perception task in autonomous driving , and more specifically , on object detection models . To achieve high attack effectiveness in practice , the key challenge is how to design robust attacks that can survive distortions in real-world driving scenarios such as different viewing angles , distances , lighting conditions , and camera limitations . For example , Lu et al . ( Lu et al. , 2017a ) show that AEs against Faster-RCNN ( Ren et al. , 2015 ) generalize well across a sequence of images in digital space , but fail in most of the sequence in the physical world ; Eykholt et al . ( Eykholt et al. , 2017 ) generate adversarial stickers that , when attached to a stop sign , can fool the YOLOv2 ( Redmon & Farhadi , 2017 ) object detector , although this is only demonstrated in indoor experiments within short distances ; Chen et al . ( Chen et al.
, 2018 ) generate AEs based on expectation-over-transformation techniques , but their evaluation shows that the AEs are not robust to multiple angles , probably due to not considering perspective transformations ( Zhao et al. , 2018b ) . It was not until recently that physical adversarial attacks against object detectors achieved a decent success rate ( 70 % ) in fixed-speed ( 6 km/h and 30 km/h ) road tests ( Zhao et al. , 2018b ) . While the current progress in attacking object detection is indeed impressive , in this paper we argue that in the actual visual perception pipeline of autonomous driving , object tracking , or more specifically MOT , is an integral step , and without considering it , existing adversarial attacks against object detection still cannot affect the visual perception results even with a high attack success rate . As shown in our evaluation in §4 , with a common setup of MOT , an attack on object detection needs to reliably fool at least 60 consecutive frames to erase one object ( e.g. , a stop sign ) from the tracking results , in which case even a 98 % attack success rate on object detectors is not enough ( §4 ) . MOT background . MOT aims to identify objects and their trajectories in a video frame sequence . With the recent advances in object detection , tracking-by-detection ( Luo et al. , 2014 ) has become the dominant MOT paradigm , where the detection step identifies the objects in the images and the tracking step links the objects to the trajectories ( i.e. , trackers ) . Such a paradigm is widely adopted in autonomous driving systems today ( Baidu ; Kato et al. , 2018 ; 2015 ; Zhao et al. , 2018a ; Ess et al. , 2010 ; MathWorks ; Udacity ) , and a more detailed illustration is in Fig . 1 . As shown , each detected object at time t will be associated with a dynamic state model ( e.g. , position , velocity ) , which represents the past trajectory of the object ( track|t−1 ) . A per-track Kalman filter ( Baidu ; Kato et al. , 2018 ; Feng et al. , 2019 ; Murray , 2017 ; Yoon et al. , 2016 ) is used to maintain the state model , which operates in a recursive predict-update loop : the predict step estimates the current object state according to a motion model , and the update step takes the detection results detc|t as measurements to update its state estimation result track|t . The association of detected objects with existing trackers is formulated as a bipartite matching problem ( Sharma et al. , 2018 ; Feng et al. , 2019 ; Murray , 2017 ) based on the pairwise similarity costs between the trackers and detected objects , and the most commonly used similarity metric is a spatial cost , which measures the overlap between bounding boxes , or bboxes ( Baidu ; Long et al. , 2018 ; Xiang et al. , 2015 ; Sharma et al. , 2018 ; Feng et al. , 2019 ; Murray , 2017 ; Zhu et al. , 2018 ; Yoon et al. , 2016 ; Bergmann et al. , 2019 ; Bewley et al. , 2016 ) . To reduce errors in this association , an accurate velocity estimation is necessary in the Kalman filter prediction ( Choi , 2015 ; Yilmaz et al. , 2006 ) . Due to the discreteness of camera frames , the Kalman filter uses the velocity model to estimate the location of the tracked object in the next frame in order to compensate for the object motion between frames . However , as described later in §3 , this error reduction process unexpectedly makes it possible to perform tracker hijacking . MOT manages tracker creation and deletion with two thresholds .
Specifically , a new tracker will be created only when the object has been consistently detected for a certain number of frames ; this threshold will be referred to as the hit count , or H , in the rest of the paper . This helps to filter out occasional false positives produced by object detectors . On the other hand , a tracker will be deleted if no object is associated with it for a duration of R frames , called the reserved age . This prevents trackers from being accidentally deleted due to infrequent false negatives of object detectors . The configuration of R and H usually depends on both the accuracy of the detection models and the frame rate ( fps ) . Previous work suggests a configuration of R = 2 · fps and H = 0.2 · fps ( Zhu et al. , 2018 ) , which gives R = 60 frames and H = 6 frames for a common 30 fps visual perception system . We will show in §4 that an attack that blindly targets object detection needs to consistently fool at least 60 frames ( R ) to erase an object , while our proposed tracker hijacking attack can fabricate an object that persists for R frames and make the target object vanish for H frames in the tracking result by attacking as few as one frame , and only 2~3 frames on average ( §4 ) .
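Putting the pieces together , the following is a simplified , illustrative sketch of the tracking-by-detection management loop described above : IoU-based bipartite matching ( here solved with the Hungarian algorithm from SciPy ) , followed by the hit-count ( H ) and reserved-age ( R ) rules . The Kalman predict / update step and several practical details are omitted , and all thresholds are illustrative .

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def mot_step(trackers, detections, fps=30, iou_min=0.3):
    """One simplified tracking-by-detection step: associate detections to
    trackers by IoU (Hungarian matching), then apply the hit-count (H) and
    reserved-age (R) rules, with R = 2*fps and H = 0.2*fps as in the text.
    Each tracker is a dict {"box": ..., "hits": int, "misses": int}."""
    R, H = int(2 * fps), int(0.2 * fps)
    cost = np.array([[1.0 - iou(t["box"], d) for d in detections]
                     for t in trackers]).reshape(len(trackers), len(detections))
    rows, cols = linear_sum_assignment(cost) if cost.size else ([], [])
    matched_t, matched_d = set(), set()
    for r, c in zip(rows, cols):
        if 1.0 - cost[r, c] >= iou_min:              # accept only overlapping pairs
            trackers[r].update(box=detections[c], hits=trackers[r]["hits"] + 1, misses=0)
            matched_t.add(r)
            matched_d.add(c)
    for i, t in enumerate(trackers):
        if i not in matched_t:
            t["misses"] += 1                          # unmatched tracker ages
    trackers = [t for t in trackers if t["misses"] <= R]             # reserved age
    trackers += [{"box": d, "hits": 1, "misses": 0}
                 for j, d in enumerate(detections) if j not in matched_d]
    confirmed = [t for t in trackers if t["hits"] >= H]              # hit count
    return trackers, confirmed
```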
The paper addresses adversarial attacks against visual perception pipelines in autonomous driving. Both subprocesses in the visual perception pipeline, object detection and multiple object tracking (MOT), are considered. The paper proposes a novel adversarial attack technique, tracker hijacking, which can fool the MOT process using Adversarial Examples (AEs) on object detection. The key idea is to exploit the tracking error reduction process in MOT by placing targeted attacks on single frames, which can lead to a displacement of the tracked objects. It is shown that the proposed method can effectively attack the perception pipeline by fooling just 2 to 3 consecutive frames on average.
SP:c64e935f86a415a464632de66ffe1d610df585e4
Fooling Detection Alone is Not Enough: Adversarial Attack against Multiple Object Tracking
1 INTRODUCTION . Since the first Adversarial Example ( AE ) against traffic sign image classification discovered by Eykholt et al . ( Eykholt et al. , 2018 ) , several research work in adversarial machine learning ( Eykholt et al. , 2017 ; Xie et al. , 2017 ; Lu et al. , 2017a ; b ; Zhao et al. , 2018b ; Chen et al. , 2018 ; Cao et al. , 2019 ) started to focus on the context of visual perception in autonomous driving , and studied AEs on object detection models . For example , Eykholt et al . ( Eykholt et al. , 2017 ) and Zhong et al . ( Zhong et al. , 2018 ) studied AEs in the form of adversarial stickers on stop signs or the back of front cars against YOLO object detectors ( Redmon & Farhadi , 2017 ) , and performed indoor experiments to demonstrate the attack feasibility in the real world . Building upon these work , most recently Zhao et al . ( Zhao et al. , 2018b ) leveraged image transformation techniques to improve the robustness of such adversarial sticker attacks in outdoor settings , and were able to achieve a 72 % attack success rate with a car running at a constant speed of 30 km/h on real roads . While these results from prior work are alarming , object detection is in fact only the first half of the visual perception pipeline in autonomous driving , or in robotic systems in general — in the second half , the detected objects must also be tracked , in a process called Multiple Object Tracking ( MOT ) , to build the moving trajectories , called trackers , of surrounding obstacles . This is required for the subsequent driving decision making process , which needs the built trajectories to predict future moving trajectories for these obstacles and then plan a driving path accordingly to avoid collisions with them . To ensure high tracking accuracy and robustness against errors in object detection , in MOT only the detection results with sufficient consistency and stability across multiple frames can be included in the tracking results and actually influence the driving decisions . Thus , MOT in the visual ∗Equal contribution perception of autonomous driving poses a general challenge to existing attack techniques that blindly target objection detection . For example , as shown by our analysis later in §4 , an attack on objection detection needs to succeed consecutively for at least 60 frames to fool a representative MOT process , which requires an at least 98 % attack success rate ( §4 ) . To the best of our knowledge , no existing attacks on objection detection can achieve such a high success rate ( Eykholt et al. , 2017 ; Xie et al. , 2017 ; Lu et al. , 2017a ; b ; Zhao et al. , 2018b ; Chen et al. , 2018 ) . In this paper , we are the first to study adversarial machine learning attacks considering the complete visual perception pipeline in autonomous driving , i.e. , both object detection and object tracking , and discover a novel attack technique , called tracker hijacking , that can effectively fool the MOT process using AEs on object detection . Our key insight is that although it is highly difficult to directly create a tracker for fake objects or delete a tracker for existing objects , we can carefully design AEs to attack the tracking error reduction process in MOT to deviate the tracking results of existing objects towards an attacker-desired moving direction . Such process is designed for increasing the robustness and accuracy of the tracking results , but ironically , we find that it can be exploited by attackers to substantially alter the tracking results . 
Leveraging such attack technique , successful AEs on as few as one single frame is enough to move an existing object in to or out of the headway of an autonomous vehicle and thus may cause potential safety hazards . We select 20 out of 100 randomly sampled video clips from the Berkeley Deep Drive dataset for evaluation . Under recommended MOT configurations in practice ( Zhu et al. , 2018 ) and normal measurement noise levels , we find that our attack can succeed with successful AEs on as few as one frame , and 2 to 3 consecutive frames on average . We reproduce and compare with previous attacks that blindly target object detection , and find that when attacking 3 consecutive frames , our attack has a nearly 100 % success rate while attacks that blindly target object detection only have up to 25 % . Contributions . In summary , this paper makes the following contributions : • We are the first to study adversarial machine learning attacks considering the complete visual perception pipeline in autonomous driving , i.e. , both object detection and MOT . We find that without considering MOT , an attack blindly targeting object detection needs at least a success rate of 98 % to actually affect the complete visual perception pipeline in autonomous driving , which is a requirement that no existing attack technique can satisfy . • We discover a novel attack technique , tracker hijacking , that can effectively fool MOT using AEs on object detection . This technique exploits the tracking error reduction process in MOT , and can enable successful AEs on as few as one single frame to move an existing object in to or out of the headway of an autonomous vehicle to cause potential safety hazards . • The attack evaluation using the Berkeley Deep Drive dataset shows that our attack can succeed with successful AEs on as few as one frame , and only 2 to 3 consecutive frames on average , and when 3 consecutive frames are attacked , our attack has a nearly 100 % success rate while attacks that blindly target object detection only have up to 25 % . • Code and evaluation data are all available at GitHub ( Github ) . 2 BACKGROUND AND RELATED WORK . Adversarial examples for object detection . Since the first physical adversarial examples against traffic sign classifier demonstrated by Eykholt et al . ( Eykholt et al. , 2018 ) , several work in adversarial machine learning ( Eykholt et al. , 2017 ; Xie et al. , 2017 ; Lu et al. , 2017a ; b ; Zhao et al. , 2018b ; Chen et al. , 2018 ) have been focused on the visual perception task in autonomous driving , and more specifically , the object detection models . To achieve high attack effectiveness in practice , the key challenge is how to design robust attacks that can survive distortions in real-world driving scenarios such as different viewing angles , distances , lighting conditions , and camera limitations . For example , Lu et al . ( Lu et al. , 2017a ) shows that AEs against Faster-RCNN ( Ren et al. , 2015 ) generalize well across a sequence of images in digital space , but fail in most of the sequence in physical world ; Eykholt et al . ( Eykholt et al. , 2017 ) generates adversarial stickers that , when attached to stop sign , can fool YOLOv2 ( Redmon & Farhadi , 2017 ) object detector , while it is only demonstrated in indoor experiment within short distance ; Chen et al . ( Chen et al. 
, 2018 ) generates AEs based on expectation over transformation techniques , while their evaluation shows that the AEs are not robust to multiple angles , probably due to not considering perspective transformations ( Zhao et al. , 2018b ) . It was not until recently that physical adversarial attacks against object detectors achieve a decent success rate ( 70 % ) in fixed-speed ( 6 km/h and 30 km/h ) road test ( Zhao et al. , 2018b ) . While the current progress in attacking object detection is indeed impressive , in this paper we argue that in the actual visual perception pipeline of autonomous driving , object tracking , or more specifically MOT , is a integral step , and without considering it , existing adversarial attacks against object detection still can not affect the visual perception results even with high attack success rate . As shown in our evaluation in §4 , with a common setup of MOT , an attack on object detection needs to reliably fool at least 60 consecutive frames to erase one object ( e.g. , stop sign ) from the tracking results , in which case even a 98 % attack success rate on object detectors is not enough ( §4 ) . MOT background . MOT aims to identify objects and their trajectories in video frame sequence . With the recent advances in object detection , tracking-by-detection ( Luo et al. , 2014 ) has become the dominant MOT paradigm , where the detection step identifies the objects in the images and the tracking step links the objects to the trajectories ( i.e. , trackers ) . Such paradigm is widely adopted in autonomous driving systems today ( Baidu ; Kato et al. , 2018 ; 2015 ; Zhao et al. , 2018a ; Ess et al. , 2010 ; MathWorks ; Udacity ) , and a more detailed illustration is in Fig . 1 . As shown , each detected objects at time t will be associated with a dynamic state model ( e.g. , position , velocity ) , which represents the past trajectory of the object ( track|t−1 ) . A per-track Kalman filter ( Baidu ; Kato et al. , 2018 ; Feng et al. , 2019 ; Murray , 2017 ; Yoon et al. , 2016 ) is used to maintain the state model , which operates in a recursive predict-update loop : the predict step estimates current object state according to a motion model , and the update step takes the detection results detc|t as measurement to update its state estimation result track|t . The association between detected objects with existing trackers is formulated as a bipartite matching problem ( Sharma et al. , 2018 ; Feng et al. , 2019 ; Murray , 2017 ) based on the pairwise similarity costs between the trackers and detected objects , and the most commonly used similarity metric is the spatial-based cost , which measures the overlapping between bounding boxes , or bboxes ( Baidu ; Long et al. , 2018 ; Xiang et al. , 2015 ; Sharma et al. , 2018 ; Feng et al. , 2019 ; Murray , 2017 ; Zhu et al. , 2018 ; Yoon et al. , 2016 ; Bergmann et al. , 2019 ; Bewley et al. , 2016 ) . To reduce errors in this association , an accurate velocity estimation is necessary in the Kalman filter prediction ( Choi , 2015 ; Yilmaz et al. , 2006 ) . Due to the discreteness of camera frames , Kalman filter uses the velocity model to estimate the location of the tracked object in the next frame in order to compensate the object motion between frames . However , as described later in §3 , such error reduction process unexpectedly makes it possible to perform tracker hijacking . MOT manages tracker creation and deletion with two thresholds . 
Specifically, a new tracker will be created only when the object has been consistently detected for a certain number of frames; this threshold is referred to as the hit count, or H, in the rest of the paper. This helps to filter out occasional false positives produced by object detectors. On the other hand, a tracker will be deleted if no object is associated with it for a duration of R frames, called the reserved age. This prevents trackers from being accidentally deleted due to infrequent false negatives of object detectors. The configuration of R and H usually depends on both the accuracy of the detection models and the frame rate (fps). Previous work suggests a configuration of R = 2·fps and H = 0.2·fps (Zhu et al., 2018), which gives R = 60 frames and H = 6 frames for a common 30-fps visual perception system. We will show in §4 that an attack that blindly targets object detection needs to constantly fool at least 60 frames (R) to erase an object, while our proposed tracker hijacking attack can fabricate an object that lasts for R frames and make a target object vanish for H frames in the tracking result by attacking as few as one frame, and only 2~3 frames on average (§4).
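To make the bookkeeping above concrete, the following is a minimal Python sketch (our own illustration, not the paper's implementation) of one tracking-by-detection step with the hit count H and reserved age R. It assumes greedy IoU association and a constant-velocity motion model in place of the bipartite matching and per-track Kalman filter used in real pipelines; all helper names and thresholds are illustrative.

```python
# Minimal sketch of the tracker create/delete bookkeeping described above.
# Assumptions (not the paper's code): greedy IoU association instead of full
# bipartite (Hungarian) matching, and a constant-velocity motion model in
# place of a per-track Kalman filter. Thresholds follow R = 2*fps, H = 0.2*fps.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

class Track:
    def __init__(self, box):
        self.box = list(box)          # current state estimate
        self.velocity = [0.0] * 4     # per-coordinate velocity estimate
        self.hits, self.misses = 1, 0

    def predict(self):
        # Predict step: extrapolate the box along the estimated velocity to
        # compensate for object motion between frames.
        self.box = [c + v for c, v in zip(self.box, self.velocity)]

    def update(self, det):
        # Update step: the velocity is re-estimated from the associated
        # detection -- the error-reduction step that tracker hijacking abuses.
        self.velocity = [d - c for d, c in zip(det, self.box)]
        self.box = list(det)
        self.hits, self.misses = self.hits + 1, 0

FPS = 30
R, H = 2 * FPS, int(0.2 * FPS)   # reserved age and hit count (Zhu et al., 2018)

def mot_step(tracks, detections, iou_thr=0.3):
    """One frame of tracking-by-detection: predict, associate, update, manage."""
    unmatched = [tuple(d) for d in detections]
    for t in tracks:
        t.predict()
        best = max(unmatched, key=lambda d: iou(t.box, d), default=None)
        if best is not None and iou(t.box, best) >= iou_thr:
            t.update(best)
            unmatched.remove(best)
        else:
            t.misses += 1                       # no detection for this track
    tracks += [Track(d) for d in unmatched]     # candidate tracks (need H hits)
    tracks[:] = [t for t in tracks if t.misses < R]
    # Only tracks detected at least H times are reported to downstream planning.
    return [t.box for t in tracks if t.hits >= H]
```

In this sketch, a fabricated box survives for up to R frames after the adversarial frame, while the hijacked real object needs H clean re-detections before it reappears, which is the asymmetry the attack exploits.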
This paper is about conducting evasion attacks against Multiple Object Tracking (MOT) techniques. Compared to existing work on adversarial examples against object detection, to attack MOT techniques the adversary needs to successfully fool multiple frames, and the authors show that by naively using existing attack approaches, the adversary needs to achieve a 98% single-frame attack success rate to fool the tracking system, which is too hard for existing attack algorithms. Therefore, this paper proposes a smart way of attacking MOT techniques by leveraging the properties of the tracking algorithm. In particular, they generate adversarial perturbations to remove the original bounding box while adding a fake bounding box that has some overlap with the original bounding box, so that the system computes the movement of the object incorrectly. They evaluate on videos in the Berkeley Deep Drive dataset, and show that by attacking 2~3 frames, they can achieve a nearly 100% attack success rate, while the attack success rate is only 25% if the tracking algorithm is not considered when crafting the attacks.
SP:c64e935f86a415a464632de66ffe1d610df585e4
Evaluating Lossy Compression Rates of Deep Generative Models
Deep generative models have achieved remarkable progress in recent years. Despite this progress, quantitative evaluation and comparison of generative models remains one of the key challenges. One of the most popular metrics for evaluating generative models is the log-likelihood. While the direct computation of log-likelihood can be intractable, it has been recently shown that the log-likelihood of some of the most interesting generative models, such as variational autoencoders (VAE) or generative adversarial networks (GAN), can be efficiently estimated using annealed importance sampling (AIS). In this work, we argue that the log-likelihood metric by itself cannot represent all the different performance characteristics of generative models, and propose to use rate distortion curves to evaluate and compare deep generative models. We show that we can approximate the entire rate distortion curve using a single run of AIS for roughly the same computational cost as a single log-likelihood estimate. We evaluate lossy compression rates of different deep generative models such as VAEs, GANs (and their variants), and adversarial autoencoders (AAE) on MNIST and CIFAR-10, and arrive at a number of insights not obtainable from log-likelihoods alone. 1 INTRODUCTION. Generative models of images represent one of the most exciting areas of rapid progress in AI (Brock et al., 2019; Karras et al., 2018b;a). However, evaluating the performance of generative models remains a significant challenge. Many of the most successful models, most notably Generative Adversarial Networks (GANs) (Goodfellow et al., 2014), are implicit generative models for which computation of log-likelihoods is intractable or even undefined. Evaluation typically focuses on metrics such as the Inception score (Salimans et al., 2016) or the Fréchet Inception Distance (FID) score (Heusel et al., 2017), which do not have nearly the same degree of theoretical underpinning as likelihood-based metrics. Log-likelihoods are one of the most important measures of generative models. Their utility is evidenced by the fact that likelihoods (or equivalent metrics such as perplexity or bits-per-dimension) are reported in nearly all cases where it is convenient to compute them. Unfortunately, computation of log-likelihoods for implicit generative models remains a difficult problem. Furthermore, log-likelihoods have important conceptual limitations. For continuous inputs in the image domain, the metric is often dominated by the fine-grained distribution over pixels rather than the high-level structure. For models with low-dimensional support, one needs to assign an observation model, such as (rather arbitrary) isotropic Gaussian noise (Wu et al., 2016). Lossless compression metrics for GANs often give absurdly large bits-per-dimension values (e.g., 10^14), which fail to reflect the true performance of the model (Grover et al., 2018; Danihelka et al., 2017). See Theis et al. (2015) for more discussion of limitations of likelihood-based evaluation. Typically, one is not interested in describing the pixels of an image directly, and it would be sufficient to generate images close to the true data distribution in some metric such as Euclidean distance. For this reason, there has been much interest in Wasserstein distance as a criterion for generative models, since the measure exploits precisely this metric structure (Arjovsky et al., 2017; Gulrajani et al.
, 2017; Salimans et al., 2018). However, Wasserstein distance remains difficult to approximate, and hence it is not routinely used to evaluate generative models. We aim to achieve the best of both worlds by measuring lossy compression rates of deep generative models. In particular, we aim to estimate the rate distortion function, which measures the number of bits required to match a distribution to within a given distortion. Like Wasserstein distance, it can exploit the metric structure of the observation space, but like log-likelihoods, it connects to the rich literature of probabilistic and information theoretic analysis of generative models. By focusing on different parts of the rate distortion curve, one can achieve different tradeoffs between the description length and the fidelity of reconstruction, thereby fixing the problem whereby lossless compression focuses on the details at the expense of high-level structure. It has the further advantage that the distortion metric need not have a probabilistic interpretation; hence, one is free to use more perceptually valid distortion metrics such as structural similarity (SSIM) (Wang et al., 2004) or distances between hidden representations of a convolutional network (Huang et al., 2018). Algorithmically, computing rate distortion functions raises similar challenges to estimating log-likelihoods. We show that the rate distortion curve can be computed by finding the normalizing constants of a family of unnormalized probability distributions over the noise variables z. Interestingly, when the distortion metric is squared error, these distributions correspond to the posterior distributions of z for Gaussian observation models with different variances; hence, the rate distortion analysis generalizes the evaluation of log-likelihoods with Gaussian observation models. Annealed Importance Sampling (AIS) (Neal, 2001) is currently the most effective general-purpose method for estimating log-likelihoods of implicit generative models, and was used by Wu et al. (2016) to compare log-likelihoods of a variety of implicit generative models. The algorithm is based on gradually interpolating between a tractable initial distribution and an intractable target distribution. We show that when AIS is used to estimate log-likelihoods under a Gaussian observation model, the sequence of intermediate distributions corresponds precisely to the distributions needed to compute the rate distortion curve. Since AIS maintains a stochastic lower bound on the normalizing constants of these distributions, it automatically produces an upper bound on the entire rate distortion curve. Furthermore, the tightness of the bound can be validated on simulated data using bidirectional Monte Carlo (BDMC) (Grosse et al., 2015; Wu et al., 2016). Hence, we can approximate the entire rate distortion curve for roughly the same computational cost as a single log-likelihood estimate. We use our rate distortion approximations to study a variety of variational autoencoders (VAEs) (Kingma & Welling, 2013), GANs, and adversarial autoencoders (AAE) (Makhzani et al., 2015), and arrive at a number of insights not obtainable from log-likelihoods alone. For instance, we observe that VAEs and GANs have different rate distortion tradeoffs: while VAEs with larger code size can generally achieve better lossless compression rates, their performance drops in lossy compression in the low-rate regime.
Conversely, expanding the capacity of GANs appears to bring substantial reductions in distortion in the high-rate regime without any corresponding deterioration in quality in the low-rate regime. We find that increasing the capacity of GANs by increasing the code size (width) has a qualitatively different effect on the rate distortion tradeoffs than increasing the depth. We also find that different GAN variants with the same code size achieve nearly identical RD curves, and that the code size dominates the performance differences between GANs. 2 BACKGROUND. 2.1 ANNEALED IMPORTANCE SAMPLING. Annealed importance sampling (AIS) (Neal, 2001) is a Monte Carlo algorithm based on constructing a sequence of n intermediate distributions $p_k(\mathbf{z}) = \tilde{p}_k(\mathbf{z})/Z_k$, where $k \in \{0, \dots, n\}$, between a tractable initial distribution $p_0(\mathbf{z})$ and the intractable target distribution $p_n(\mathbf{z})$. At the k-th state ($0 \le k \le n$), the forward distribution $q_f$ and the un-normalized backward distribution $\tilde{q}_b$ are $q_f(\mathbf{z}_1, \dots, \mathbf{z}_k) = p_0(\mathbf{z}_1)\, T_1(\mathbf{z}_2|\mathbf{z}_1) \cdots T_{k-1}(\mathbf{z}_k|\mathbf{z}_{k-1})$, (1) and $\tilde{q}_b(\mathbf{z}_1, \dots, \mathbf{z}_k) = \tilde{p}_k(\mathbf{z}_k)\, \tilde{T}_{k-1}(\mathbf{z}_{k-1}|\mathbf{z}_k) \cdots \tilde{T}_1(\mathbf{z}_1|\mathbf{z}_2)$, (2) where $T_k$ is an MCMC kernel that leaves $p_k(\mathbf{z})$ invariant and $\tilde{T}_k$ is its reverse kernel. We run M independent AIS chains, numbered $i = 1, \dots, M$. Let $\mathbf{z}_k^i$ be the k-th state of the i-th chain. The importance weights and normalized importance weights are $w_k^i = \frac{\tilde{q}_b(\mathbf{z}_1^i, \dots, \mathbf{z}_k^i)}{q_f(\mathbf{z}_1^i, \dots, \mathbf{z}_k^i)} = \frac{\tilde{p}_1(\mathbf{z}_1^i)}{p_0(\mathbf{z}_1^i)} \frac{\tilde{p}_2(\mathbf{z}_2^i)}{\tilde{p}_1(\mathbf{z}_2^i)} \cdots \frac{\tilde{p}_k(\mathbf{z}_k^i)}{\tilde{p}_{k-1}(\mathbf{z}_k^i)}, \quad \tilde{w}_k^i = \frac{w_k^i}{\sum_{i=1}^M w_k^i}$. (3) At the k-th step, the unbiased partition function estimate of $p_k(\mathbf{z})$ is $\hat{Z}_k = \frac{1}{M}\sum_{i=1}^M w_k^i$. At the k-th step, we define the AIS distribution $q_k^{\mathrm{AIS}}(\mathbf{z})$ as the distribution obtained by first sampling $\mathbf{z}_k^1, \dots, \mathbf{z}_k^M$ from the M parallel chains using the forward distribution $q_f(\mathbf{z}_1^i, \dots, \mathbf{z}_k^i)$, and then re-sampling these samples based on $\tilde{w}_k^i$. More formally, the AIS distribution is defined as $q_k^{\mathrm{AIS}}(\mathbf{z}) = \mathbb{E}_{\prod_{i=1}^M q_f(\mathbf{z}_1^i, \dots, \mathbf{z}_k^i)}\left[\sum_{i=1}^M \tilde{w}_k^i\, \delta(\mathbf{z} - \mathbf{z}_k^i)\right]$. (4) Bidirectional Monte Carlo. We know that the log partition function estimate $\log \hat{Z}$ is a stochastic lower bound on $\log Z$ (Jensen's inequality). As a result, using the forward AIS distribution as the proposal distribution results in a lower bound on the data log-likelihood. By running AIS in reverse, however, we obtain an upper bound on $\log Z$. However, in order to run AIS in reverse, we need exact samples from the true posterior, which is only possible on simulated data. The combination of the AIS lower bound and upper bound on the log partition function is called bidirectional Monte Carlo (BDMC), and the gap between these bounds is called the BDMC gap (Grosse et al., 2015). We note that AIS combined with BDMC has been used to estimate log-likelihoods for implicit generative models (Wu et al., 2016). In this work, we validate our AIS experiments by using the BDMC gap to measure the accuracy of our partition function estimators. 2.2 RATE DISTORTION THEORY. Let $\mathbf{x}$ be a random variable that comes from the data distribution $p_d(\mathbf{x})$. Shannon's fundamental compression theorem states that we can compress this random variable losslessly at the rate of $H(\mathbf{x})$.
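As a concrete reference for the AIS recursion above, the sketch below estimates a log partition function with geometric intermediate distributions between a standard normal $p_0$ and an unnormalized target, accumulating the log importance weights of Eq. (3) and applying one random-walk Metropolis step per temperature. The schedule, kernel, and hyperparameters are illustrative assumptions, not the configuration used in this paper's experiments.

```python
# Minimal AIS sketch for estimating log Z of an unnormalized density p_n(z),
# using geometric intermediate distributions between a standard normal p_0 and
# the target, with a random-walk Metropolis transition kernel.
import numpy as np

def ais_log_z(log_p_tilde, dim, n_dists=500, n_chains=64, step=0.5, seed=0):
    rng = np.random.default_rng(seed)
    betas = np.linspace(0.0, 1.0, n_dists + 1)
    log_p0 = lambda z: -0.5 * np.sum(z**2, axis=1) - 0.5 * dim * np.log(2 * np.pi)

    z = rng.standard_normal((n_chains, dim))   # exact samples from p_0
    log_w = np.zeros(n_chains)                 # running log importance weights
    for b_prev, b in zip(betas[:-1], betas[1:]):
        # Accumulate log p_b(z) - log p_{b_prev}(z), as in Eq. (3).
        log_w += (b - b_prev) * (log_p_tilde(z) - log_p0(z))
        # One Metropolis step leaving p_b (proportional to p0^(1-b) * p_n^b) invariant.
        log_target = lambda x, b=b: (1 - b) * log_p0(x) + b * log_p_tilde(x)
        prop = z + step * rng.standard_normal(z.shape)
        accept = np.log(rng.random(n_chains)) < log_target(prop) - log_target(z)
        z[accept] = prop[accept]
    # Average of the M chain weights: a stochastic lower bound on log Z (Jensen).
    return np.logaddexp.reduce(log_w) - np.log(n_chains)

# Sanity check on a tractable target: an unnormalized Gaussian with variance 4,
# whose true log Z in d dimensions is (d/2) * log(2*pi*4).
d, var = 2, 4.0
estimate = ais_log_z(lambda z: -0.5 * np.sum(z**2, axis=1) / var, dim=d)
print(estimate, 0.5 * d * np.log(2 * np.pi * var))
```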
But if we allow lossy compression, we can compress $\mathbf{x}$ at a rate $R$, where $R \le H(\mathbf{x})$, using the code $\mathbf{z}$, and obtain a lossy reconstruction $\hat{\mathbf{x}} = f(\mathbf{z})$ with distortion $D$, given a distortion measure $d(\mathbf{x}, \hat{\mathbf{x}}) = d(\mathbf{x}, f(\mathbf{z}))$. Rate distortion theory quantifies the trade-off between the lossy compression rate $R$ and the distortion $D$. The rate distortion function $R(D)$ is defined as the minimum number of bits per sample required to achieve lossy compression of the data such that the average distortion measured by the distortion function is less than $D$. Shannon's rate distortion theorem states that $R(D)$ equals the minimum of the following optimization problem: $\min_{q(\mathbf{z}|\mathbf{x})} I(\mathbf{z}; \mathbf{x}) \ \text{s.t.}\ \mathbb{E}_{q(\mathbf{x}, \mathbf{z})}[d(\mathbf{x}, f(\mathbf{z}))] \le D$, (5) where the optimization is over the channel conditional distribution $q(\mathbf{z}|\mathbf{x})$. Suppose the data distribution is $p_d(\mathbf{x})$. The channel conditional $q(\mathbf{z}|\mathbf{x})$ induces the joint distribution $q(\mathbf{z}, \mathbf{x}) = p_d(\mathbf{x})\, q(\mathbf{z}|\mathbf{x})$, which defines the mutual information $I(\mathbf{z}; \mathbf{x})$. $q(\mathbf{z})$ is the marginal distribution over $\mathbf{z}$ of the joint distribution $q(\mathbf{z}, \mathbf{x})$, and is called the output marginal distribution. We can rewrite the optimization of Eq. 5 using the method of Lagrange multipliers as follows: $\min_{q(\mathbf{z}|\mathbf{x})} I(\mathbf{z}; \mathbf{x}) + \beta\, \mathbb{E}_{q(\mathbf{z}, \mathbf{x})}[d(\mathbf{x}, f(\mathbf{z}))]$. (6)
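To build intuition for the Lagrangian in Eq. (6), note that for a finite toy alphabet it can be minimized exactly with the classical Blahut-Arimoto iterations, and sweeping β traces out the rate-distortion curve. The sketch below covers only this textbook special case; the AIS-based estimator developed in this paper targets the continuous, high-dimensional setting with a generative decoder.

```python
# Blahut-Arimoto sketch for the Lagrangian in Eq. (6) on a finite alphabet.
# This is a textbook illustration of the beta sweep, not the paper's estimator.
import numpy as np

def blahut_arimoto(p_x, dist, beta, n_iters=200):
    """Return (rate_in_nats, distortion) for one beta.

    p_x  : (n,) source probabilities.
    dist : (n, m) distortion matrix d(x, x_hat).
    """
    q_xhat = np.full(dist.shape[1], 1.0 / dist.shape[1])    # output marginal
    for _ in range(n_iters):
        # Optimal channel q(x_hat|x) for the current output marginal.
        cond = q_xhat[None, :] * np.exp(-beta * dist)
        cond /= cond.sum(axis=1, keepdims=True)
        # Optimal output marginal for the current channel.
        q_xhat = p_x @ cond
    rate = np.sum(p_x[:, None] * cond * np.log(cond / q_xhat[None, :]))
    distortion = np.sum(p_x[:, None] * cond * dist)
    return rate, distortion

# Sweeping beta traces the rate-distortion curve of a toy binary source under
# Hamming distortion (analytically, R(D) = H_b(0.2) - H_b(D) for D <= 0.2).
p_x = np.array([0.8, 0.2])
dist = 1.0 - np.eye(2)
curve = [blahut_arimoto(p_x, dist, b) for b in np.linspace(0.1, 10.0, 20)]
```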
This paper presents a method for evaluating latent-variable generative models in terms of the rate-distortion curve, which compares the number of bits needed to encode the representation with how well one can reconstruct an input under some distortion measure. To estimate this curve, the authors use AIS and show how intermediate distributions in AIS can be used to bound and estimate rate and distortion. They apply their evaluation to GANs, VAEs, and AAEs trained on MNIST and CIFAR-10.
SP:950c845314ba6a65208f78be3e42b47d79befd7f
The paper proposes a new way to evaluate generative models that don't have tractable likelihoods, such as VAEs or GANs. Such generative models are composed of a prior over latent variables and a decoder that maps latent variables to data. The idea is to evaluate a trained model in terms of the best (lossy) compression rate that can be achieved by encoding a datapoint (e.g. an image) into the latent space, as a function of a permitted distortion between the datapoint and its reconstruction after decoding. The paper describes a method that estimates an upper bound on this rate-distortion curve using annealed importance sampling. The method is applied in evaluating and comparing a few VAE, GAN and AAE architectures on images (MNIST and CIFAR-10).
SP:950c845314ba6a65208f78be3e42b47d79befd7f
On the Dynamics and Convergence of Weight Normalization for Training Neural Networks
1 INTRODUCTION. Dynamic normalization in neural networks is a re-parametrization procedure between the layers that improves stability during training and leads to faster convergence. This approach was popularized with the introduction of Batch Normalization (BatchNorm) in [20] and has led to a plethora of additional normalization methods, notably including Layer Normalization (LayerNorm) [6] and Weight Normalization (WeightNorm) [28]. WeightNorm was proposed as a method that emulates BatchNorm and benefits from similar stability and convergence properties. Moreover, WeightNorm has the advantage of not requiring a batch setting, therefore considerably reducing the computational overhead that is imposed by BatchNorm [16]. WeightNorm is widely used in the training of neural networks and is the focus of this work. Today, normalization methods are ubiquitous in the training of neural nets since in practice they significantly improve the convergence speed and stability in training. Despite the impressive empirical results and massive popularity of dynamic normalization methods, explaining their utility and proving that they converge when training with non-smooth, non-convex loss functions has remained an unsolved problem. In this paper we provide sufficient conditions on the data, initialization, and over-parameterization for dynamically normalized ReLU networks to converge to a global minimum of the loss function, and rigorously illustrate the utility of normalization methods. Consider the class of 2-layer ReLU neural networks $f : \mathbb{R}^d \to \mathbb{R}$ parameterized by $(\mathbf{W}, \mathbf{c}) \in \mathbb{R}^{m \times d} \times \mathbb{R}^m$ as $f(\mathbf{x}; \mathbf{W}, \mathbf{c}) = \frac{1}{\sqrt{m}} \sum_{k=1}^m c_k\, \sigma(\mathbf{w}_k^\top \mathbf{x})$. (1.1) Here the activation function is the ReLU, $\sigma(s) = \max\{s, 0\}$ [26], $m$ denotes the width of the second layer, and $f$ is normalized accordingly by a factor of $\sqrt{m}$. We investigate gradient descent training with WeightNorm for (1.1), which re-parameterizes the network in terms of $(\mathbf{V}, \mathbf{g}, \mathbf{c}) \in \mathbb{R}^{m \times d} \times \mathbb{R}^m \times \mathbb{R}^m$ as $f(\mathbf{x}; \mathbf{V}, \mathbf{g}, \mathbf{c}) = \frac{1}{\sqrt{m}} \sum_{k=1}^m c_k\, \sigma\!\left(g_k \cdot \frac{\mathbf{v}_k^\top \mathbf{x}}{\|\mathbf{v}_k\|_2}\right)$. (1.2) This gives a parameterization similar to [14], which studies convergence of gradient optimization of convolutional filters on Gaussian data. We consider the regression task, optimizing with respect to the L2 loss with random parameter initialization, and focus on the over-parametrized regime, meaning that $m > n$, where $n$ is the number of training samples. The neural network function class (1.1) has been studied in many papers including [3, 15, 33, 36], along with other similar over-parameterized architectures [1, 14, 23]. An extensive series of recent works proves that feed-forward ReLU networks converge to zero training error when trained with gradient descent from random initialization. Nonetheless, to the best of our knowledge, there are no proofs that ReLU networks trained with dynamic normalization on general data converge to a global minimum. This is in part because normalization methods completely change the optimization landscape during training. Here we show that neural networks of the form given above converge at a linear rate when trained with gradient descent and WeightNorm. The analysis is based on the over-parameterization of the networks, which allows for guaranteed descent while the gradient is non-zero. For regression training, a group of papers studied the trajectory of the networks' predictions and showed that they evolve via a “neural tangent kernel” (NTK) as introduced by Jacot et al. [21].
The latter paper studies neural network convergence in the continuous limit of infinite-width over-parameterization, while the works of [3, 15, 27, 33, 36] analyze the finite-width setting. For finite-width over-parameterized networks, the training evolution also exhibits a kernel that takes the form of a Gram matrix. In these works, the convergence rate is dictated by the least eigenvalue of the kernel. We build on this fact, and also on the general ideas of the proof of [15] and the refined work of [3]. Compared with un-normalized training, we prove that normalized networks follow a modified kernel evolution that features a “length-direction” decomposition of the NTK. This leads to two convergence regimes in WeightNorm training and explains the utility of WeightNorm from the perspective of the NTK. In the settings considered, WeightNorm significantly reduces the amount of over-parameterization needed for provable convergence, as compared with un-normalized settings. The decomposition of the NTK also connects to observations of [12] that discuss “lazy training”, which refers to a training regime where the weights of the network stay close to their initialization (see Section 6). Further, we present a more careful analysis that leads to improved over-parameterization bounds as compared with [15]. In this work we rigorously analyze the dynamics of weight normalization training and its convergence from the perspective of the neural tangent kernel. We discover that WeightNorm training has two regimes with distinct behaviors. The main contributions of this work are: • We prove the first general convergence result for dynamically normalized 2-layer ReLU networks trained with gradient descent. Our formulation does not assume the existence of a teacher network and has mild assumptions on the training data. • We explain the utility of normalization methods via a decomposition of the neural tangent kernel. In the analysis we highlight two distinct convergence regimes and give a concrete example of “lazy training” for finite-step gradient descent. • It is shown that finite-step gradient descent converges for all weight magnitudes at initialization, and we significantly reduce the amount of over-parameterization required for provable convergence as compared with un-normalized training. The paper is organized as follows. In Section 2 we provide background on WeightNorm, and we derive key evolution dynamics of training in Section 3. We present and discuss our main results, alongside the idea of the proof, in Section 4. We discuss related work in Section 5, and offer a discussion of our results and their implications for dynamic normalization training and “lazy training” in Section 6. Proofs are presented in the Appendix. 2 WEIGHTNORM. Here we give an overview of the WeightNorm procedure and review some known properties of normalization methods. Notation. We use lowercase, lowercase boldface, and uppercase boldface letters to denote scalars, vectors, and matrices, respectively. We denote the Rademacher distribution as $U\{1, -1\}$ and write $N(\mu, \Sigma)$ for a Gaussian with mean $\mu$ and covariance $\Sigma$. Training points are denoted by $\mathbf{x}_1, \dots, \mathbf{x}_n \in \mathbb{R}^d$ and parameters of the first layer by $\mathbf{v}_k \in \mathbb{R}^d$. We use $\sigma(x) := \max\{x, 0\}$, and write $\|\cdot\|_2$, $\|\cdot\|_F$ for the spectral and Frobenius norms of matrices. $\lambda_{\min}(\mathbf{A})$ is used to denote the minimum eigenvalue of a matrix $\mathbf{A}$, and $\langle \cdot, \cdot \rangle$ denotes the Euclidean inner product.
For a vector $\mathbf{v}$, denote the $\ell_2$ vector norm as $\|\mathbf{v}\|_2$, and for a positive definite matrix $\mathbf{S}$ define the induced vector norm $\|\mathbf{v}\|_{\mathbf{S}} := \sqrt{\mathbf{v}^\top \mathbf{S} \mathbf{v}}$. The projections of $\mathbf{x}$ onto $\mathbf{u}$ and $\mathbf{u}^\perp$ are defined as $\mathbf{x}^{\mathbf{u}} := \frac{\mathbf{u}\mathbf{u}^\top \mathbf{x}}{\|\mathbf{u}\|_2^2}$ and $\mathbf{x}^{\mathbf{u}^\perp} := \left(\mathbf{I} - \frac{\mathbf{u}\mathbf{u}^\top}{\|\mathbf{u}\|_2^2}\right)\mathbf{x}$. Denote the indicator function of an event $A$ as $\mathbb{1}_A$, and for a weight vector at time $t$, $\mathbf{v}_k(t)$, and a data point $\mathbf{x}_i$ we denote $\mathbb{1}_{ik}(t) := \mathbb{1}\{\mathbf{v}_k(t)^\top \mathbf{x}_i \ge 0\}$. WeightNorm procedure. For a single neuron $\sigma(\mathbf{w}^\top \mathbf{x})$, WeightNorm re-parametrizes the weight $\mathbf{w} \in \mathbb{R}^d$ in terms of $\mathbf{v} \in \mathbb{R}^d$, $g \in \mathbb{R}$ as $\mathbf{w}(\mathbf{v}, g) = g \cdot \frac{\mathbf{v}}{\|\mathbf{v}\|_2}$, so that the neuron becomes $\sigma\!\left(g \cdot \frac{\mathbf{v}^\top \mathbf{x}}{\|\mathbf{v}\|_2}\right)$. (2.1) This decouples the magnitude and direction of each weight vector (referred to as the “length-direction” decomposition). In comparison, for BatchNorm each output $\mathbf{w}^\top \mathbf{x}$ is normalized according to the average statistics in a batch. We can draw the following analogy between WeightNorm and BatchNorm if the inputs $\mathbf{x}_i$ are centered ($\mathbb{E}\mathbf{x} = 0$) and the covariance matrix is known ($\mathbb{E}\mathbf{x}\mathbf{x}^\top = \mathbf{S}$). In this case, batch training with BatchNorm amounts to $\sigma\!\left(\gamma \cdot \frac{\mathbf{w}^\top \mathbf{x}}{\sqrt{\mathbb{E}_{\mathbf{x}}(\mathbf{w}^\top \mathbf{x}\mathbf{x}^\top \mathbf{w})}}\right) = \sigma\!\left(\gamma \cdot \frac{\mathbf{w}^\top \mathbf{x}}{\sqrt{\mathbf{w}^\top \mathbf{S}\mathbf{w}}}\right) = \sigma\!\left(\gamma \cdot \frac{\mathbf{w}^\top \mathbf{x}}{\|\mathbf{w}\|_{\mathbf{S}}}\right)$. (2.2) From this perspective, WeightNorm is a special case of (2.2) with $\mathbf{S} = \mathbf{I}$ [22, 28]. Properties of WeightNorm. We start by giving an overview of known properties of WeightNorm that will be used to derive the gradient flow dynamics of WeightNorm training. For the re-parameterization (2.1) of a network function $f$ that is initially parameterized with a weight $\mathbf{w}$, the gradient $\nabla_{\mathbf{w}} f$ relates to the gradients $\nabla_{\mathbf{v}} f$, $\nabla_g f$ by the identities $\nabla_{\mathbf{v}} f = \frac{g}{\|\mathbf{v}\|_2} (\nabla_{\mathbf{w}} f)^{\mathbf{v}^\perp}$, $\nabla_g f = (\nabla_{\mathbf{w}} f)^{\mathbf{v}}$. This implies that $\nabla_{\mathbf{v}} f \cdot \mathbf{v} = 0$ for each input $\mathbf{x}$ and parameter $\mathbf{v}$. For gradient flow, this orthogonality results in $\|\mathbf{v}(0)\|_2 = \|\mathbf{v}(t)\|_2$ for all $t$. For gradient descent (with step size $\eta$), the discretization in conjunction with orthogonality leads to increasing parameter magnitudes during training [4, 19, 28], as illustrated in Figure 1: $\|\mathbf{v}(s+1)\|_2^2 = \|\mathbf{v}(s)\|_2^2 + \eta^2 \|\nabla_{\mathbf{v}} f\|_2^2 \ge \|\mathbf{v}(s)\|_2^2$. (2.3) Problem Setup. We analyze (1.1) with WeightNorm training (1.2), so that $f(\mathbf{x}; \mathbf{V}, \mathbf{c}, \mathbf{g}) = \frac{1}{\sqrt{m}} \sum_{k=1}^m c_k\, \sigma\!\left(g_k \cdot \frac{\mathbf{v}_k^\top \mathbf{x}}{\|\mathbf{v}_k\|_2}\right)$. We take an initialization in the spirit of [17, 28]: $\mathbf{v}_k(0) \sim N(0, \alpha^2 \mathbf{I})$, $c_k \sim U\{-1, 1\}$, and $g_k(0) = \|\mathbf{v}_k(0)\|_2 / \alpha$, (2.4) where $\alpha$ controls the variance of $\mathbf{v}_k$ at initialization. The initialization of $g_k(0)$ is therefore taken to be independent of $\alpha$. We remark that the initialization (2.4) gives the same initial output distribution as in methods that study the un-normalized network class (1.1). The parameters of the network are optimized using the training data $\{(\mathbf{x}_1, y_1), \dots, (\mathbf{x}_n, y_n)\}$ with respect to the square loss $L(f) = \frac{1}{2} \sum_{i=1}^n (f(\mathbf{x}_i) - y_i)^2 = \frac{1}{2}\|\mathbf{f} - \mathbf{y}\|_2^2$, (2.5) where $\mathbf{f} = (f_1, f_2, \dots, f_n)^\top = (f(\mathbf{x}_1), f(\mathbf{x}_2), \dots, f(\mathbf{x}_n))^\top$ and $\mathbf{y} = (y_1, y_2, \dots, y_n)^\top$.
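The two facts quoted above, orthogonality of $\nabla_{\mathbf{v}} f$ to $\mathbf{v}$ and the norm growth (2.3), are easy to check numerically. The following PyTorch sketch instantiates the weight-normalized network (1.2) with the initialization (2.4) and verifies both; the width, dimension, step size, and data are arbitrary illustrative choices, not the settings analyzed in the paper.

```python
# Sketch: weight-normalized two-layer ReLU network (1.2) with init (2.4).
# Checks that grad wrt each v_k is orthogonal to v_k, and that one gradient
# step cannot decrease ||v_k||_2 (Eq. 2.3). All hyperparameters are arbitrary.
import torch

torch.manual_seed(0)
m, d, alpha, eta = 256, 10, 1.0, 0.1
V = (alpha * torch.randn(m, d)).requires_grad_()               # v_k(0) ~ N(0, alpha^2 I)
g = (V.detach().norm(dim=1) / alpha).clone().requires_grad_()  # g_k(0) = ||v_k(0)|| / alpha
c = (2 * torch.randint(0, 2, (m,)) - 1).float()                # c_k ~ U{-1, 1}

def f(x):
    # f(x; V, g, c) = (1/sqrt(m)) * sum_k c_k * relu(g_k * v_k^T x / ||v_k||)
    pre = g * (V @ x) / V.norm(dim=1)
    return (c * torch.relu(pre)).sum() / m ** 0.5

x, y = torch.randn(d), torch.tensor(1.0)
loss = 0.5 * (f(x) - y) ** 2
loss.backward()

# Row-wise inner products <grad_{v_k} L, v_k> vanish up to float error.
print((V.grad * V).sum(dim=1).abs().max())

# One gradient-descent step: each ||v_k||_2 can only grow (Eq. 2.3).
with torch.no_grad():
    V_new = V - eta * V.grad
print((V_new.norm(dim=1) >= V.norm(dim=1) - 1e-6).all())
```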
This paper presents a general proof of the convergence of two-layer ReLU networks with weight normalization trained with gradient descent. Weight normalization re-parameterizes the weights to decouple the directions and lengths of kernels. Depending on the lengths of the kernels, the training process can be divided into two regimes, corresponding to updates of lengths and directions, respectively. One of the regimes naturally corresponds to lazy training, where the directions remain stable, and there are transitions from one regime to the other when the lengths gradually change during the training process.
SP:24573aabc247456e2c8f00de434d586a8c18fb26
Global convergence of NNs is an important research direction in deep learning. There has been significant progress in this direction since last year. Most notably, the neural tangent kernel (NTK) [1] shows that in the infinite-width setting, the NTK is deterministic and remains almost constant during gradient descent, so NNs are essentially the same as kernel methods. Proofs of global convergence of NNs (without normalization) are built on this intuition.
SP:24573aabc247456e2c8f00de434d586a8c18fb26
DIME: AN INFORMATION-THEORETIC DIFFICULTY MEASURE FOR AI DATASETS
1 INTRODUCTION. Empirical machine learning research relies heavily on comparing the performance of algorithms on a few standard benchmark datasets. Moreover, researchers frequently introduce new datasets that they believe to be more challenging than existing benchmarks. However, we lack objective measures of dataset difficulty that are independent of the choices made about algorithm and model design. It is also hard to compare algorithmic progress across data modalities, such as language and vision. So, for instance, it is difficult to compare the relative progress made on a sentiment analysis benchmark such as the Stanford Sentiment Treebank (SST) (Socher et al., 2013) and an image classification benchmark, like CIFAR-10 (Krizhevsky, 2009). With these challenges in mind, we propose a model-agnostic and modality-agnostic measure for comparing how difficult it is to perform supervised learning on a given dataset. Intuitively, assuming that dataset examples are sampled i.i.d. from a static true distribution, we argue that the difficulty of a dataset can be decoupled into two relatively independent sources: (a) approximation complexity, the number of samples required to approximate the true distribution up to a certain accuracy, and (b) distributional complexity, the intrinsic difficulty involved in modeling the statistical relationship between the labels and features. We focus our analysis on the second source of intrinsic difficulty in supervised learning, where both features and labels are available. To provide a model-agnostic measure, we turn to the information-theoretic approach. Indeed, there already exist lower bounds on the lowest possible errors given the distribution of the data. If both the samples and labels are discrete, Fano's inequality shows that the probability of 0-1 error is lower bounded by terms related to the conditional entropy H(Y|X), where X is the random variable representing the features and Y is the label. When both the features X and the label Y are continuous, results on differential entropy also show that the expected L2 error is lower bounded. However, for most supervised learning datasets, where the features are continuous and the labels are discrete, it is unknown how the lowest possible error can be controlled regardless of models. In this paper, we show that even for the hybrid case where labels are discrete and features are continuous, with some additional assumptions, Fano's inequality still holds. Moreover, we show that the lowest possible probability Pe of the 0-1 error for a given data distribution is lower bounded by terms related to a hybrid conditional entropy H(Y|X). We further design an estimator for the lower bound of Pe based on our generalized Fano's inequality. The estimator uses neural networks to approximate the KL divergence based on the Donsker-Varadhan representation, which is then used to estimate the hybrid conditional entropy H(Y|X) as well as the lower bound of Pe. We emphasize that even though our lower bound is model-agnostic, the proposed estimator is based on a neural network. However, we empirically show that, for most image and natural language datasets, a multilayer perceptron-based estimator produces a measure that effectively captures the difficulty of the data and aligns well with the performance of state-of-the-art models.
Related Work: Although conditional entropy and mutual information estimation have been extensively studied, research has focused on purely discrete or purely continuous data. Nair et al. (2006) were among the first to study the theoretical properties of the case of a mixture of discrete and continuous variables. Ross (2014), Gao et al. (2017), and Beknazaryan et al. (2019) proposed approaches for estimating mutual information in the mixture case based on density ratio estimation (e.g., binning, kNN, or kernel methods), which is unsatisfactory for high-dimensional data such as images and text. We use neural network estimation (Belghazi et al., 2018) to avoid these issues. More importantly, we are the first to connect the hybrid conditional entropy with the lowest classification error and are able to use it as a difficulty measure for datasets. 2 DESIGNING A DATASET DIFFICULTY MEASURE. For supervised learning across data modalities such as images and text, data samples can usually be viewed as feature-label pairs $(x, y)$ where $x \in \mathcal{X} \subset \mathbb{R}^{d_x}$ and $y \in \mathcal{Y}$. We focus on classification problems where the labels $y$ are discrete, i.e., $\mathcal{Y} \subset \mathbb{Z}^+$. We denote the joint distribution of the feature-label pairs as $P_{XY}$. The marginal distributions of the features and labels are denoted as $P_X$ and $P_Y$, respectively. We make the following widely adopted assumption from the learning theory literature about how samples are generated: Assumption 1. The feature-label pairs $(x, y)$ in the datasets, both training and testing, are sampled i.i.d. from a static distribution $(x, y) \sim P_{XY}$. Intuitively, there are many possible indicators of the potential difficulty of a dataset: the number of features, the number of classes, the number of samples, the distinguishability of samples across classes, as well as the difference between the data distributions of the training set and the testing set. However, none of these indicators alone can fully describe the relative difficulty of a dataset. From Assumption 1, if the samples $(x, y)$ are sampled i.i.d. from $P_{XY}$, where $Y$ is discrete, a natural measure that characterizes the difficulty of the data distribution is the best probability of 0-1 error that can be achieved by any estimator. Definition 1 (Model-Agnostic Error). $P_e = \inf_f \Pr_{(x, y) \sim P_{XY}}[f(x) \neq y]$. This measure is straightforward, but unfortunately it is hard to compute since it involves evaluation against all possible estimators. However, with mild assumptions, it can be lower bounded by terms related to the conditional entropy, which is much easier to evaluate. 2.1 DISCRETE FEATURES. In the case where the features $x \in \mathcal{X}$ are discrete, according to Fano's theorem, $P_e$ is lower bounded: Fano's inequality. If both $X$ and $Y$ are discrete random variables, then $P_e \ge \frac{H(Y|X) - 1}{\log |\mathcal{Y}|}$, where $|\mathcal{Y}|$ is the cardinality of the label set $\mathcal{Y}$, and $H(Y|X)$ is the conditional entropy. However, even though data can be represented using discrete integers, treating the features as discrete random variables leads to the following difficulties: 1. The cardinality of the feature space becomes extremely large if discrete features are used. For image data, since each pixel is represented as an integer, the (raw) feature dimension would be the number of pixels in the image, which can be extremely large. Similarly, for language data, when the sequence length is long, the feature dimension becomes large very quickly. 2.
Given the large feature space, finding a matching set of features between training and testing data from the limited number of training and test samples would be unlikely. As a result, probability mass estimation on each discrete value would be impractical, since we may see at most one sample for each discrete configuration. 2.2 CONTINUOUS FEATURES. As opposed to treating the features $x$ as discrete random variables, if we view them as i.i.d. samples from a continuous distribution with probability density $p_x$, we can estimate the conditional entropy $H(Y|X)$ under some smoothness assumptions and also infer the model-agnostic error $P_e$. However, the classical Fano's inequality only holds for discrete random variables. For the case with continuous features and discrete labels, it has not been shown how $P_e$ can be controlled. In this paper, we prove a generalized version of Fano's inequality that holds for the continuous-feature-discrete-label scenario. Formally, for the continuous-$X$-discrete-$Y$ case: Definition 2 (Hybrid Conditional Entropy). $H(Y|X) := \mathbb{E}_X\left[-\sum_{y=1}^{|\mathcal{Y}|} P(Y = y|X) \log P(Y = y|X)\right]$, (1) where $P(Y = y|X) := \mathbb{E}[\mathbb{1}(Y = y)|X]$. This definition of $H(Y|X)$ is consistent with the classical definition, in the sense that both of them capture how much information or uncertainty about $Y$ is left given $X$. Next, to connect $P_e$ and the hybrid $H(Y|X)$, we introduce an assumption on the function $f$. Definition 3 (Smooth Discretization Property). The function $f : \mathcal{X} \to \mathcal{Y}$ satisfies the smooth discretization property if for every $y \in \mathcal{Y}$ and almost every $x \in \mathcal{X}$, $f(x) = y \iff \exists\, \delta > 0 \text{ s.t. } \forall \tilde{x} \in B_\delta(x),\ f(\tilde{x}) = y$, where $B_\delta(x) := \{\tilde{x} \in \Omega : \|\tilde{x} - x\|_2 < \delta\}$ is a $\delta$-neighborhood of $x$ in $\mathcal{X}$. This assumption on the classifier function $f$ is not unnatural considering that $f$ maps a continuous variable to a discrete variable. Without this assumption, it would be extremely hard to quantify the population error probability $P(f(X) \neq Y)$ since the behavior of $f$ may be erratic. Furthermore, in the real data setting, this assumption is always satisfied for every classifier $f$, as we can always construct a small enough neighborhood of each data point such that the neighborhoods are disjoint and assume $f$ is constant in each neighborhood. In this sense the assumption on $f$ is quite minimal. Now we are ready to extend Fano's inequality: Theorem 1 (Fano's Inequality for Continuous Features). Let $P_e$ be the minimum error probability, i.e., $P_e = \inf_f P(f(X) \neq Y)$, where $f$ is any estimator of $Y$ based on the observation $X$ that satisfies the smooth discretization property. Then we have $H(P_e) + P_e \log(m - 1) \ge H(Y|X)$, (2) where $m = |\mathcal{Y}|$ and $H(P_e) := -P_e \log P_e - (1 - P_e) \log(1 - P_e)$. Proof. See Appendix B. 3 ESTIMATING THE LOWER BOUND. It is natural to consider $P_e$ defined in Theorem 1 as a measure of dataset difficulty. Unfortunately, direct estimation of $P_e$ is impractical since one has to evaluate the estimation error against all possible estimators. However, Theorem 1 provides an alternative route: estimating a lower bound on $P_e$ through estimating the hybrid conditional entropy $H(Y|X)$ defined in Equation (1). 3.1 CONDITIONAL ENTROPY ESTIMATION. In real applications, direct calculation of the hybrid conditional entropy $H(Y|X)$ according to Definition 2 is impossible since $P(Y = y|X)$ is unknown.
3 ESTIMATING THE LOWER BOUND

It is natural to consider P_e defined in Theorem 1 as a measure of dataset difficulty. Unfortunately, direct estimation of P_e is impractical since one has to evaluate the estimation error against all possible estimators. However, Theorem 1 provides an alternative: estimating a lower bound on P_e through estimating the hybrid conditional entropy H(Y|X) defined in Equation (1).

3.1 CONDITIONAL ENTROPY ESTIMATION

In real applications, direct calculation of the hybrid conditional entropy H(Y|X) according to Definition 2 is impossible since P(Y=y|X) is unknown. However, similar to the conditional entropy for discrete random variables, the hybrid conditional entropy H(Y|X) can also be written as
$$H(Y|X) = H(Y) - \sum_{y=1}^{|\mathcal{Y}|} P(Y=y)\, \mathrm{KL}\big(P_{X|Y=y} \,\|\, P_X\big). \quad (3)$$
Please refer to Appendix A for a detailed proof of Equation (3). We also define the hybrid mutual information I(X; Y), which is compatible with H(Y|X):
$$I(X; Y) = \sum_{y=1}^{|\mathcal{Y}|} P(Y=y)\, \mathrm{KL}\big(P_{X|Y=y} \,\|\, P_X\big). \quad (4)$$
For benchmark datasets with balanced classes (e.g., CIFAR-10 and MNIST), this simplifies to $H(Y|X) = \log|\mathcal{Y}| - \frac{1}{|\mathcal{Y}|}\sum_{y=1}^{|\mathcal{Y}|} \mathrm{KL}\big(P_{X|Y=y} \,\|\, P_X\big)$. This indicates that if a dataset has more classes and the features of different classes are closer to each other on average, then H(Y|X) would be larger.
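Equation (3) suggests a simple plug-in estimator: estimate P(Y = y) from label frequencies, obtain one KL(P_{X|Y=y} || P_X) estimate per class with any KL estimator, and combine them. The sketch below is a hypothetical wrapper around such per-class estimates; the `kl_per_class` values are placeholders assumed to come from an external estimator.

```python
import numpy as np

def hybrid_conditional_entropy(labels, kl_per_class):
    """Equations (3)-(4): H(Y|X) = H(Y) - sum_y P(Y=y) KL(P_{X|Y=y} || P_X), in nats.

    labels: array of integer class labels for the dataset.
    kl_per_class: dict mapping class y -> an estimate of KL(P_{X|Y=y} || P_X).
    """
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    p_y = counts / counts.sum()
    h_y = -np.sum(p_y * np.log(p_y))                               # label entropy H(Y)
    mi = sum(p * kl_per_class[c] for c, p in zip(classes, p_y))    # Equation (4)
    return h_y - mi, mi                                            # (H(Y|X), I(X;Y))

# Hypothetical per-class KL estimates for a balanced 3-class dataset.
labels = np.repeat([0, 1, 2], 1000)
kl_per_class = {0: 0.9, 1: 1.1, 2: 0.8}
h_cond, mi = hybrid_conditional_entropy(labels, kl_per_class)
print(h_cond, mi)
```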
The paper proposes a measure of difficulty for datasets. Prior work in this space has often relied on individual indicators, such as the overlap of samples across different classes [A]. This work instead defines a model-agnostic error as the measure of difficulty, which should encompass all possible indicators of error. The paper then provides a lower bound on this error which can be estimated using a neural network [B].
SP:dbb2e549f21492129fac9e6944485440cfe093e0
DIME: AN INFORMATION-THEORETIC DIFFICULTY MEASURE FOR AI DATASETS
1 INTRODUCTION . Empirical machine learning research relies heavily on comparing performance of algorithms on a few standard benchmark datasets . Moreover , researchers frequently introduce new datasets that they believe to be more challenging than existing benchmarks . However , we lack objective measures of dataset difficulty that are independent of the choices made about algorithm- and model-design . Moreover , it is also hard to compare algorithmic progress across data modalities , such as language and vision . So , for instance , it is difficult to compare the relative progress made on a sentiment analysis benchmark such as the Stanford Sentiment Treebank ( SST ) ( Socher et al. , 2013 ) and an image classification benchmark , like CIFAR-10 ( Krizhevsky , 2009 ) . With these challenges in mind , we propose a model-agnostic and modality-agnostic measure for comparing how difficult it is to perform supervised learning on a given dataset . Intuitively , assuming that dataset examples are sampled i.i.d . from a static true distribution , we argue that the difficulty of a dataset can be decoupled into two relatively independent sources : ( a ) approximation complexity , the number of samples required to approximate the true distribution up to certain accuracy , and ( b ) distributional complexity , the intrinsic difficulty involved in modeling the statistical relationship between the labels and features . We focus our analysis on the second source of the intrinsic difficulty in supervised learning , where both features and labels are available . To provide a model-agnostic measure , we turn to the information-theoretic approach . Indeed , there already exist lower bounds on the lowest possible errors given the distribution of the data . If both the samples and labels are discrete , Fano ’ s inequality suggests the probability of 0-1 error is bounded by terms related to the conditional entropy H ( Y |X ) , where X is the random variable representing the features and Y is the label . When both the features X and label Y are continuous , results on differential entropy also suggest the expected L2 error is lower-bounded . However , in most of the supervised learning datasets where the features are continuous and the labels are discrete , it is unknown how the lowest possible error can be controlled regardless of models . In this paper , we show that even for the hybrid case where labels are discrete and features are continuous , with some additional assumptions , Fano ’ s inequality still holds . Moreover , we show that the lowest possible probability Pe of the 0-1 error for a given data distribution is lower bounded by terms related to a hybrid conditional entropy H ( Y |X ) . We further design an estimator for the lower bound of Pe based on our generalized Fano ’ s inequality . The estimator uses neural networks to approximate the KL divergence based on Donsker-Varadhan representation , which is then used to estimate the hybrid conditional entropy H ( Y |X ) as well as the lower bound of Pe . We emphasize that even though our lower bound is model-agnostic , the proposed estimator is based on a neural network . However , we empirically show that , for most image and natural language datasets , a multilayer perceptron-based estimator produces a measure that effectively captures the difficulty of the data and aligns well with the performance of state-of-the-art models . 
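As a rough illustration of the estimator described above (and not the authors' implementation), the Donsker-Varadhan representation KL(P || Q) = sup_T E_P[T] − log E_Q[exp(T)] can be approximated by parameterizing T with a small MLP and maximizing the bound by gradient ascent. The PyTorch sketch below uses a network size, optimizer, and training loop that are our own assumptions.

```python
import math
import torch
import torch.nn as nn

class Critic(nn.Module):
    """Small MLP playing the role of T in the Donsker-Varadhan bound."""
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

def dv_kl_estimate(p_samples, q_samples, steps=2000, lr=1e-3):
    """Sample-based lower bound on KL(P || Q): max_T E_P[T] - log E_Q[exp(T)]."""
    critic = Critic(p_samples.shape[1])
    opt = torch.optim.Adam(critic.parameters(), lr=lr)
    log_nq = math.log(len(q_samples))
    for _ in range(steps):
        bound = critic(p_samples).mean() - (torch.logsumexp(critic(q_samples), dim=0) - log_nq)
        loss = -bound                      # gradient ascent on the DV bound
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return (critic(p_samples).mean()
                - (torch.logsumexp(critic(q_samples), dim=0) - log_nq)).item()

# Toy sanity check: KL(N(1, I) || N(0, I)) in 2-D is exactly 1.0 nat.
torch.manual_seed(0)
p = torch.randn(4000, 2) + 1.0
q = torch.randn(4000, 2)
print(dv_kl_estimate(p, q))
```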
So this paper is interesting. It's sort of pursuing a similar path as recent works that use neural networks to evaluate (e.g., inception score, FID), notably those that optimize some lower bound of an information measure (e.g., MINE). In this case, the setting is "datasets", and the thing they are trying to quantify is the difficulty of the dataset as expressed by a lower bound on the lowest possible probability of the 0-1 error, which they show is related to the conditional entropy of the underlying input / label random variables (which makes sense). This direction seems very useful, and using neural network optimization to attempt to crack defining "dataset complexity" or "difficulty" seems a worthwhile venture.
HyperEmbed: Tradeoffs Between Resources and Performance in NLP Tasks with Hyperdimensional Computing enabled embedding of n-gram statistics
1 INTRODUCTION . Recent work ( Strubell et al. , 2019 ) has brought significant attention by demonstrating potential cost and environmental impact of developing and training state-of-the-art models for Natural Language Processing ( NLP ) tasks . The work suggested several countermeasures for changing the situation . One of them recommends a concerted effort by industry and academia to promote research of more computationally efficient algorithms . The main focus of this paper falls precisely in this domain . In particular , we consider NLP systems using a well-known technique called n-gram statistics . The key idea is that hyperdimensional computing ( Kanerva , 2009 ) allows forming distributed representations of the conventional n-gram statistics ( Joshi et al. , 2016 ) . The use of these distributed representations , in turn , allows trading-off the performance of an NLP system ( e.g. , F1 score ) and its computational resources ( i.e. , time and memory ) . The main contribution of this paper is the systematic study of these tradeoffs on nine machine learning algorithms using several benchmark classification datasets . We demonstrate the usefulness of hyperdimensional computing-based embedding , which is highly time and memory efficient . Our experiments on a well-known dataset ( Braun et al. , 2017 ) for intent classification show that it is possible to reduce memory usage by ∼ 10x and speed-up training by ∼ 5x without compromising the F1 score . Several important use-cases are motivating the efforts towards trading-off the performance of a system against computational resources required to achieve that performance : high-throughput systems with an extremely large number of requests/transactions ( the power of one per cent ) ; resource-constrained systems where computational resources and energy are scarce ( edge computing ) ; green computing systems taking into account the aspects of environmental sustainability when considering the efficiency of algorithms ( AI HLEG , 2019 ) . The paper is structured as follows . Section 2 covers the related work . Section 3 outlines the evaluation and describes the datasets . The methods being used are presented in Section 4 . Section 5 evaluates of the experimental results . Discussion and concluding remarks are presented in Section 6 . 2 RELATED WORK . Commonly , data for NLP tasks are represented in the form of vectors , which are then used as an input to machine learning algorithms . These representations range from dense learnable vectors to extremely sparse non-learnable vectors . Well-known examples of such representations include onehot encodings , count-based vectors , and Term Frequency Inverse Document Frequency ( TF-IDF ) among others . Despite being very useful , non-learnable representations have their disadvantages such as resource inefficiency due to their sparsity and absence of contextual information ( except for TF-IDF ) . Learnable vector representations such as word embeddings ( e.g. , Word2Vec ( Mikolov et al. , 2013 ) or GloVe ( Pennington et al. , 2014 ) ) partially address these issues by obtaining dense vectors in an unsupervised learning fashion . These representations are based on the distributional hypothesis : words located nearby in a vector space should have similar contextual meaning . The idea has been further improved in Joulin et al . ( 2016 ) by representing words with character n-grams . 
Another efficient way of representing a word is the concept of Byte Pair Encoding , which has been introduced in Gage ( 1994 ) . The disadvantage of the learnable representations , however , is that they require pretraining involving large train corpus as well as have a large memory footprint ( in order of GB ) . As an alternative to word/character embedding , Shridhar et al . ( 2019 ) introduced the idea of Subword Semantic Hashing that uses a hashing method to represent subword tokens , thus , reducing the memory footprint ( in order of MB ) and removing the necessity of pretraining over a large corpus . The approach has demonstrated the state-of-the-art results on three datasets for intent classification . The Subword Semantic Hashing , however , relies on n-gram statistics for extracting the representation vector used as an input to classification algorithms . It is worth noting that the conventional n-gram statistics uses a positional representation where each position in the vector can be attributed to a particular n-gram . The disadvantage of the conventional n-gram statistics is that the size of the vector grows exponentially with n. Nevertheless , it is possible to untie the size of representation from n by using distributed representations ( Hinton et al. , 1986 ) , where the information is distributed across the vectors positions . In particular , Joshi et al . ( 2016 ) suggest how to embed conventional n-gram statistics into a high-dimensional vector ( HD vector ) using the principles of hyperdimensional computing . Hyperdimensional computing also known as Vector Symbolic Architectures ( Plate , 2003 ; Kanerva , 2009 ; Eliasmith , 2013 ) is a family of bio-inspired methods of manipulating and representing information . The method of embedding n-gram statistics into the distributed representation in the form of an HD vector has demonstrated promising results on the task of language identification while being hardware-friendly ( Rahimi et al. , 2016 ) . In Najafabadi et al . ( 2016 ) it was further applied to the classification of news articles into one of eight predefined categories . The method has also shown promising results ( Kleyko et al. , 2019 ) when using HD vectors for training Self-Organizing Maps ( Kohonen , 2001 ) . However there are no previous studies comprehensively exploring tradeoffs achievable with the method on benchmark NLP datasets when using the supervised classifiers . 3 EVALUATION OUTLINE . 3.1 CLASSIFIERS AND PERFORMANCE METRICS . To obtain the results applicable to a broad range of existing machine learning algorithms , we have performed experiments with several conventional classifiers . In particular , the following classifiers were studied : Ridge Classifier , k-Nearest Neighbors ( kNN ) , Multilayer Perceptron ( MLP ) , Passive Aggressive , Random Forest , Linear Support Vector Classifier ( SVC ) , Stochastic Gradient Descent ( SGD ) , Nearest Centroid , and Bernoulli Naive Bayes ( NB ) . All the classifiers are available in the scikit-learn library ( Pedregosa et al. , 2011 ) , which was used in the experiments . Since the main focus of this paper is the tradeoff between classification performance and computational resources , we have to define metrics for both aspects . The quality of the classification performance of a model will be measured by a simple and well-known metric – F1 score ( please see ( Fawcett , 2006 ) ) . 
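To make the evaluation protocol of Section 3.1 concrete, the scikit-learn sketch below trains a few of the listed classifiers on conventional character n-gram count features and reports the F1 score. It is an illustration only: the tiny inline corpus, the tri-gram feature extractor, and the weighted F1 averaging are our assumptions, not the paper's configuration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import RidgeClassifier, PassiveAggressiveClassifier
from sklearn.neighbors import KNeighborsClassifier, NearestCentroid
from sklearn.metrics import f1_score

# Tiny stand-in corpus; in the paper's setting these would be the datasets of Section 3.2.
train_texts = ["when is the next train", "next connection to munich",
               "how do i update ubuntu", "printer setup fails"]
train_labels = [0, 0, 1, 1]
test_texts = ["next train to the airport", "how to install updates"]
test_labels = [0, 1]

# Conventional character tri-gram statistics as the input representation.
vec = CountVectorizer(analyzer="char", ngram_range=(3, 3))
x_train = vec.fit_transform(train_texts)
x_test = vec.transform(test_texts)

for name, clf in [("Ridge", RidgeClassifier()),
                  ("kNN", KNeighborsClassifier(n_neighbors=1)),
                  ("PassiveAggressive", PassiveAggressiveClassifier()),
                  ("NearestCentroid", NearestCentroid())]:
    clf.fit(x_train, train_labels)
    pred = clf.predict(x_test)
    print(name, f1_score(test_labels, pred, average="weighted"))
```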
The computational resources will be characterized by three metrics: the time it takes to train a model, the time it takes to test the trained model, and the memory, where the memory is defined as the sum of the sizes of the input feature vectors for the train and test splits and the size of the trained model. To avoid dependence on particulars such as the specification of a computer and the dataset size, the train/test times and memory are reported as relative values (i.e., train/test speed-up and memory reduction), where the reference is the value obtained for the case of the conventional n-gram statistics.1

3.2 DATASETS

Four different datasets were used to obtain the empirical results reported in this paper: the Chatbot Corpus (Chatbot), the Ask Ubuntu Corpus (AskUbuntu), the Web Applications Corpus (WebApplication), and the 20 News Groups Corpus (20NewsGroups). The first three are referred to as small datasets. The Chatbot dataset comprises questions posed to a Telegram chatbot, which, in turn, answered questions about the public transport of Munich. The AskUbuntu and WebApplication datasets are questions and answers from StackExchange. The 20NewsGroups dataset comprises news posts labelled into several categories. All datasets have predetermined train and test splits. The first three datasets (Braun et al., 2017) are available on GitHub.2

The Chatbot dataset consists of two intents (Departure Time and Find Connection) with 206 questions. The corpus has a total of five different entity types (StationStart, StationDest, Criterion, Vehicle, Line), which were not used in our benchmarks, as the results were only for intent classification. The samples are in English; despite this, the train station names are in German, which is evident from the German letters that appear in the text (ä, ö, ü, ß). The dataset has the following data sample distribution (train/test): Departure Time (43/35); Find Connection (57/71).

The AskUbuntu dataset comprises five intents with the following data sample distribution (train/test): Make Update (10/37); Setup Printer (10/13); Shutdown Computer (13/14); Software Recommendation (17/40); None (3/5). Thus, it includes 162 samples in total. The samples were gathered directly from the AskUbuntu platform. Only questions with the highest scores and upvotes were considered. For the task of mapping the correct intent to each question, Amazon Mechanical Turk was employed. Beyond the questions labelled with their intent, this dataset also contains some extra information such as the author, the page URL of the question, entities, the answer, and the answer's author. It is worth noting that none of these data were used in the experiments.

The WebApplication dataset comprises 89 text samples of eight intents with the following distribution (train/test): Change Password (2/6); Delete Account (7/10); Download Video (1/0); Export Data (2/3); Filter Spam (6/14); Find Alternative (7/16); Sync Accounts (3/6); None (2/4).

The 20NewsGroups dataset was originally collected by Ken Lang. It comprises 20 categories (for details please see Table 7 in the Appendix) and contains 18,846 text samples in total, split into train (11,314 samples) and test (7,532 samples) sets. The dataset comes prepackaged with the scikit-learn library for Python.
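The relative metrics described at the start of this subsection can be computed directly from measured quantities. The helper below is a hypothetical illustration of that bookkeeping; the particular timing calls and the way memory is sized (sparse feature buffers plus a pickled model) are our assumptions, not the paper's measurement code.

```python
import time
import pickle
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.naive_bayes import BernoulliNB

def measure(clf, x_train, y_train, x_test):
    """Return (train time, test time, memory) for one classifier on one feature set."""
    t0 = time.perf_counter()
    clf.fit(x_train, y_train)
    train_time = time.perf_counter() - t0
    t0 = time.perf_counter()
    clf.predict(x_test)
    test_time = time.perf_counter() - t0
    # Memory: train/test feature matrices plus the serialized trained model.
    memory = x_train.data.nbytes + x_test.data.nbytes + len(pickle.dumps(clf))
    return train_time, test_time, memory

def relative_metrics(candidate, reference):
    """Speed-ups and memory reduction of `candidate` relative to the n-gram `reference`."""
    return {"train_speedup": reference[0] / candidate[0],
            "test_speedup": reference[1] / candidate[1],
            "memory_reduction": reference[2] / candidate[2]}

# Toy demo: a compact "embedded" feature set vs. a wide sparse n-gram reference.
y = np.random.randint(0, 2, 400)
ngram_feats = sparse_random(400, 20000, density=0.01, format="csr")
compact_feats = sparse_random(400, 1000, density=0.5, format="csr")
ref = measure(BernoulliNB(), ngram_feats[:300], y[:300], ngram_feats[300:])
cand = measure(BernoulliNB(), compact_feats[:300], y[:300], compact_feats[300:])
print(relative_metrics(cand, ref))
```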
4 METHODS

4.1 CONVENTIONAL n-GRAM STATISTICS

A vector s stores the n-gram statistics of an input text D. D consists of symbols from an alphabet of size a; the i-th position in s keeps the counter of the corresponding n-gram A_i = ⟨S_1, S_2, ..., S_n⟩ from the set A of all unique n-grams; S_j corresponds to the symbol in the j-th position of A_i. The dimensionality of s equals the total number of n-grams in A, which is a^n. Usually, s is obtained via a single pass through D using an overlapping sliding window of size n. The value of the position in s (i.e., the counter) corresponding to the n-gram observed in the current window is incremented by one. In other words, s summarizes how many times each n-gram in A was observed in D.

1 It is worth noting that the speed-ups reported in Section 5 do not include the time it takes to obtain the corresponding HD vectors. Please see the discussion of this issue in Section 6.
2 Under the Creative Commons CC BY-SA 3.0 license: https://github.com/sebischair/NLU-Evaluation-Corpora

4.2 WORD EMBEDDINGS WITH SUBWORD INFORMATION

Work by Bojanowski et al. (2017) demonstrated that word representations can be formed by learning character n-grams, which are then summed up to represent words. This method (FastText) has an advantage over conventional word embeddings since unseen words can be better approximated, as it is highly likely that some of their n-gram subwords have already appeared in other words. Therefore, each word w is represented as a bag of its character n-grams. Special boundary symbols "<" and ">" are added at the beginning and the end of each word, and the word w itself is added to the set of its n-grams, to learn a representation for each word along with its character n-grams. Taking the word have and n = 3 as an example, the set of n-grams is {<ha, hav, ave, ve>, <have>}. Formally, for a given word w, N_w ⊂ {1, ..., G} denotes the set of n-grams appearing in w, where G is the size of the n-gram dictionary. Each n-gram g has an associated vector representation z_g, and the word w is represented as the sum of the vector representations of its n-grams. A scoring function is defined between a word w, represented as the set of its n-grams, and a context word c as $s(w, c) = \sum_{g \in N_w} z_g^{\top} v_c$, where v_c is the vector representation of the context word c. Practically, a word is represented by its index in the word dictionary and the set of n-grams it contains.
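The two representations described in 4.1 and 4.2 are easy to make concrete. The sketch below builds the conventional character n-gram counts with a sliding window and extracts FastText-style subword n-grams for a single word; it is our own illustration, not the authors' implementation.

```python
from collections import Counter

def ngram_statistics(text, n):
    """Count all overlapping character n-grams of `text` (Section 4.1)."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def subword_ngrams(word, n):
    """FastText-style subword set: boundary-marked n-grams plus the full word (Section 4.2)."""
    marked = f"<{word}>"
    grams = {marked[i:i + n] for i in range(len(marked) - n + 1)}
    grams.add(marked)                      # the special sequence for the whole word
    return grams

print(ngram_statistics("hello world", 3).most_common(3))
print(subword_ngrams("have", 3))           # {'<ha', 'hav', 'ave', 've>', '<have>'}
```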
This paper introduces a technique to project n-gram statistics vectors into a lower-dimensional space in order to improve memory efficiency and lower training time. The paper is motivated by the important problem of trying to improve the efficiency of existing language models, which can be extremely resource intensive. The authors then compare the performance of n-gram statistics with HD vectors on 4 datasets to demonstrate that embedding into HD vectors can preserve performance while reducing resource utilization.
SP:20b1c29036733dee134d3dfa245e574aa82b7d3d
This paper proposes the use of hyperdimensional (HD) vectors to represent n-gram statistics. The HD vectors are first generated from the whole corpus. Then, they are aggregated or bundled into a vector for each sample, which serves as the input for classifier training. The evaluation is conducted on four datasets: Chatbot, AskUbuntu, WebApplication and 20 News Groups, using a range of classifiers including kNN, Random Forest, MLP, etc.
Towards Controllable and Interpretable Face Completion via Structure-Aware and Frequency-Oriented Attentive GANs
Face completion is a challenging conditional image synthesis task . This paper proposes controllable and interpretable high-resolution and fast face completion by learning generative adversarial networks ( GANs ) progressively from low resolution to high resolution . We present structure-aware and frequency-oriented attentive GANs . The proposed structure-aware component leverages off-the-shelf facial landmark detectors and proposes a simple yet effective method of integrating the detected landmarks in generative learning . It facilitates facial expression transfer together with facial attributes control , and helps regularize the structural consistency in progressive training . The proposed frequency-oriented attentive module ( FOAM ) encourages GANs to attend much more to finer details in the coarse-to-fine progressive training , thus enabling progressive attention to face structures . The learned FOAMs show a strong pattern of switching their attention from low-frequency to high-frequency signals . In experiments , the proposed method is tested on the CelebA-HQ benchmark . Experiment results show that our approach outperforms state-of-the-art face completion methods . The proposed method is also fast with mean inference time of 0.54 seconds for images at 1024× 1024 resolution ( using a Titan Xp GPU ) . 1 INTRODUCTION . Conditional image synthesis aims to learn the underlying distribution governing the data generation with respect to the given conditions/context , which is also able to synthesize novel content . Much progress ( Iizuka et al. , 2017 ; Yeh et al. , 2017 ; Li et al. , 2017 ; Yang et al. , 2016 ; Denton et al. , 2016 ; Pathak et al. , 2016 ; Yu et al. , 2018 ; Liu et al. , 2018 ; Brock et al. , 2018 ; Karras et al. , 2018 ) has been made since the generative adversarial networks ( GANs ) were proposed ( Goodfellow et al. , 2014 ) . Despite the recent remarkable progress , learning controllable and interpretable GANs for high-fidelity image synthesis at high resolutions remain an open problem . We are interested in controllable and interpretable GANs . We take a step forward by focusing on high-resolution and fast face completion tasks in this paper . Face completion is to replace target regions , either missing or unwanted , of face images with synthetic content so that the completed images look natural , realistic , and appealing . State-of-the-art face completion approaches using GANs largely focus on generating random realistic content . However , users may want to complete the missing parts with certain properties ( e.g . expressions ) . Controllability is entailed . Existing face completion approaches are usually only able to complete faces at relatively low resolutions ( e.g . 176 × 216 ( Iizuka et al. , 2017 ) and 256 × 256 ( Yu et al. , 2018 ) ) . To facilitate high-resolution image synthesis , the training methodology of growing GANs progressively ( Karras et al. , 2017 ) is widely used . For face completion tasks , one issue of applying progressive training is how to avoid distorting the learned coarse structures when the network is growing to a higher resolution . Interpretability is thus entailed to guide GANs in the coarse-to-fine pipeline . In addition , most existing approaches ( Iizuka et al. , 2017 ; Yeh et al. , 2017 ; Li et al. , 2017 ) require post-processing ( e.g . Poisson Blending ( Pérez et al. , 2003 ) ) , complex inference process ( e.g . thousands of optimization iterations ( Yeh et al. 
, 2017 ) or repeatedly feeding an incomplete image to CNNs at multiple scales ( Yang et al. , 2016 ) ) during test . We present structure-aware and frequency-oriented attentive GANs that are progressively trained for high-resolution and fast face completion using a fast single forward step in inference without any post-processing . By controllable , it means that the completed face images can have different facial attributes ( e.g. , smiling vs not smiling ) and/or facial expressions transferred from a given source actor . By interpretable , it means that the coarse-to-fine generation process in progressive training is rationalized . We utilize facial landmarks as backbone guidance of face structures and propose a straightforward method of integrating them in our system . We design a novel Frequency-Oriented Attention Module ( FOAM ) to induce the model to attend to finer details ( i.e . higher-frequency content , see Fig . 1 ) . We observe significant improvement of the completion quality by the FOAM against the exactly same system only without FOAM . A conditional version of our network is designed so that the appearance properties ( e.g . male or female ) , and facial expressions of the synthesized faces can be controlled . Moreover , we design a set of loss functions inducing the network to blend the synthesized content with the contexts in a realistic way . Our method was compared with state-of-the-art approaches on a high-resolution face dataset CelebA-HQ ( Karras et al. , 2017 ) . Both the evaluations and a pilot user study showed that our approach completed face images significantly more naturally than existing methods . 2 RELATED WORK . Recent learning based methods have shown the capability of CNNs to complete large missing content . Based on existing GANs , the Context Encoder ( CE ) ( Pathak et al. , 2016 ) encodes the contexts of masked images to latent representations , and then decodes them to natural content images , which are pasted into the original contexts for completion . However , the synthesized content of CE is often blurry and has inconsistent boundaries . Given a trained generative model , Yeh et al . ( Yeh et al. , 2017 ) propose a framework to find the most plausible latent representations of contexts to complete masked images . The Generative Face Completion model ( GFC ) ( Li et al. , 2017 ) and the Global and Local Consistent model ( GL ) ( Iizuka et al. , 2017 ) use both global and local discriminators , combined with post-processing , to complete images more coherently . Built on GL , Yu et al . ( Yu et al. , 2018 ) design a contextual attention layer ( CTX ) to help the model borrow contextual information from distant locations . Liu et al . ( Liu et al. , 2018 ) incorporates partial convolutions to handle irregular masks . Unfortunately , these approaches can only complete face images in relatively low resolutions ( e.g . 176× 216 ( Iizuka et al. , 2017 ) and 256× 256 ( Yu et al. , 2018 ) ) . Yang et al . ( Yang et al. , 2016 ) combine a global content network and a texture network , and the networks are trained at multiple scales repeatedly to complete high-resolution images ( 512× 512 ) . But , they assume that the missing content always shares some similar textures with the context , which is improbable for the face completion task . 3 THE PROPOSED METHOD 3.1 PROBLEM FORMULATION , Denote by Λ an image lattice ( e.g. , 1024×1024 pixels ) . Let IΛ be a face color image defined on the lattice Λ. 
Denote by Λ_t and Λ_ctx the target region to complete and the remaining context region, respectively (note that the target region is not necessarily a single connected component, and the two parts form a partition of the lattice). I_{Λt} is masked out with the same gray pixel value. Let M_Λ be a binary mask image with all pixels in M_{Λt} being 1 and all pixels in M_{Λctx} being 0. For simplicity, we will omit the subscripts Λ, Λ_t and Λ_ctx when the context is clear. Unlike existing approaches (Pathak et al., 2016; Li et al., 2017; Iizuka et al., 2017), which first utilize unconditional image synthesis to generate the target region and then blend it with the context using sophisticated post-processing, we address the completion problem as a coherent conditional image generation process. As illustrated in Fig. 2, given an observed image I^{obs} with the target region I^{obs}_{Λt} masked out from a ground-truth uncorrupted image I^{gt}, the objective of the proposed face completion is to synthesize an image I^{syn} that looks natural and realistic, and to enable a controllable generation process in terms of a given facial attribute vector, denoted by A (such as male vs. female and smiling vs. not smiling; for simplicity we use a binary attribute vector in this paper), and/or a given facial expression encoded by facial landmarks, denoted by L. Denote by X^G = (I^{obs}, M, A, L) the input of the generator G(·) that realizes the completion. We have
$$I^{syn} = G(X^G; \theta_G), \quad \text{subject to } I^{syn}_{\Lambda_{ctx}} \approx I^{obs}_{\Lambda_{ctx}}, \quad (1)$$
where θ_G collects all parameters of the generator and ≈ represents that the two context regions I^{syn}_{Λctx} and I^{obs}_{Λctx} need to be kept very similar.

Structure-Aware Completion. As illustrated in Fig. 3 (left), to enable transferring facial expressions in completion, we leverage the off-the-shelf state-of-the-art facial landmark detector, the Face Alignment Network (FAN) (Bulat & Tzimiropoulos, 2017), which achieved very good results for faces in the wild. Motivated by this, we also want to integrate the landmark information in completion for faces for which no facial expression transfer is required. Recent works (Isola et al., 2016; Wang et al., 2017; Zhu et al., 2017; Sangkloy et al., 2017; Xian et al., 2017; Chen & Hays, 2018) have shown the capability of GANs to translate sketches to photo-realistic images. We choose facial landmarks as an abstract representation of face structures in general. As illustrated in Fig. 3 (right), we first train a simple face completion model at the resolution of 256 × 256 using the reconstruction loss (Section 3.3) only. Given an image,1 we use this trained model to generate a blurry completed image from which the landmarks are extracted with FAN (we observed that FAN can compute sufficiently good landmarks from blurry completed images). Not only can this unify the generation process for different controllable settings (since the inputs to the generator are kept the same with and without facial expression transfer), but it also makes the completion process structure-aware. Since faces have very regular structures (e.g., the eyes are always above the nose), when some facial components are occluded, it is possible to predict which parts are missing. Given a corrupted image, the quality of the synthesized image can be further improved if the model is able to "draw" a sketch of the face first, which provides backbone guidance for image completion.

1 The coarse completion model is only needed for testing. In training, we can extract landmarks from uncorrupted face images at the same resolution.
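To make the formulation concrete, the sketch below assembles the generator input X^G = (I^obs, M, A, L) and enforces the context constraint by compositing the synthesized target region back onto the observed context. It is a schematic PyTorch illustration under our own tensor-layout assumptions (NCHW images, the attribute vector broadcast to a spatial map), not the authors' code.

```python
import torch

def make_generator_input(i_obs, mask, attributes, landmarks):
    """Stack observed image, mask, broadcast attributes and landmark map along channels."""
    b, _, h, w = i_obs.shape
    attr_map = attributes.view(b, -1, 1, 1).expand(b, attributes.shape[1], h, w)
    return torch.cat([i_obs, mask, attr_map, landmarks], dim=1)

def composite(i_syn, i_obs, mask):
    """Keep the observed context; take the synthesized content only in the target region."""
    return mask * i_syn + (1.0 - mask) * i_obs

# Dummy shapes: batch of 2 RGB images at 256x256, 1-channel mask/landmark map, 4 attributes.
i_obs = torch.rand(2, 3, 256, 256)
mask = (torch.rand(2, 1, 256, 256) > 0.7).float()
attrs = torch.randint(0, 2, (2, 4)).float()
landmarks = torch.rand(2, 1, 256, 256)
x_g = make_generator_input(i_obs, mask, attrs, landmarks)
print(x_g.shape)   # torch.Size([2, 9, 256, 256])
```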
3.2 LEARNING WITH THE FOAM BETWEEN PROGRESSIVE STAGES

On top of GANs (Goodfellow et al., 2014), the framework of the Context Encoder (CE) (Pathak et al., 2016) is adopted, so the generation process of our model is conditioned on the contextual information. The framework of training GANs progressively (Karras et al., 2017) is also adopted to facilitate a high-resolution completion model. Training starts with the lowest resolution (such as 4 × 4). After running a certain number of iterations, higher resolution layers are added to both the generator and the discriminator simultaneously until the network is grown to the desired resolution (such as 1024 × 1024). We present details of the proposed FOAM to stabilize and rationalize the progressive training. Denote by G_r and D_r the generator and discriminator at resolution level r, respectively, where r ∈ {1, ..., R} is the index of the resolution (e.g., r = 1 represents 4 × 4 and r = R = 9 represents 1024 × 1024). The final-stage generator G_R will be used as the generator G in Eqn. 1 in testing. The observed masked image, its corresponding binary mask, and the facial landmarks are re-sized to I^{obs}_r, M_r and L_r for each resolution, respectively. In our model, both G_r and D_r are conditioned on facial landmarks. We attach the resolution index to the input and rewrite Eqn. 1 as
$$I^{syn}_r = G_r(X^{G}_r; \theta_{G_r}), \quad \text{subject to } I^{syn}_{r, \Lambda_{ctx}} \approx I^{obs}_{r, \Lambda_{ctx}}, \quad (2)$$
where X^{G}_r = (I^{obs}_r, M_r, A, L_r). For the discriminator D_r, its input is X^{D}_r = (I_r, L_r), where I_r represents either an uncorrupted image or an image synthesized by G_r. D_r has two branches which share a common backbone and predict the fake vs. real classification and the attribute estimation, respectively. The loss functions for training are defined in Section 3.3.

During progressive training, to avoid sudden changes to the trained parameters of G_{r−1}, the added layers (i.e., the higher resolution components) need to be faded into the networks smoothly during a growing stage. Since the parameters of the added layers are initialized randomly, these layers may generate noise that distorts the coarser structures learned by G_{r−1} if they are merged with G_{r−1} directly. To reduce this effect, Karras et al. (2017) use a linear combination of the higher and lower resolution branches. The synthesized image Î^{syn} is computed by
$$\hat{I}^{syn} = \alpha I^{syn}_r + (1 - \alpha)\, \tilde{I}^{syn}_{r-1}, \quad (3)$$
in which I^{syn}_r and Ĩ^{syn}_{r−1} are the output images from the higher and lower resolution branches, respectively (Ĩ^{syn}_{r−1} is up-sampled from I^{syn}_{r−1} to match resolution r). α is a weight increasing linearly from zero to one during the growing stage. Therefore, at the beginning, the added layers have no impact on the network. During training, the influence of the higher resolution branch increases linearly while the weight of the lower resolution branch decreases. In the end, when α = 1, the synthesized image depends only on the higher resolution branch (i.e., Î^{syn} = I^{syn}_r) and the lower resolution branch can simply be removed. Because of this, once the training is complete, a corrupted image only needs to be fed to a single branch for image completion, and this process does not depend on any inputs or networks of lower resolutions.
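The fade-in rule of Eqn. (3) is simple to write down. The snippet below is a minimal sketch of how the two branch outputs would be blended while α grows from 0 to 1 during a growing stage; the nearest-neighbor upsampling and the ramp schedule are our own assumptions.

```python
import torch
import torch.nn.functional as F

def fade_in(i_syn_r, i_syn_prev, alpha):
    """Eqn. (3): blend the new high-resolution branch with the upsampled previous stage."""
    i_syn_prev_up = F.interpolate(i_syn_prev, size=i_syn_r.shape[-2:], mode="nearest")
    return alpha * i_syn_r + (1.0 - alpha) * i_syn_prev_up

# alpha typically ramps linearly with the number of images seen in the growing stage.
i_syn_r = torch.rand(1, 3, 64, 64)       # output of the newly added 64x64 layers
i_syn_prev = torch.rand(1, 3, 32, 32)    # output of the trained 32x32 generator
for n_seen, n_total in [(0, 10), (5, 10), (10, 10)]:
    alpha = n_seen / n_total
    out = fade_in(i_syn_r, i_syn_prev, alpha)
    print(alpha, out.shape)
```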
The FOAM. Eqn. 3 is equivalent to applying "all-pass filters" to the higher and lower resolution branches, since all the pixels in the images are assigned the same weight (i.e., α or 1 − α) regardless of their locations. Although this linear combination (Eqn. 3) has been shown to be effective for reducing the impact of noise generated during the growing stage, we observe that it does not work well for high-resolution face completion, as shown in Fig. 4. The coarse structures that have been learned well at lower resolutions are still vulnerable to being distorted during the joint training (i.e., 0 < α < 1). The intuitive idea of the proposed FOAM is to encourage the generator to focus more on learning finer details during the growing stage, which is enabled by changing the "all-pass filters" reflected in Eqn. 3 to attentive "band-pass filters" that learn to protect what has been learned well in the previous stages and to update finer details as needed under the guidance of the loss functions. Existing approaches (Gregor et al., 2015; Yu et al., 2018) use spatial attention mechanisms to encourage networks to attend to selected parts of images (e.g., a rectangular region). As illustrated in Fig. 1, we observe that the FOAM filters indeed act like "band-pass filters" and show a strong pattern of switching their attention from coarse structures (i.e., the low-frequency information) to finer details (i.e., the high-frequency information) as the resolution increases. We note, however, that unlike regular band-pass filters, the filters learned by FOAM are predicted based on image semantics through the objective function (see Equation 14). This makes them sensitive to locations inferred on-the-fly in a coarse-to-fine manner. For instance, the model learns to pay more attention to eye regions, where rich details aggregate, especially at high resolutions. With the help of FOAM, the model is capable of learning meaningful and interpretable filters automatically.

As illustrated in Fig. 5, the proposed FOAM consists of a read and a write operation. In the read operation, only information that is important in I^{obs}_r but does not exist in I^{obs}_{r−1} is allowed to enter the network. Similarly, in the write operation, only when the added layers produce information that helps reduce the overall loss is it allowed to be added to the synthesized image Î^{syn}. The read and write operations, which act like two gates in a circuit, are controlled by the read and write filters learned by our model, respectively (denoted by F_read and F_write). F_read is predicted from the lower resolution branch and computed by
$$F_{read} = \text{ToFilter}\big(G^{fixed}_{r-1}(X^{G}_{r-1})\big), \quad (4)$$
using a trained generator G^{fixed}_{r−1} with fixed weights and a small trainable network ToFilter. Similarly, F_write is predicted from the last feature maps of the higher resolution branch. The values in the filters represent the weights. F_read helps extract the most valuable information in the contexts of I^{obs}_r and I^{obs}_{r−1}. The read operation is implemented by
$$\hat{I}^{obs}_r = F_{read} \odot (1 - M_r) \odot I^{obs}_r, \qquad \hat{I}^{obs}_{r-1} = \text{Downsample}\big((1 - F_{read}) \odot (1 - M_r) \odot \tilde{I}^{obs}_{r-1}\big), \quad (5)$$
where ⊙ denotes element-wise multiplication and Ĩ^{obs}_{r−1} is up-sampled from I^{obs}_{r−1} to match the resolution of level r. Similar to Eqn. 3, F_read and (1 − F_read) are assigned to the higher and lower resolution branches, respectively.
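A schematic version of the read operation (Eqns. 4-5) is sketched below. The stand-ins for the frozen lower-resolution generator and the small ToFilter network, the sigmoid used to keep filter values in [0, 1], and the average-pooling downsampler are all our own assumptions, not the paper's architecture.

```python
import torch
import torch.nn.functional as F

def foam_read(i_obs_r, i_obs_prev_up, mask_r, g_prev_fixed, to_filter, x_g_prev):
    """FOAM read: split the context between the higher and lower resolution branches."""
    f_read = torch.sigmoid(to_filter(g_prev_fixed(x_g_prev)))   # Eqn. (4), values in [0, 1]
    context = 1.0 - mask_r                                       # 1 on the context region
    i_hat_obs_r = f_read * context * i_obs_r                     # Eqn. (5), higher branch
    i_hat_obs_prev = F.avg_pool2d((1.0 - f_read) * context * i_obs_prev_up, kernel_size=2)
    return i_hat_obs_r, i_hat_obs_prev

# Dummy stand-ins just to exercise the shapes (the real modules are trained networks).
g_prev_fixed = lambda x: F.interpolate(x[:, :3], scale_factor=2.0, mode="nearest")
to_filter = lambda img: img.mean(dim=1, keepdim=True)
i_obs_r = torch.rand(1, 3, 64, 64)
i_obs_prev_up = F.interpolate(torch.rand(1, 3, 32, 32), size=(64, 64))
mask_r = (torch.rand(1, 1, 64, 64) > 0.7).float()
x_g_prev = torch.rand(1, 6, 32, 32)
print([t.shape for t in foam_read(i_obs_r, i_obs_prev_up, mask_r,
                                  g_prev_fixed, to_filter, x_g_prev)])
```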
$I^{syn}_r$ and the up-sampled $\tilde{I}^{syn}_{r-1}$ ) to generate the final completed image $\hat{I}^{syn}_r$. $F_{write}$ helps extract the most valuable information in the contexts of $I^{syn}_r$ and $\tilde{I}^{syn}_{r-1}$. The write operation is defined by
$$\hat{I}^{syn}_r = \big( I^{syn}_r \cdot \alpha + \tilde{I}^{syn}_{r-1} \cdot (1-\alpha) \big) \odot (1 - M_r) + \big( F_{write} \odot I^{syn}_r + (1 - F_{write}) \odot \tilde{I}^{syn}_{r-1} \big) \odot M_r , \quad (6)$$
so only the target region of $\hat{I}^{syn}_r$ is controlled by $F_{write}$; the context region is a linear combination of the contexts of $I^{syn}_r$ and $\tilde{I}^{syn}_{r-1}$. To facilitate fast face completion in testing, we further design transformation functions to adjust the value ranges of $F_{read}$ and $F_{write}$, so the lower resolution branches and FOAMs can both be safely removed when the growing process is done. Similar to the vanilla progressive training method, a testing image only needs to go through the final stage for completion. To that end, a transformation function (Eqn. 7) is used to adjust the upper and lower bounds of the dynamic value ranges of the read and write filters. For instance, the transformed $\hat{F}_{read}$ starts as an all-zero filter, is adjusted by the trainable ToFilter during the growing stages, and eventually increases to all ones. The transformed filters $\hat{F}_{read}$ and $\hat{F}_{write}$ are defined by
$$\hat{F}_{read} = \beta \cdot F_{read} + \gamma , \qquad \hat{F}_{write} = \beta \cdot F_{write} + \gamma , \quad (7)$$
where the parameters are computed by
$$\beta = \begin{cases} 2\alpha , & \alpha \le 0.5 \\ 2 - 2\alpha , & 0.5 < \alpha \le 1.0 \end{cases} \qquad \gamma = \begin{cases} 0 , & \alpha \le 0.5 \\ 2\alpha - 1 , & 0.5 < \alpha \le 1.0 \end{cases} \quad (8)$$
in which $\alpha$ is a weight increasing linearly from zero to one in proportion to the number of images seen during growing. Eqn. 7 is used inside the read operation (Eqn. 5) and the write operation (Eqn. 6).
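The transformation in Eqns. 7-8 and the write operation in Eqn. 6 reduce to a handful of element-wise tensor operations. The sketch below is a minimal PyTorch-style illustration rather than the authors' implementation; the function names, the `alpha` argument, and the assumption that all inputs have already been resized to resolution r are ours.

```python
import torch

def transform_filter(F, alpha):
    """Eqns. 7-8: rescale a FOAM filter so it starts all-zero (alpha = 0),
    equals the raw predicted filter at alpha = 0.5, and becomes all-one at alpha = 1."""
    if alpha <= 0.5:
        beta, gamma = 2.0 * alpha, 0.0
    else:
        beta, gamma = 2.0 - 2.0 * alpha, 2.0 * alpha - 1.0
    return beta * F + gamma

def write_op(I_syn_r, I_syn_up, F_write_hat, M_r, alpha):
    """Eqn. 6: blend the higher- and lower-resolution branch outputs.
    I_syn_r     : output of the higher-resolution branch, (B, C, H, W)
    I_syn_up    : lower-resolution output up-sampled to resolution r
    F_write_hat : transformed write filter, values in [0, 1]
    M_r         : binary mask, 1 inside the target region, 0 in the context
    """
    context = (alpha * I_syn_r + (1.0 - alpha) * I_syn_up) * (1.0 - M_r)
    target = (F_write_hat * I_syn_r + (1.0 - F_write_hat) * I_syn_up) * M_r
    return context + target
```

At α = 0 the transformed filter is all zeros, so the freshly added layers contribute nothing to the target region; at α = 1 it is all ones, so the lower-resolution branch and the FOAM itself can be dropped at test time, which is exactly what enables the single-branch, single-pass inference described above.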
This paper proposes a face completion network that synthesizes the missing parts of face images with GANs. Face completion becomes controllable because facial landmarks and facial attributes are both used as conditional inputs to the synthesis. Moreover, the proposed Frequency-Oriented Attention Module (FOAM) enables an interpretable coarse-to-fine progressive generative process. The proposed method shows significant improvement in completion quality.
Towards Controllable and Interpretable Face Completion via Structure-Aware and Frequency-Oriented Attentive GANs
Face completion is a challenging conditional image synthesis task . This paper proposes controllable and interpretable high-resolution and fast face completion by learning generative adversarial networks ( GANs ) progressively from low resolution to high resolution . We present structure-aware and frequency-oriented attentive GANs . The proposed structure-aware component leverages off-the-shelf facial landmark detectors and proposes a simple yet effective method of integrating the detected landmarks in generative learning . It facilitates facial expression transfer together with facial attributes control , and helps regularize the structural consistency in progressive training . The proposed frequency-oriented attentive module ( FOAM ) encourages GANs to attend much more to finer details in the coarse-to-fine progressive training , thus enabling progressive attention to face structures . The learned FOAMs show a strong pattern of switching their attention from low-frequency to high-frequency signals . In experiments , the proposed method is tested on the CelebA-HQ benchmark . Experiment results show that our approach outperforms state-of-the-art face completion methods . The proposed method is also fast with mean inference time of 0.54 seconds for images at 1024× 1024 resolution ( using a Titan Xp GPU ) . 1 INTRODUCTION . Conditional image synthesis aims to learn the underlying distribution governing the data generation with respect to the given conditions/context , which is also able to synthesize novel content . Much progress ( Iizuka et al. , 2017 ; Yeh et al. , 2017 ; Li et al. , 2017 ; Yang et al. , 2016 ; Denton et al. , 2016 ; Pathak et al. , 2016 ; Yu et al. , 2018 ; Liu et al. , 2018 ; Brock et al. , 2018 ; Karras et al. , 2018 ) has been made since the generative adversarial networks ( GANs ) were proposed ( Goodfellow et al. , 2014 ) . Despite the recent remarkable progress , learning controllable and interpretable GANs for high-fidelity image synthesis at high resolutions remain an open problem . We are interested in controllable and interpretable GANs . We take a step forward by focusing on high-resolution and fast face completion tasks in this paper . Face completion is to replace target regions , either missing or unwanted , of face images with synthetic content so that the completed images look natural , realistic , and appealing . State-of-the-art face completion approaches using GANs largely focus on generating random realistic content . However , users may want to complete the missing parts with certain properties ( e.g . expressions ) . Controllability is entailed . Existing face completion approaches are usually only able to complete faces at relatively low resolutions ( e.g . 176 × 216 ( Iizuka et al. , 2017 ) and 256 × 256 ( Yu et al. , 2018 ) ) . To facilitate high-resolution image synthesis , the training methodology of growing GANs progressively ( Karras et al. , 2017 ) is widely used . For face completion tasks , one issue of applying progressive training is how to avoid distorting the learned coarse structures when the network is growing to a higher resolution . Interpretability is thus entailed to guide GANs in the coarse-to-fine pipeline . In addition , most existing approaches ( Iizuka et al. , 2017 ; Yeh et al. , 2017 ; Li et al. , 2017 ) require post-processing ( e.g . Poisson Blending ( Pérez et al. , 2003 ) ) , complex inference process ( e.g . thousands of optimization iterations ( Yeh et al. 
, 2017 ) or repeatedly feeding an incomplete image to CNNs at multiple scales ( Yang et al. , 2016 ) ) during test . We present structure-aware and frequency-oriented attentive GANs that are progressively trained for high-resolution and fast face completion using a fast single forward step in inference without any post-processing . By controllable , it means that the completed face images can have different facial attributes ( e.g. , smiling vs not smiling ) and/or facial expressions transferred from a given source actor . By interpretable , it means that the coarse-to-fine generation process in progressive training is rationalized . We utilize facial landmarks as backbone guidance of face structures and propose a straightforward method of integrating them in our system . We design a novel Frequency-Oriented Attention Module ( FOAM ) to induce the model to attend to finer details ( i.e . higher-frequency content , see Fig . 1 ) . We observe significant improvement of the completion quality by the FOAM against the exactly same system only without FOAM . A conditional version of our network is designed so that the appearance properties ( e.g . male or female ) , and facial expressions of the synthesized faces can be controlled . Moreover , we design a set of loss functions inducing the network to blend the synthesized content with the contexts in a realistic way . Our method was compared with state-of-the-art approaches on a high-resolution face dataset CelebA-HQ ( Karras et al. , 2017 ) . Both the evaluations and a pilot user study showed that our approach completed face images significantly more naturally than existing methods . 2 RELATED WORK . Recent learning based methods have shown the capability of CNNs to complete large missing content . Based on existing GANs , the Context Encoder ( CE ) ( Pathak et al. , 2016 ) encodes the contexts of masked images to latent representations , and then decodes them to natural content images , which are pasted into the original contexts for completion . However , the synthesized content of CE is often blurry and has inconsistent boundaries . Given a trained generative model , Yeh et al . ( Yeh et al. , 2017 ) propose a framework to find the most plausible latent representations of contexts to complete masked images . The Generative Face Completion model ( GFC ) ( Li et al. , 2017 ) and the Global and Local Consistent model ( GL ) ( Iizuka et al. , 2017 ) use both global and local discriminators , combined with post-processing , to complete images more coherently . Built on GL , Yu et al . ( Yu et al. , 2018 ) design a contextual attention layer ( CTX ) to help the model borrow contextual information from distant locations . Liu et al . ( Liu et al. , 2018 ) incorporates partial convolutions to handle irregular masks . Unfortunately , these approaches can only complete face images in relatively low resolutions ( e.g . 176× 216 ( Iizuka et al. , 2017 ) and 256× 256 ( Yu et al. , 2018 ) ) . Yang et al . ( Yang et al. , 2016 ) combine a global content network and a texture network , and the networks are trained at multiple scales repeatedly to complete high-resolution images ( 512× 512 ) . But , they assume that the missing content always shares some similar textures with the context , which is improbable for the face completion task . 3 THE PROPOSED METHOD 3.1 PROBLEM FORMULATION , Denote by Λ an image lattice ( e.g. , 1024×1024 pixels ) . Let IΛ be a face color image defined on the lattice Λ. 
Denote by Λt and Λctx the target region to complete and the remaining context region respectively ( note that the target region is not necessarily a single connected component , and the two parts form a partition of the lattice ) . IΛt is masked out with the same gray pixel value . LetMΛ be a binary mask image with all pixels inMΛt being 1 and pixels inMΛctx being 0 . For simplicity , we will omit the subscripts Λ , Λt and Λctx when the text context is clear . Unlike existing approaches ( Pathak et al. , 2016 ; Li et al. , 2017 ; Iizuka et al. , 2017 ) which first utilize unconditional image synthesis to generate the target region image and then blend them with context using using sophisticated post-processing , we address the completion problem as a coherent conditional image generation process . As illustrated in Fig . 2 , given an observed image Iobs with the target region IobsΛt masked out from a ground-truth uncorrupted image Igt , the objective of the proposed face completion is to synthesize an image Isyn that looks natural and realistic , and to enable a controllable generation process in terms of a given facial attribute vector , denoted by A ( such as male vs female , and smiling vs not smiling and for simplicity we use binary attribute vector in this paper ) and/or a given facial expression encoded by facial landmark , denoted by L. Denote by XG = ( Iobs , M , A , L ) the input of the generator G ( · ) that realizes the completion . We have , Isyn = G ( XG ; θG ) , subject to I syn Λctx ≈ IobsΛctx , ( 1 ) where θG collects all parameters of the generator and ≈ represents that the two context regions IsynΛctx and IobsΛctx need to be kept very similar . Structure-Aware Completion . As illustrated in Fig . 3 ( left ) , to enable transferring facial expressions in completion , we leverage the off-the-shelf state-of-the-art facial landmark detector , Face Alignment Network ( FAN ) ( Bulat & Tzimiropoulos , 2017 ) which achieved very good results for faces in the wild . Motivated by this , we also want to integrate the landmark information in completion for faces without facial expression trans- fer required . Recent works ( Isola et al. , 2016 ; Wang et al. , 2017 ; Zhu et al. , 2017 ; Sangkloy et al. , 2017 ; Xian et al. , 2017 ; Chen & Hays , 2018 ) have shown the capability of GANs to translate sketches to photo-realistic images . We choose facial landmarks as an abstract representation of face structures in general . As illustrated in Fig . 3 ( right ) , we first train a simple face completion model at the resolution of 256 × 256 using reconstruction loss ( Section 3.3 ) only . Given an image 1 , we use the trained model to generate a blurry completed image from which the landmarks are extracted with FAN ( we observed that FAN can compute sufficiently good landmarks from blurry completed images ) . Not only can this unify the generation process for different controllable settings ( since the inputs to the generator are kept the same between with and without facial expression transfer ) , 1The coarse completion model is only needed for testing . In training , we can extract landmarks from uncorrupted face images at the same resolution . but it also makes the completion process structure-aware . Since faces have very regular structures ( e.g . the eyes are always above a nose ) , when some facial components are occluded , it is possible to predict which parts are missing . 
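Eqn. 1 amounts to masking out the target region, stacking the conditioning signals into the generator input, and penalizing any deviation of the synthesized context from the observed one. A minimal sketch of that data path follows; the gray fill value, the channel-wise broadcasting of the attribute vector A, the landmark input being a spatial map, and the use of an L1 penalty for the "≈" constraint are our assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def build_generator_input(I_gt, M, A, L, fill=0.5):
    """I_gt : (B, 3, H, W) ground-truth image in [0, 1]
    M    : (B, 1, H, W) binary mask, 1 on the target region
    A    : (B, K) binary attribute vector, broadcast to spatial maps
    L    : (B, 1, H, W) facial-landmark map used as structural guidance"""
    I_obs = I_gt * (1.0 - M) + fill * M            # mask out the target region
    A_map = A[:, :, None, None].expand(-1, -1, I_gt.shape[2], I_gt.shape[3])
    return torch.cat([I_obs, M, A_map, L], dim=1)  # X^G = (I_obs, M, A, L)

def context_consistency_loss(I_syn, I_obs, M):
    """Penalize differences on the context region only (the '≈' constraint in Eqn. 1)."""
    ctx = 1.0 - M
    return F.l1_loss(I_syn * ctx, I_obs * ctx)
```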
This paper proposes controllable, interpretable, high-resolution, and fast face completion by learning generative adversarial networks (GANs) progressively from low resolution to high resolution. It combines masks, facial landmarks, and corrupted images as inputs to generate completed images at high resolution. The proposed frequency-oriented attentive module (FOAM) encourages the GANs to attend more to finer details in the coarse-to-fine progressive training, thus enabling progressive attention to face structures.
CLAREL: classification via retrieval loss for zero-shot learning
1 INTRODUCTION . Deep learning-based approaches have demonstrated superior flexibility and generalization capabilities in information processing on a wide variety of tasks , such as vision , speech and language ( LeCun et al. , 2015 ) . However , it has been widely realized that the transfer of deep representations to real-world applications is challenging due to the typical reliance on massive hand-labeled datasets . Learning in the low-labeled data regime , especially in the zero-shot ( Wang et al. , 2019 ) and the few-shot ( Wang & Yao , 2019 ) setups , have recently received significant attention in the literature . In the problem of zero-shot learning ( ZSL ) , the objective is to recognize categories that have not been seen during the training ( Larochelle et al. , 2008 ) . This is typically done by relying on anchor embeddings learned in one modality as prototypes and by associating a query embedding from the other modality with the closest prototype . In the generalized ZSL ( GZSL ) case ( Xian et al. , 2018c ) , the objective is more challenging as recognition is performed in the joint space of seen and unseen categories . ZSL , as well as its generalized counterpart , provide a viable framework to learn cross-modal representations that are flexible and adaptive . For example , in this paradigm , the adaptation to a new classification task based on text/image representation space alignment could be as easy as defining/appending/modifying a set of text sentences to define classes of new classifiers . This is an especially relevant problem as machine learning is challenged with the long tail of classes , and the idea of learning from pairs of images and sentences , abundant on the web , looks like a natural solution . Therefore , in this paper we specifically target the fine-grained scenario of paired images and their respective text descriptions . The uniqueness of this scenario is in the fact that the co-occurance of image and text provides a rich source of information . The ways of leveraging this source have not been sufficiently explored in the context of GZSL . Although we focus exclusively on the GZSL recognition setup in this paper , we believe that the research in this direction has potential to enable zero-shot flexibility in a wider array of high-level tasks such as segmentation or conditional image generation ( Zhang et al. , 2018 ) . The contributions of this work can be characterized under the following two themes . Instance-based training loss . Most prominent zero-shot learning approaches rely heavily on classlevel modality alignment ( Xian et al. , 2018c ) . We propose a new composite loss function that balances instance-based pairwise image/text retrieval loss and the usual classifier loss . The retrieval loss term does not use class labels . We demonstrate that the class-level information is important , but in the fine-grained text/image pairing scenarios , most of the GZSL accuracy can be extracted from the instance-based retrieval loss . To the best of our knowledge , this type of training has not been used in the GZSL literature . Its impressive performance opens up new promising research directions . Metric space rescaling . Metric-based ZSL approaches rely on distances between prototypes and query embeddings during inference . They are known to suffer from imbalanced performance on seen and unseen classes ( Liu et al. , 2018 ) . Previous work proposed to use a heuristic trick , calibrated stacking ( Chao et al. 
, 2016 ) or calibration ( Das & Lee , 2019 ) , to solve the problem . We refer to this technique as metric rescaling in our work , and provide a sound probabilistic justification for it . 2 PROPOSED METHOD . In this paper , we specifically target the fine-grained visual description scenario , as defined by Reed et al . ( 2016 ) . In this setting , the dataset consists of a number of images from a given set of classes and each image is accompanied by a number of textual descriptions . The task is to learn a joint representation space for images and texts that can be used for zero-shot recognition . An instance of the zero-shot multimodal representation learning problem can then be defined as follows . Given a training set S = { ( vn , tn , yn ) | vn ∈ V , tn ∈ T , yn ∈ Y , n = 1 . . . N } of image , text and label tuples , we are interested in finding representations fφ : V → Z of image , parameterized by φ , and fθ : T → Z of text , parameterized by θ , in a common embedding space Z . Furthermore , GZSL problem is defined using the sets of seen Ytr and unseen Yts classes , such that Y = Ytr ∪ Yts and Ytr∩Yts = ∅ . The training set will then only contain the seen classes , i.e . Str = { ( vn , tn , yn ) | vn ∈ V , tn ∈ T , yn ∈ Ytr } and the task is to build a classifier function g : Z ×Z → Y . This is different from the ZSL scenario focusing on g : Z × Z → Yts . To build g , most approaches to joint representation learning rely on class labeling to train a representation . For example , all the methods reviewed by Xian et al . ( 2018c ) require the access to class labels at train time . We hypothesise that in the fine-grained learning scenario , such as the one described by Reed et al . ( 2016 ) , a lot of information can be extracted simply from pairwise image/text cooccurrences . The class labels really only become critically necessary when we define class prototypes , i.e . at zero-shot test time . Following this intuition , we define a composite loss function that relies both on the pairwise relationships and on the class labels . The high-level description of the proposed framework is depicted in Figure 1 . The framework enables us , among other things , to experiment with the effects of train-time availability of class labels on the quality of zero-shot representations . The framework is based on projecting texts and images into a common space and then learning a representation based on a mixture of four loss functions : a pairwise text retrieval loss , a pairwise image retrieval loss , a text classifier loss and an image classifier loss ( see Algorithm 1 ) . Algorithm 1 Loss calculation for a single optimization iteration of the proposed method . N is the number of instances in the training set Str , B is the number of instances per batch , C is the number of classes in the train set . RANDOMSAMPLE ( S , B ) denotes a set of B elements chosen uniformly at random from a set S , without replacement . Input : Training set Str = { ( v1 , t1 , y1 ) , . . . , ( vN , tN , yN ) } , λ ∈ [ 0 , 1 ] , κ ∈ [ 0 , 1 ] . Output : The loss J ( φ , θ ) for a randomly sampled training batch . I ← RANDOMSAMPLE ( { 1 , . . . , N } , B ) . Select B instance indices for batch JTC ( θ ) , JIC ( φ ) ← 0 , 0 . Initialize classification losses for i in I do zvi , zti ← fφ ( vi ) , fθ ( ti ) . Embed images and texts pI ← softmax ( WIzvi + bI ) . Image classifier probabilities pT ← softmax ( WT zti + bT ) . Text classifier probabilities JTC ( θ ) ← JTC ( θ ) + 1B crossentropy ( pT , yi ) . 
Text classification loss. $J_{IC}(\phi) \leftarrow J_{IC}(\phi) + \tfrac{1}{B}\,\mathrm{crossentropy}(p_I, y_i)$. Image classification loss. end for. $J_{TR}(\phi,\theta), J_{IR}(\phi,\theta) \leftarrow 0, 0$. Initialize retrieval losses. for $i$ in $I$ do: $J_{TR}(\phi,\theta) \leftarrow J_{TR}(\phi,\theta) + \tfrac{1}{B}\big( d(z_{v_i}, z_{t_i}) + \log \sum_{j \in I} \exp(-d(z_{v_i}, z_{t_j})) \big)$. Text retrieval loss. $J_{IR}(\phi,\theta) \leftarrow J_{IR}(\phi,\theta) + \tfrac{1}{B}\big( d(z_{v_i}, z_{t_i}) + \log \sum_{j \in I} \exp(-d(z_{t_i}, z_{v_j})) \big)$. Image retrieval loss. end for. $J(\phi,\theta) \leftarrow \lambda J_{TR}(\phi,\theta) + (1-\lambda) J_{IR}(\phi,\theta)$. Add retrieval loss to the total loss. $J(\phi,\theta) \leftarrow (1-\kappa) J(\phi,\theta) + \tfrac{\kappa}{2}\big( J_{TC}(\theta) + J_{IC}(\phi) \big)$. Add classification loss to the total loss. 2.1 RETRIEVAL LOSS FUNCTION. The pairwise cross-modal loss function is based solely on the pairwise relationships between texts and images. We choose to use the metric learning approach to capture the relationship between images and texts. Now, suppose $d$ is a metric $d : Z \times Z \to \mathbb{R}_+$, $v_i$ is an image and $\tau = \{t_{j'}\}$ is a collection of arbitrary texts sampled uniformly at random, of which text $t_j$ belongs to $v_i$. We propose the following model for the probability of image $v_i$ and text $t_j$ to belong to the same object instance:
$$p_{\phi,\theta}(i = j \mid v_i, t_j, \tau) = \frac{\exp\big(-d(f_\phi(v_i), f_\theta(t_j))\big)}{\sum_{t_{j'} \in \tau} \exp\big(-d(f_\phi(v_i), f_\theta(t_{j'}))\big)} . \quad (1)$$
The learning is then based on the following cross-entropy loss defined on the batch of size $B$:
$$J_{TR}(\phi,\theta) = -\frac{1}{B} \sum_{i,j=1}^{B} \ell_{i,j} \log p_{\phi,\theta}\big(i = j \mid v_i, t_j, \{t_{j'}\}_{j'=1}^{B}\big) , \quad (2)$$
where $\ell_{i,j}$ is a binary indicator of the true match ($\ell_{i,j} = 1$ if $i = j$ and 0 otherwise). Note that the expression above has the interpretation of the text retrieval loss. It attains its smallest value when, for each image in the batch, we manage to assign probability 1 to its respective text and 0 to all other texts. This can be further expanded as:
$$J_{TR}(\phi,\theta) = \frac{1}{B} \sum_{i=1}^{B} \Big( d(f_\phi(v_i), f_\theta(t_i)) + \log \Big[ \sum_{t_{j'} \in \tau} \exp\big(-d(f_\phi(v_i), f_\theta(t_{j'}))\big) \Big] \Big) . \quad (3)$$
Exchanging the order of image and text in the probability model (1) leads to the image retrieval loss, $J_{IR}(\phi,\theta)$. The two losses are mixed using parameter $\lambda \in [0, 1]$ as shown in Algorithm 1. The pairwise retrieval loss functions are responsible for the modality alignment. In addition to those, we propose to include, as mentioned above, the usual image and text classifier losses. These losses are responsible for reducing the intraclass variability of representations. The classifier losses are added to the retrieval losses using a mixing parameter $\kappa \in [0, 1]$ as shown in Algorithm 1.
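Because Eqn. 3 is a cross-entropy over negated distances with the matching pair on the diagonal, Algorithm 1 can be written in a few lines of a framework such as PyTorch. The following is a sketch under our own assumptions (Euclidean distance for d, linear classifier heads passed in as modules, and pre-computed batch embeddings); it is not the authors' code.

```python
import torch
import torch.nn.functional as F

def clarel_loss(z_v, z_t, y, W_I, W_T, lam=0.5, kappa=0.5):
    """z_v, z_t : (B, D) image / text embeddings f_phi(v_i), f_theta(t_i)
    y          : (B,) class labels for the batch
    W_I, W_T   : linear classifier heads for images / texts
    lam, kappa : the mixing weights lambda and kappa from Algorithm 1"""
    B = z_v.shape[0]
    target = torch.arange(B, device=z_v.device)

    dist = torch.cdist(z_v, z_t)                 # (B, B) pairwise distances d(z_vi, z_tj)
    j_tr = F.cross_entropy(-dist, target)        # text retrieval loss, Eqns. 2-3
    j_ir = F.cross_entropy(-dist.t(), target)    # image retrieval loss (roles swapped)

    j_tc = F.cross_entropy(W_T(z_t), y)          # text classification loss
    j_ic = F.cross_entropy(W_I(z_v), y)          # image classification loss

    j = lam * j_tr + (1.0 - lam) * j_ir
    return (1.0 - kappa) * j + (kappa / 2.0) * (j_tc + j_ic)
```

Writing the retrieval terms as cross-entropies over the negated distance matrix makes the equivalence with Eqn. 3 explicit: the diagonal entries correspond to the matching image/text pairs, and all other batch entries act as in-batch negatives.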
This paper tackles zero-shot and generalized zero-shot learning by using per-image semantic information. An instance-based loss is introduced to align images and their corresponding text in the same embedding space. To address the severe imbalance between seen and unseen classes in generalized zero-shot learning, the authors propose to scale the prediction scores of seen classes by a constant factor. They demonstrate their technical contributions on the CUB and Flowers datasets, where the results achieve state-of-the-art performance.
The paper proposes to use four different losses to train a joint text-image embedding space for zero-shot learning. The four losses consist of a classification loss given text descriptions, a classification loss given images, and two contrastive losses given pairs of text and images. The paper also discusses how to balance seen and unseen classes: empirically, embeddings for seen classes are closer together while embeddings for unseen classes are further apart. A scaling factor that makes these distances comparable for seen and unseen classes is introduced, and the paper gives a probabilistic justification for this scaling. The final performance on the CUB and Flowers datasets is impressive.
GNN-FiLM: Graph Neural Networks with Feature-wise Linear Modulation
1 INTRODUCTION . Learning from graph-structured data has seen explosive growth over the last few years , as graphs are a convenient formalism to model the broad class of data that has objects ( treated as vertices ) with some known relationships ( treated as edges ) . Example usages include reasoning about physical and biological systems , knowledge bases , computer programs , and relational reasoning in computer vision tasks . This graph construction is a highly complex form of feature engineering , mapping the knowledge of a domain expert into a graph structure which can be consumed and exploited by high-capacity neural network models . Many neural graph learning methods can be summarised as neural message passing ( Gilmer et al. , 2017 ) : nodes are initialised with some representation and then exchange information by transforming their current state ( in practice with a single linear layer ) and sending it as a message to all neighbours in the graph . At each node , messages are aggregated in some way and then used to update the associated node representation . In this setting , the message is entirely determined by the source node ( and potentially the edge type ) and the target node is not taken into consideration . A ( partial ) exception to this is the family of Graph Attention Networks ( Veličković et al. , 2018 ) , where the agreement between source and target representation of an edge is used to determine the weight of the message in an attention architecture . However , this weight is applied to all dimensions of the message at the same time . A simple consequence of this observation may be to simply compute messages from the pair of source and target node state . However , the linear layer commonly used to compute messages would only allow additive interactions between the representations of source and target nodes . More complex transformation functions are often impractical , as computation in GNN implementations is dominated by the message transformation function . However , this need for non-trivial interaction between different information sources is a common problem in neural network design . A recent trend has been the use of hypernetworks ( Ha et al. , 2017 ) , neural networks that compute the weights of other networks . In this setting , interaction between two signal sources is achieved by using one of them as the input to a hypernetwork and the other as input to the computed network . While an intellectually pleasing approach , it is often impractical because the prediction of weights of non-trivial neural networks is computationally expensive . Approaches to mitigate this exist ( e.g. , Wu et al . ( 2019 ) handle this in natural language processing ) , but are often domain-specific . A more general mitigation method is to restrict the structure of the computed network . Recently , “ feature-wise linear modulations ” ( FiLM ) were introduced in the visual question answering domain ( Perez et al. , 2017 ) . Here , the hypernetwork is fed with an encoding of a question and produces an element-wise affine function that is applied to the features extracted from a picture . This can be adapted to the graph message passing domain by using the representation of the target node to compute the affine function . This compromise between expressiveness and computational feasibility has been very effective in some domains and the results presented in this article indicate that it is also a good fit for the graph domain . 
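For concreteness, the "element-wise affine function" computed by the FiLM hypernetwork is simply a per-feature scale and shift predicted from the conditioning input. The snippet below is our own minimal illustration of that mechanism, with arbitrary layer sizes; it is not taken from the paper.

```python
import torch
import torch.nn as nn

class FiLM(nn.Module):
    """Predict a per-feature affine transform (gamma, beta) from a conditioning
    vector and apply it element-wise to the modulated features."""
    def __init__(self, cond_dim, feat_dim):
        super().__init__()
        self.hyper = nn.Linear(cond_dim, 2 * feat_dim)  # the hypernetwork

    def forward(self, features, condition):
        gamma, beta = self.hyper(condition).chunk(2, dim=-1)
        return gamma * features + beta
```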
This article explores the use of hypernetworks in learning on graphs . Sect . 2 first reviews existing GNN models from the related work to identify commonalities and differences . This involves generalising a number of existing formalisms to new formulations that are able to handle graphs with different types of edges , which are often used to model different relationship between vertices . Then , two new formalisms are introduced : Relational Graph Dynamic Convolutional Networks ( RGDCN ) , which dynamically compute the neural message passing function as a linear layer , and Graph Neural Networks with Feature-wise Linear Modulation ( GNN-FiLM ) , which combine learned message passing functions with dynamically computed element-wise affine transformations . In Sect . 3 , a range of baselines are compared in extensive experiments on three tasks from the literature , spanning classification , regression and ranking tasks on small and large graphs . Experiments were performed on re-implementations of existing model architectures in the same framework and hyperparameter setting searches were performed with the same computational budgets across all architectures . The results show that differences between baselines are smaller than the literature suggests and that the new FiLM model performs well on a number of interesting tasks . 2 MODEL . Notation . Let L be a finite ( usually small ) set of edge types . Then , a directed graph G = ( V , E ) has nodes V and typed edges E ⊆ V × L × V , where ( u , ` , v ) ∈ E denotes an edge from node u to node v of type ` , usually written as u →̀ v. Graph Neural Networks . As discussed above , Graph Neural Networks operate by propagating information along the edges of a given graph . Concretely , each node v is associated with an initial representation h ( 0 ) v ( for example obtained from the label of that node , or by some other model component ) . Then , a GNN layer updates the node representations using the node representations of its neighbours in the graph , yielding representations h ( 1 ) v . This process can be unrolled through time by repeatedly applying the same update function , yielding representations h ( 2 ) v . . .h ( T ) v . Alternatively , several GNN layers can be stacked , which is intuitively similar to unrolling through time , but increases the GNN capacity by using different parameters for each timestep . In Gated Graph Neural Networks ( GGNN ) ( Li et al. , 2016 ) , the update rule uses one linear layer W ` per edge type ` to compute messages and combines the aggregated messages with the current representation of a node using a recurrent unit r ( e.g. , GRU or LSTM cells ) , yielding the following definition . h ( t+1 ) v = r ( h ( t ) v , ∑ u→̀v∈E W ` h ( t ) u ; θr ) ( 1 ) The learnable parameters of the model are the edge-type-dependent weights W ` and the recurrent cell parameters θr . In Relational Graph Convolutional Networks ( R-GCN ) ( Schlichtkrull et al. , 2018 ) , the gated unit is replaced by a simple non-linearity σ ( e.g. , the hyperbolic tangent ) . h ( t+1 ) v = σ ( ∑ u→̀v∈E 1 cv , ` ·W ` h ( t ) u ) ( 2 ) Here , cv , ` is a normalisation factor usually set to the number of edges of type ` ending in v. The learnable parameters of the model are the edge-type-dependent weights W ` . It is important to note that in this setting , the edge type set L is assumed to contain a special edge type 0 for self-loops v 0→ v , allowing state associated with a node to be kept . In Graph Attention Networks ( GAT ) ( Veličković et al. 
, 2018 ) , new node representations are computed from a weighted sum of neighbouring node representations . The model can be generalised from the original definitional to support different edge types as follows ( we will call this R-GAT below ) .1 eu , ` , v = LeakyReLU ( α ` · ( W ` h ( t ) u ‖W ` h ( t ) v ) ) av = softmax ( eu , ` , v | u →̀ v ∈ E ) h ( t+1 ) v = σ ( ∑ u→̀v∈E ( av ) u→̀v ·W ` h ( t ) u ) ( 3 ) Here , α ` is a learnable row vector used to weigh different feature dimensions in the computation of an attention ( “ relevance ” ) score of the node representations , x‖y is the concatenation of vectors x and y , and ( av ) u→̀v refers to the weight computed by the softmax for that edge . The learnable parameters of the model are the edge-type-dependent weights W ` and the attention parameters α ` . In practice , GATs usually employ several attention heads that independently implement the mechanism above in parallel , using separate learnable parameters . The results of the different attention heads are then concatenated after each propagation round to yield the value of h ( t+1 ) v . More recently , Xu et al . ( 2019 ) analysed the expressiveness of different GNN types , comparing their ability to distinguish similar graphs with the Weisfeiler-Lehman ( WL ) graph isomorphism test . Their results show that GCNs and the GraphSAGE model Hamilton et al . ( 2017 ) are strictly weaker than the WL test and hence they developed Graph Isomorphism Networks ( GIN ) ( Xu et al. , 2019 ) , which are indeed as powerful as the WL test . While the GIN definition is limited to a single edge type , Corollary 6 of Xu et al . ( 2019 ) shows that using the definition h ( t+1 ) v = ϕ ( ( 1 + ) · f ( h ( t ) v ) + ∑ u→v∈E f ( h ( t ) u ) ) , there are choices for , ϕ and f such that the node representation update is sufficient for the overall network to be as powerful as the WL test . In the setting of different edge types , the function f in the sum over neighbouring nodes needs to reflect different edge types to distinguish graphs such as v 1→ u 2← w and v 2→ u 1← w from each other . Using different functions f ` for different edge types makes it possible to unify the use of the current node representation h ( t ) v with the use of neighbouring node representations by again using a fresh edge type 0 for self-loops v 0→ v. In that setting , the factor ( 1 + ) can be integrated into f0 . Finally , following an argument similar to Xu et al . ( 2019 ) , ϕ and f at subsequent layers can be “ merged ” into a single function which can be approximated by a multilayer perceptron ( MLP ) , yielding the final R-GIN definition h ( t+1 ) v = σ ( ∑ u→̀v∈E MLP ( h ( t ) u ; θ ` ) ) . ( 4 ) The learnable parameters here are the edge-specific weights θ ` . Note that Eq . ( 4 ) is very similar to the definition of R-GCNs ( Eq . ( 2 ) ) , only dropping the normalisation factor 1cv , ` and replacing linear layers by an MLP . While many more GNN variants exist , the four formalisms above are broadly representative of general trends . It is notable that in all of these models , the information passed from one node to another is based on the learned weights and the representation of the source of an edge . In contrast , the representation of the target of an edge is only updated ( in the GGNN case Eq . ( 1 ) ) , treated as another incoming message ( in the R-GCN case Eq . ( 2 ) and the R-GIN case Eq . ( 4 ) ) , or used to weight the relevance of an edge ( in the R-GAT case Eq . ( 3 ) ) . 
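The R-GCN update in Eq. (2), and with a linear layer swapped for an MLP the R-GIN update in Eq. (4), can be realized by transforming source states per edge type and scatter-adding them into the targets. The sketch below is our own minimal edge-list implementation, not code from the paper; it assumes each edge type ℓ is given as a pair of (source, target) index tensors and uses the per-type in-degree as the normalization constant c_{v,ℓ}.

```python
import torch
import torch.nn as nn

class RGCNLayer(nn.Module):
    def __init__(self, dim, num_edge_types):
        super().__init__()
        # one weight matrix W_l per edge type (type 0 reserved for self-loops)
        self.W = nn.ModuleList(nn.Linear(dim, dim, bias=False)
                               for _ in range(num_edge_types))

    def forward(self, h, edges):
        """h     : (N, dim) current node states h_v^(t)
        edges : list with one (src, dst) LongTensor pair per edge type"""
        out = torch.zeros_like(h)
        for l, (src, dst) in enumerate(edges):
            msg = self.W[l](h[src])                                   # W_l h_u per edge
            deg = torch.bincount(dst, minlength=h.shape[0]).clamp(min=1)
            out.index_add_(0, dst, msg / deg[dst].unsqueeze(-1).float())
        return torch.tanh(out)                                        # the non-linearity sigma
```

Replacing `self.W[l]` with a small per-type MLP (and dropping the degree normalization) turns this into the R-GIN variant of Eq. (4).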
Sometimes unnamed GNN variants of the above are used (e.g., by Selsam et al. (2019); Paliwal et al. (2019)), replacing the linear layers that compute the messages for each edge by MLPs applied to the concatenation of the representations of source and target nodes. In the experiments, this will be called GNN-MLP, formally defined as follows:
$$h^{(t+1)}_v = \sigma \Big( \sum_{u \,\xrightarrow{\ell}\, v \,\in E} \tfrac{1}{c_{v,\ell}} \cdot \mathrm{MLP}\big( h^{(t)}_u \,\|\, h^{(t)}_v ;\, \theta_\ell \big) \Big) \quad (5)$$
Below, we will instantiate the MLP with a single linear layer to obtain what we call GNN-MLP0, which only differs from R-GCNs (Eq. (2)) in that the message passing function is applied to the concatenation of source and target state. (Note that the R-GAT generalization above is similar to the ARGAT model presented by Busbridge et al. (2019), but unlike the models studied there, and like the original GATs, it uses a single linear layer to compute the attention scores $e_{u,\ell,v}$ instead of simpler additive or multiplicative variants.)
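The excerpt above stops before the GNN-FiLM update itself is stated. The idea described earlier, namely that the target node's representation parameterizes an element-wise affine modulation of each incoming, edge-type-specific message, can nevertheless be sketched as follows. This is offered only as one plausible reading under our own assumptions (per-type hypernetworks, placement of the non-linearity, shared hidden size), not as the authors' definition.

```python
import torch
import torch.nn as nn

class GNNFiLMLayer(nn.Module):
    def __init__(self, dim, num_edge_types):
        super().__init__()
        self.W = nn.ModuleList(nn.Linear(dim, dim, bias=False)
                               for _ in range(num_edge_types))
        # hypernetworks g_l: target state -> (gamma, beta) for that edge type
        self.film = nn.ModuleList(nn.Linear(dim, 2 * dim)
                                  for _ in range(num_edge_types))

    def forward(self, h, edges):
        out = torch.zeros_like(h)
        for l, (src, dst) in enumerate(edges):
            gamma, beta = self.film[l](h[dst]).chunk(2, dim=-1)  # computed from targets
            msg = gamma * self.W[l](h[src]) + beta               # FiLM-modulated message
            out.index_add_(0, dst, msg)
        return torch.tanh(out)
```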
This paper introduces a new type of Graph Neural Network (GNN) that incorporates Feature-wise Linear Modulation (FiLM) layers. Current GNNs update target representations by aggregating information from neighbouring nodes without taking the target node representation into account. Since graph networks might benefit from such target-source interactions, this work proposes to use FiLM layers to let the target node modulate the source node representations. The authors thoroughly evaluate the new architecture, called GNN-FiLM, on several graph benchmarks, including Citeseer, PPI, QM9, and VarMisuse. The proposed network outperforms the other methods on QM9 and is on par with them on the other benchmarks.
SP:19147e08c4a7343bee155f8e74362d8214bb35e1
GNN-FiLM: Graph Neural Networks with Feature-wise Linear Modulation
1 INTRODUCTION . Learning from graph-structured data has seen explosive growth over the last few years , as graphs are a convenient formalism to model the broad class of data that has objects ( treated as vertices ) with some known relationships ( treated as edges ) . Example usages include reasoning about physical and biological systems , knowledge bases , computer programs , and relational reasoning in computer vision tasks . This graph construction is a highly complex form of feature engineering , mapping the knowledge of a domain expert into a graph structure which can be consumed and exploited by high-capacity neural network models . Many neural graph learning methods can be summarised as neural message passing ( Gilmer et al. , 2017 ) : nodes are initialised with some representation and then exchange information by transforming their current state ( in practice with a single linear layer ) and sending it as a message to all neighbours in the graph . At each node , messages are aggregated in some way and then used to update the associated node representation . In this setting , the message is entirely determined by the source node ( and potentially the edge type ) and the target node is not taken into consideration . A ( partial ) exception to this is the family of Graph Attention Networks ( Veličković et al. , 2018 ) , where the agreement between source and target representation of an edge is used to determine the weight of the message in an attention architecture . However , this weight is applied to all dimensions of the message at the same time . A simple consequence of this observation may be to simply compute messages from the pair of source and target node state . However , the linear layer commonly used to compute messages would only allow additive interactions between the representations of source and target nodes . More complex transformation functions are often impractical , as computation in GNN implementations is dominated by the message transformation function . However , this need for non-trivial interaction between different information sources is a common problem in neural network design . A recent trend has been the use of hypernetworks ( Ha et al. , 2017 ) , neural networks that compute the weights of other networks . In this setting , interaction between two signal sources is achieved by using one of them as the input to a hypernetwork and the other as input to the computed network . While an intellectually pleasing approach , it is often impractical because the prediction of weights of non-trivial neural networks is computationally expensive . Approaches to mitigate this exist ( e.g. , Wu et al . ( 2019 ) handle this in natural language processing ) , but are often domain-specific . A more general mitigation method is to restrict the structure of the computed network . Recently , “ feature-wise linear modulations ” ( FiLM ) were introduced in the visual question answering domain ( Perez et al. , 2017 ) . Here , the hypernetwork is fed with an encoding of a question and produces an element-wise affine function that is applied to the features extracted from a picture . This can be adapted to the graph message passing domain by using the representation of the target node to compute the affine function . This compromise between expressiveness and computational feasibility has been very effective in some domains and the results presented in this article indicate that it is also a good fit for the graph domain . 
This article explores the use of hypernetworks in learning on graphs. Sect. 2 first reviews existing GNN models from the related work to identify commonalities and differences. This involves generalising a number of existing formalisms to new formulations that are able to handle graphs with different types of edges, which are often used to model different relationships between vertices. Then, two new formalisms are introduced: Relational Graph Dynamic Convolutional Networks (RGDCN), which dynamically compute the neural message passing function as a linear layer, and Graph Neural Networks with Feature-wise Linear Modulation (GNN-FiLM), which combine learned message passing functions with dynamically computed element-wise affine transformations. In Sect. 3, a range of baselines are compared in extensive experiments on three tasks from the literature, spanning classification, regression and ranking tasks on small and large graphs. Experiments were performed on re-implementations of existing model architectures in the same framework, and hyperparameter searches were performed with the same computational budgets across all architectures. The results show that differences between baselines are smaller than the literature suggests and that the new FiLM model performs well on a number of interesting tasks.

2 MODEL. Notation. Let $L$ be a finite (usually small) set of edge types. Then, a directed graph $G = (V, E)$ has nodes $V$ and typed edges $E \subseteq V \times L \times V$, where $(u, \ell, v) \in E$ denotes an edge from node $u$ to node $v$ of type $\ell$, usually written as $u \xrightarrow{\ell} v$.

Graph Neural Networks. As discussed above, Graph Neural Networks operate by propagating information along the edges of a given graph. Concretely, each node $v$ is associated with an initial representation $h^{(0)}_v$ (for example obtained from the label of that node, or by some other model component). Then, a GNN layer updates the node representations using the node representations of its neighbours in the graph, yielding representations $h^{(1)}_v$. This process can be unrolled through time by repeatedly applying the same update function, yielding representations $h^{(2)}_v, \dots, h^{(T)}_v$. Alternatively, several GNN layers can be stacked, which is intuitively similar to unrolling through time, but increases the GNN capacity by using different parameters for each timestep.

In Gated Graph Neural Networks (GGNN) (Li et al., 2016), the update rule uses one linear layer $W_\ell$ per edge type $\ell$ to compute messages and combines the aggregated messages with the current representation of a node using a recurrent unit $r$ (e.g., GRU or LSTM cells), yielding the following definition.
$$h^{(t+1)}_v = r\Big(h^{(t)}_v,\ \sum_{u \xrightarrow{\ell} v \in E} W_\ell h^{(t)}_u;\ \theta_r\Big) \qquad (1)$$
The learnable parameters of the model are the edge-type-dependent weights $W_\ell$ and the recurrent cell parameters $\theta_r$.

In Relational Graph Convolutional Networks (R-GCN) (Schlichtkrull et al., 2018), the gated unit is replaced by a simple non-linearity $\sigma$ (e.g., the hyperbolic tangent).
$$h^{(t+1)}_v = \sigma\Big(\sum_{u \xrightarrow{\ell} v \in E} \tfrac{1}{c_{v,\ell}} \cdot W_\ell h^{(t)}_u\Big) \qquad (2)$$
Here, $c_{v,\ell}$ is a normalisation factor usually set to the number of edges of type $\ell$ ending in $v$. The learnable parameters of the model are the edge-type-dependent weights $W_\ell$. It is important to note that in this setting, the edge type set $L$ is assumed to contain a special edge type 0 for self-loops $v \xrightarrow{0} v$, allowing state associated with a node to be kept. In Graph Attention Networks (GAT) (Veličković et al., 2018), new node representations are computed from a weighted sum of neighbouring node representations.
The model can be generalised from the original definition to support different edge types as follows (we will call this R-GAT below).¹
$$e_{u,\ell,v} = \mathrm{LeakyReLU}\big(\alpha_\ell \cdot (W_\ell h^{(t)}_u \,\|\, W_\ell h^{(t)}_v)\big)$$
$$a_v = \mathrm{softmax}\big(e_{u,\ell,v} \mid u \xrightarrow{\ell} v \in E\big)$$
$$h^{(t+1)}_v = \sigma\Big(\sum_{u \xrightarrow{\ell} v \in E} (a_v)_{u \xrightarrow{\ell} v} \cdot W_\ell h^{(t)}_u\Big) \qquad (3)$$
Here, $\alpha_\ell$ is a learnable row vector used to weigh different feature dimensions in the computation of an attention ("relevance") score of the node representations, $x \| y$ is the concatenation of vectors $x$ and $y$, and $(a_v)_{u \xrightarrow{\ell} v}$ refers to the weight computed by the softmax for that edge. The learnable parameters of the model are the edge-type-dependent weights $W_\ell$ and the attention parameters $\alpha_\ell$. In practice, GATs usually employ several attention heads that independently implement the mechanism above in parallel, using separate learnable parameters. The results of the different attention heads are then concatenated after each propagation round to yield the value of $h^{(t+1)}_v$.

More recently, Xu et al. (2019) analysed the expressiveness of different GNN types, comparing their ability to distinguish similar graphs with the Weisfeiler-Lehman (WL) graph isomorphism test. Their results show that GCNs and the GraphSAGE model (Hamilton et al., 2017) are strictly weaker than the WL test, and hence they developed Graph Isomorphism Networks (GIN) (Xu et al., 2019), which are indeed as powerful as the WL test. While the GIN definition is limited to a single edge type, Corollary 6 of Xu et al. (2019) shows that using the definition $h^{(t+1)}_v = \varphi\big((1 + \epsilon) \cdot f(h^{(t)}_v) + \sum_{u \to v \in E} f(h^{(t)}_u)\big)$, there are choices for $\epsilon$, $\varphi$ and $f$ such that the node representation update is sufficient for the overall network to be as powerful as the WL test. In the setting of different edge types, the function $f$ in the sum over neighbouring nodes needs to reflect different edge types to distinguish graphs such as $v \xrightarrow{1} u \xleftarrow{2} w$ and $v \xrightarrow{2} u \xleftarrow{1} w$ from each other. Using different functions $f_\ell$ for different edge types makes it possible to unify the use of the current node representation $h^{(t)}_v$ with the use of neighbouring node representations by again using a fresh edge type 0 for self-loops $v \xrightarrow{0} v$. In that setting, the factor $(1 + \epsilon)$ can be integrated into $f_0$. Finally, following an argument similar to Xu et al. (2019), $\varphi$ and $f$ at subsequent layers can be "merged" into a single function which can be approximated by a multilayer perceptron (MLP), yielding the final R-GIN definition
$$h^{(t+1)}_v = \sigma\Big(\sum_{u \xrightarrow{\ell} v \in E} \mathrm{MLP}(h^{(t)}_u; \theta_\ell)\Big). \qquad (4)$$
The learnable parameters here are the edge-specific weights $\theta_\ell$. Note that Eq. (4) is very similar to the definition of R-GCNs (Eq. (2)), only dropping the normalisation factor $\frac{1}{c_{v,\ell}}$ and replacing linear layers by an MLP.

While many more GNN variants exist, the four formalisms above are broadly representative of general trends. It is notable that in all of these models, the information passed from one node to another is based on the learned weights and the representation of the source of an edge. In contrast, the representation of the target of an edge is only updated (in the GGNN case Eq. (1)), treated as another incoming message (in the R-GCN case Eq. (2) and the R-GIN case Eq. (4)), or used to weight the relevance of an edge (in the R-GAT case Eq. (3)).

¹ Note that this is similar to the ARGAT model presented by Busbridge et al. (2019), but unlike the models studied there (and like the original GATs) it uses a single linear layer to compute attention scores $e_{u,\ell,v}$, instead of simpler additive or multiplicative variants.
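The following is a hedged sketch of the typed-edge message passing these formalisms share, written in the R-GCN style of Eq. (2): one linear message function per edge type, a normalised sum over incoming edges, and a dedicated self-loop edge type 0. The edge-list representation and the simplified normalisation (by total in-degree rather than per edge type) are assumptions made for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class RGCNLayer(nn.Module):
    """Minimal sketch of the R-GCN-style update of Eq. (2) for a typed edge list."""
    def __init__(self, dim, num_edge_types):
        super().__init__()
        # one linear map W_l per edge type (type 0 reserved for self-loops)
        self.W = nn.ModuleList([nn.Linear(dim, dim, bias=False)
                                for _ in range(num_edge_types)])

    def forward(self, h, edges):
        # h: [num_nodes, dim]; edges: list of (source, edge_type, target) triples
        out = torch.zeros_like(h)
        count = torch.zeros(h.shape[0])
        for u, etype, v in edges:
            out[v] += self.W[etype](h[u])     # message from u to v of type etype
            count[v] += 1.0
        # simplified normalisation: total in-degree instead of per-type c_{v,l}
        count = count.clamp(min=1.0).unsqueeze(-1)
        return torch.tanh(out / count)

# toy usage: 3 nodes, 2 "real" edge types plus self-loops (type 0)
h = torch.randn(3, 8)
edges = [(0, 1, 1), (2, 2, 1)] + [(v, 0, v) for v in range(3)]
h_next = RGCNLayer(dim=8, num_edge_types=3)(h, edges)
```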
Sometimes unnamed GNN variants of the above are used (e.g., by Selsam et al. (2019); Paliwal et al. (2019)), replacing the linear layers to compute the messages for each edge by MLPs applied to the concatenation of the representations of source and target nodes. In the experiments, this will be called GNN-MLP, formally defined as follows.
$$h^{(t+1)}_v = \sigma\Big(\sum_{u \xrightarrow{\ell} v \in E} \tfrac{1}{c_{v,\ell}} \cdot \mathrm{MLP}(h^{(t)}_u \,\|\, h^{(t)}_v; \theta_\ell)\Big) \qquad (5)$$
Below, we will instantiate the MLP with a single linear layer to obtain what we call GNN-MLP0, which only differs from R-GCNs (Eq. (2)) in that the message passing function is applied to the concatenation of source and target state.
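For contrast with the R-GCN sketch above, a minimal illustration of the GNN-MLP0 message function: the only change is that a single linear layer is applied to the concatenation of source and target state. Shown for one edge; in practice there would be one such layer per edge type. Names and sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

# GNN-MLP0-style message: a single linear layer over [h_source ; h_target],
# instead of W_l applied to the source state alone (cf. Eq. (2) vs. Eq. (5)).
dim = 8
W = nn.Linear(2 * dim, dim, bias=False)   # one such layer per edge type in practice
h_u, h_v = torch.randn(dim), torch.randn(dim)
message = W(torch.cat([h_u, h_v], dim=-1))
```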
The paper proposes a new Graph Neural Network (GNN) architecture that uses Feature-wise Linear Modulation (FiLM) to condition the source-to-target node message-passing based on the target node representation. In this way, GNN-FiLM aims to allow a GNN's message propagation to "focus on features that are especially relevant for the update of the target node." The authors clearly describe prior GNN architectures, showing that they do not incorporate such forms of message propagation. The authors then describe several intuitive ways of adding such a form of message propagation, before describing why those approaches do not work in practice. Finally, the authors introduce GNN-FiLM, which is computationally reasonable and works well in practice, as evaluated according to several GNN benchmarks. The GNN-FiLM model is also quite simple and elegant, which makes me think it is likely to work on more tasks than the authors experiment on.
SP:19147e08c4a7343bee155f8e74362d8214bb35e1
Universal Safeguarded Learned Convex Optimization with Guaranteed Convergence
1 INTRODUCTION . Solving scientific computing problems often requires application of efficient and scalable optimization algorithms . Despite the ever improving rates of convergence of state-of-the-art general purpose algorithms , their ability to apply to real-time applications is still limited due to the relatively large number of iterations that must be computed . To circumvent this shortcoming , a growing number of researchers use machine learning to develop task-specific algorithms from general-purpose algorithms . For example , inspired by the iterative shrinkage thresholding algorithm ( ISTA ) for solving the LASSO problem , a sparse coding problem , Gregor & LeCun ( 2010 ) proposed to learn the weights in the matrices of the ISTA updates that worked best for a given data set , rather than leave these parameters fixed . They then truncated the method to K iterations , making their Learned ISTA ( LISTA ) algorithm form a K-layer feed-forward neural network . Empirically , their examples showed roughly a 20-fold reduction in computational cost compared to the traditional algorithms . Several related works followed , also demonstrating numerical success ( discussed below ) . While classic optimization results often provide worst-case convergence rates , limited theory exists pertaining to such instances of data drawn from a common distribution ( e.g. , data supported on a lowdimensional manifold ) . As a step toward providing such theory , this work addresses the question : Does there exist a universal method that encompasses all L2O algorithms and generates iterates that approach the solution set with guarantees ? We provide an affirmative answer to this question by prescribing and proving properties of neural networks generated within our L2O framework . Convergence is established by including any choice among several practical safeguarding procedures , including nonmonotone options . Nonmonotone safeguarding enables sequences to traverse portions of the underlying space where the objective function value may increase for a few successive iterations as long as , on average , the sequence approaches the solution set . Although counterintuitive , this ability may lead to faster convergence . Furthermore , we provide a practical guide in our discussion for how practitioners may use our framework to create and apply L2O schemes to their own problems . The theoretical portion of this work is presented in the context of fixed point theory . This is done to be sufficiently general and provide the desired convergence result for the wide class of optimization methods that can be expressed as special cases of the Krasnosel ’ skiı̆-Mann ( KM ) method . For concreteness and ease of application , we then provide the special-case results to several well-known methods ( e.g. , proximal-gradient , Douglas-Rachford splitting , and ADMM ) . Related Works . Learning to learn methods date back decades ( e.g. , see ( Thrun & Pratt , 1998 ) for a survey of earlier works and references ) . A seminal L2O work in the context of sparse coding was by Gregor & LeCun ( 2010 ) . Numerous follow-up papers also demonstrated empirical success at constructing rapid regressors approximating iterative sparse solvers , compression , ` 0 encoding , combining sparse coding with clustering models , nonnegative matrix factorization , compressive sensing MRI , and other applications ( Sprechmann et al. , 2015 ; Wang et al. , 2016a ; b ; c ; d ; Hershey et al. , 2014 ; Yang et al. , 2016 ) . 
A nice summary of unfolded optimization procedures for sparse recovery is given by Ablin et al . ( 2019 ) in Table A.1 . However , the majority of L2O works pertain to sparse coding and provide limited theoretical results . Some works have interpreted LISTA in various ways to provide proofs of different convergence properties ( Giryes et al. , 2018 ; Moreau & Bruna , 2017 ) . Others have investigated structures related to LISTA ( Xin et al. , 2016 ; Blumensath & Davies , 2009 ; Borgerding et al. , 2017 ; Metzler et al. , 2017 ) , providing varying results dependent upon the assumptions made . Chen et al . ( 2018 ) introduced necessary conditions for the LISTA weight structure to asymptotically achieve a linear convergence rate . This was followed by Liu et al . ( 2019a ) , which proved linear convergence of their ALISTA method for the LASSO problem and provided a result stating that , with high probability , the convergence rate of LISTA is at most linear . The mentioned results are useful , yet can require intricate assumptions and proofs specific to the relevant sparse coding problems . L2O works have also taken other approaches . For example , the paper by Li & Malik ( 2016 ) used reinforcement learning with an objective function f and a stochastic policy π∗ that encodes the updates , which takes existing optimization algorithms as special cases . Our work is related to theirs ( cf . Method 1 below and Algorithm 1 in that paper ) , with the distinction that we include safeguarding and work in the fixed point setting . The idea of Andrychowicz et al . ( 2016 ) is to use long short term memory ( LSTM ) units in recurrent neural networks ( RNNs ) . Additional learning approaches have been applied in the discrete setting ( Dai et al. , 2018 ; Li et al. , 2018 ; Bengio et al. , 2018 ) . Balcan et al . ( 2019 ) reveal how many samples are needed for the average algorithm performance on the training set to generalize over the entire distribution . This is practical for choosing training data and may be used for training L2O networks within our framework . Our Contribution . This is the first work to merge ideas from machine learning , safeguarded optimization , and fixed point theory into a general framework for incorporating data-driven updates into iterative convex optimization algorithms . In particular , given a collection of data and an update operator from an established method ( e.g. , ADMM or proximal gradient ) for solving an optimization problem , we present procedures for creating a neural network that can be used to quickly infer solution estimates . The first novelty of this framework is the ability to incorporate several safeguarding procedures in a general setting . The second is that we present a procedure for utilizing machine learning methods to incorporate knowledge from particular data sets . However , our most significant contribution to the L2O literature is to combine these results into a single , general framework for use by practitioners on any convex optimization problem . Outline . We first provide a brief overview of the fixed point setting of this work in Section 2 . Then we present the SKM method and convergence results in Section 3 . The incorporation of the SKM method into a neural network and subsequent training approach is presented in Section 4 . This is followed in Section 5 by numerical examples , discussion in Section 6 , and conclusions in Section 7 . 2 FIXED POINT METHODS . Let H be a finite dimensional Hilbert space ( e.g. 
, the Euclidean space $\mathbb{R}^n$) with inner product $\langle \cdot, \cdot \rangle$ and norm $\|\cdot\|$. Denote the set of fixed points of each operator $T : \mathcal{H} \to \mathcal{H}$ by $\mathrm{Fix}(T) := \{x \in \mathcal{H} : Tx = x\}$. In this work, for an operator $T$ with a nonempty fixed point set (i.e., $\mathrm{Fix}(T) \neq \emptyset$), the primary problem considered is the fixed point problem:
$$\text{Find } x^\star \in \mathrm{Fix}(T). \qquad (1)$$
Convex minimization problems, both constrained and unconstrained, may be equivalently rewritten as the problem (1) for an appropriate mapping $T$. The method chosen for solving the minimization problem determines the operator $T$ in (1) (e.g., see Table 1 below for examples). We focus on the fixed point formulation to provide a general approach, given $T$, for creating a sequence that converges to a solution of (1) and, thus, also of the corresponding optimization problem.

The following definitions will be used in the sequel. A mapping $T : \mathcal{H} \to \mathcal{H}$ is nonexpansive if
$$\|Tx - Ty\| \leq \|x - y\|, \quad \text{for all } x, y \in \mathcal{H}. \qquad (2)$$
An operator $T : \mathcal{H} \to \mathcal{H}$ is $\alpha$-averaged if $\alpha \in (0, 1)$ and there is a nonexpansive operator $Q : \mathcal{H} \to \mathcal{H}$ such that $T = (1 - \alpha)\,\mathrm{Id} + \alpha Q$, where $\mathrm{Id}$ is the identity operator. If the constant $\alpha$ is not important, then $T$ may for brevity be called averaged. We also denote the distance between a point $x \in \mathcal{H}$ and a set $C$ by
$$d_C(x) := \inf\{\|x - y\| : y \in C\}. \qquad (3)$$
Two operators frequently used in optimization are constructed from monotone relations. Letting $\alpha > 0$ and $f : \mathcal{H} \to \mathbb{R}$ be a function, the resolvent of the (possibly) multi-valued subgradient $\partial f$ is
$$J_{\alpha \partial f} := (\mathrm{Id} + \alpha \partial f)^{-1} \qquad (4)$$
and the reflected resolvent of $\partial f$ is
$$R_{\alpha \partial f} := 2 J_{\alpha \partial f} - \mathrm{Id}. \qquad (5)$$
If $f$ is closed, convex, and proper, then the resolvent is precisely the proximal operator, i.e.,
$$J_{\alpha \partial f} = \mathrm{prox}_{\alpha f}(x) := \arg\min_{z \in \mathcal{H}} \ \alpha f(z) + \tfrac{1}{2}\|z - x\|^2. \qquad (6)$$
From these definitions, it can be shown that $R_{\alpha \partial f}$ is nonexpansive and $J_{\alpha \partial f}$ is averaged. Results above may all be found in Bauschke & Combettes (2017) (e.g., see Prop. 4.4, Thm. 20.25, Example 23.3, and Prop. 23.8). See Table 1 for examples of these operators in optimization methods.

A classic theorem states that sequences generated by successively applying an averaged operator converge to a fixed point. This method comes from Krasnosel'skii (1955) and Mann (1953), which yielded adoption of the name Krasnosel'skiı̆-Mann (KM) method. This result is stated below and can be found with various forms and proofs in many works (e.g., see (Bauschke & Combettes, 2017, Thm. 5.14), (Byrne, 2008, Thm. 5.2), (Cegielski, 2012, Thm. 3.5.4), and (Reich, 1979, Thm. 2)).

Theorem 2.1. If an averaged operator $T : \mathcal{H} \to \mathcal{H}$ has a nonempty fixed point set and a sequence $\{x^k\}_{k \in \mathbb{N}}$ with arbitrary $x^1 \in \mathcal{H}$ satisfies the update relation
$$x^{k+1} = T(x^k), \quad \text{for all } k \in \mathbb{N}, \qquad (7)$$
then there is $x^\star \in \mathrm{Fix}(T)$ such that $\{x^k\}_{k \in \mathbb{N}}$ converges to $x^\star$, i.e., $x^k \to x^\star$.

There are pathological cases where the result fails for operators that are only nonexpansive (e.g., when $x^1 \neq 0$ and either $T = -\mathrm{Id}$ or $T$ is a rotation). However, this is easily remedied since any convex combination of a nonexpansive operator with the identity is averaged.

3 SAFEGUARDED KM METHOD. This section generalizes the classic KM iteration in (7). We accomplish this by defining an envelope of operators $T_{L2O}(\cdot\,; \cdot)$. For a parameter $\zeta$ chosen from an appropriate set, we let $T_{L2O}(\cdot\,; \zeta)$ define an operator on $\mathcal{H}$. Changing $\zeta$ may define a new operator with different properties.
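Before turning to the safeguarded variant, a small numerical illustration of the operators and the KM iteration defined in Section 2, on an assumed toy LASSO instance (none of this is from the paper): the proximal-gradient operator built from a gradient step and the prox of the $\ell_1$ norm is averaged for a suitable step size, so the plain iteration $x^{k+1} = T(x^k)$ of Theorem 2.1 drives the residual $x - T(x)$ to zero.

```python
import numpy as np

# Toy LASSO: min_x 0.5*||Ax - b||^2 + lam*||x||_1.
# T = prox_{alpha*lam*||.||_1} o (Id - alpha * A^T(A(.) - b)) is an averaged
# operator for alpha < 2/||A||^2, so the KM iteration x <- T(x) converges to a
# fixed point of T, i.e. a minimizer of the toy objective (Theorem 2.1).
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
lam, alpha = 0.1, 1.0 / np.linalg.norm(A, 2) ** 2

def soft_threshold(z, t):              # prox of t*||.||_1
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def T(x):                              # forward-backward (proximal-gradient) operator
    return soft_threshold(x - alpha * A.T @ (A @ x - b), alpha * lam)

x = np.zeros(10)
for k in range(500):                   # KM iteration x^{k+1} = T(x^k)
    x = T(x)
print("residual ||x - T(x)||:", np.linalg.norm(x - T(x)))
```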
We do not impose restrictions on $T_{L2O}(\cdot\,; \zeta)$ other than it be well-defined, meaning $T_{L2O}(\cdot\,; \zeta)$ may fail to be averaged and/or fail to have a fixed point. This is illustrated by the following two examples.

Example 3.1. Let $Q : \mathcal{H} \to \mathcal{H}$ be nonexpansive. Then define $T_{L2O} : \mathcal{H} \times \mathbb{R} \to \mathcal{H}$ by
$$T_{L2O}(x; \zeta) := (1 - \zeta)x + \zeta Q(x). \qquad (8)$$
For $\zeta \in (0, 1)$, the operator $T_{L2O}(\cdot\,; \zeta)$ defined in (8) is $\zeta$-averaged. Although using $\zeta > 1$ may result in an operator that fails to be averaged, this can be useful in accelerating the convergence of a method (e.g., see (Giselsson et al., 2016)).

Example 3.2. Let $f : \mathcal{H} \to \mathbb{R}$ be closed, convex and proper, and define $T_{L2O} : \mathcal{H} \times (0, \infty) \to \mathcal{H}$ by
$$T_{L2O}(x; \zeta) := \mathrm{prox}_{\zeta f}(x). \qquad (9)$$
For fixed $\zeta \in (0, \infty)$, the operator $T_{L2O}(\cdot\,; \zeta)$ is averaged.

The practicality of $T_{L2O}$ is discussed and demonstrated in Sections 4 and 5, respectively. In the remainder of this work, each operator $T : \mathcal{H} \to \mathcal{H}$ is assumed to be averaged and we set $S := \mathrm{Id} - T$. Our proposed method below is called the Safeguarded Krasnosel'skiı̆-Mann (SKM) Method.

Explanation of the SKM Method is as follows. In Line 1, the initial iterate and parameter $\delta$ are initialized. A common choice for the initial iterate is $x^1 = 0$. The for loop on Lines 2 to 11 generates each update $x^{k+1}$ in the sequence $\{x^k\}_{k \in \mathbb{N}}$. The choice of parameter $\zeta^k$ in Line 3 may be any value that results in a well-defined operator $T_{L2O}(\cdot\,; \zeta^k)$ in Line 5. The choice of $\mu^k$ in Line 4 defines the safeguarding procedure that is used to ensure convergence. Safeguarding is implemented through a descent condition inequality in Line 6. When the inequality in Line 6 holds, $T_{L2O}(x^k; \zeta^k)$ is used to update $x^k$ via Line 7. Otherwise, a KM update is used to update $x^k$ via Line 9. Notice also that the iteration-indexed parameters may all be chosen dynamically (rather than precomputed).

Below are several standard assumptions used to prove our convergence result in Theorem 3.1.

Assumption 1. The operator $T$ is nonexpansive with a nonempty fixed point set.

The following assumption ensures boundedness of sequences generated by the SKM method.

Assumption 2. The operator $S$ is coercive, i.e.,
$$\lim_{\|x\| \to \infty} \|S(x)\| = \infty. \qquad (10)$$
Remark 3.1. Assumption 2 does not hold, in general, for nonexpansive operators. For example, if $T$ is the gradient operator $(\mathrm{Id} - \alpha \nabla f)$ for some $\alpha > 0$ and $f$ is a constant function, then $S(x) = 0$ for all $x \in \mathcal{H}$. However, a minor perturbation to $f$ enables Assumption 2 to hold. In this example, if one fixes a small $\varepsilon > 0$ and sets $\tilde{f}(x) := f(x) + \frac{\varepsilon}{2}\|x\|^2$, then the associated $\tilde{S}$ satisfies $\|\tilde{S}(x)\| = \varepsilon\|x\|$. This idea generalizes and, since this works for arbitrarily small $\varepsilon$, in practice it may be reasonable to assume Assumption 2 holds when applying the SKM Method.

Algorithm 2 Learned SKM (LSKM)
1: Stage 1: Initialization/Training.
2: Choose envelope $T_{L2O}(\cdot\,; \cdot)$ and network structure $\mathcal{C}$, parameterized by $\Theta = (\zeta^k)_{k=1}^K$
3: Choose training loss function $\phi_d$
4: Choose 'optimal' parameter $\Theta^\star \in \arg\min_{\Theta \in \mathcal{C}} \mathbb{E}_{d \sim \mathcal{D}}[\phi_d(x^K)]$, assuming $\mu^k = \infty$ at each layer $k$
5: Choose $\delta$ and safeguarding scheme for $\{\mu^k\}_{k=1}^K$
6: Define the neural network $\mathcal{M} = \mathcal{M}_{\Theta^\star, \delta, \mu^k}$.
7: Stage 2: Inference.
8: For input $d$ return $x = \mathcal{M}(d)$

Assumption 3. If the inequality in Line 6 is satisfied infinitely many times, then the sequence $\{\mu^k\}_{k \in \mathbb{N}}$ converges to zero.

Remark 3.2.
Assumption 3 may be enforced by using various choices that are dependent upon combinations of the previous residuals. This is illustrated by Table 2 and Corollary 3.1 below. Our main convergence result is Theorem 3.1 below (proven in the Appendix).

Theorem 3.1. If $\{x^k\}_{k \in \mathbb{N}}$ is a sequence generated by the SKM method and Assumptions 1 to 3 hold, then
$$\lim_{k \to \infty} d_{\mathrm{Fix}(T)}(x^k) = 0. \qquad (11)$$
And, if $\{x^k\}_{k \in \mathbb{N}}$ contains a single cluster point, then $\{x^k\}_{k \in \mathbb{N}}$ converges to a point $x^\star \in \mathrm{Fix}(T)$.

We propose several methods for choosing the sequence $\{\mu^k\}_{k \in \mathbb{N}}$ in Table 2. These methods are adaptive in the sense that each update to an iterate $\mu^k$ depends upon the current iterate $x^k$ and (possibly) previous iterates. These update schemes enable each $\mu^k$ to trail the value of the residual norm $\|S(x^k)\|$. This implies there may exist $j \in \mathbb{N}$ for which
$$\|S(x^j)\| < \|S(x^{j+1})\| = \|S(T_{L2O}(x^j; \zeta^j))\| \leq (1 - \delta)\mu^j. \qquad (12)$$
Such leniency is desirable since it is possible that, even though the residual norms converge to zero, constructing a sequence of iterates along the quickest route to the solution set requires traversing portions of the underlying space where the residual norms increase for a few successive iterations. The safeguarding schemes in Table 2 are justified by the Corollary below (proven in the Appendix).

Corollary 3.1. If $\{x^k\}_{k \in \mathbb{N}}$ is a sequence generated by the SKM method, Assumptions 1 and 2 hold, and $\{\mu^k\}_{k \in \mathbb{N}}$ is generated using a scheme outlined in Table 2, then Assumption 3 holds and, by Theorem 3.1, the limit (11) holds.
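A hedged sketch of the safeguarding logic described above: the learned update $T_{L2O}(x^k; \zeta^k)$ is accepted only when its residual satisfies the descent-style condition of Line 6, and otherwise the method falls back to the plain KM step. The residual-trailing rule for $\mu^k$ below is one simple choice in the spirit of, but not copied from, Table 2, and the toy operators in the usage example are illustrative assumptions rather than anything from the paper.

```python
import numpy as np

def skm(T, T_l2o, zetas, x0, delta=0.1):
    """Sketch of the Safeguarded KM (SKM) method described above.

    T      : averaged operator (the safe fallback update)
    T_l2o  : candidate operator T_l2o(x, zeta) (may fail to be averaged)
    zetas  : iterable of per-iteration parameters zeta^k
    The safeguard accepts the learned step only when its residual is at most
    (1 - delta) * mu_k, where mu_k trails the residual norm ||S(x^k)||.
    """
    S = lambda x: x - T(x)
    x = x0
    mu = np.linalg.norm(S(x))
    for zeta in zetas:
        y = T_l2o(x, zeta)                           # learned / parameterised update
        if np.linalg.norm(y - T(y)) <= (1.0 - delta) * mu:
            x = y                                    # safeguard condition holds
            mu = max(np.linalg.norm(S(x)), 1e-12)    # mu trails the residual
        else:
            x = T(x)                                 # fall back to the plain KM step
    return x

# toy usage with the averaged operator T(x) = 0.5*x (Fix(T) = {0}) and an
# over-relaxed candidate, as in Example 3.1 with Q(x) = 0 and zeta > 1
T = lambda x: 0.5 * x
T_l2o = lambda x, zeta: (1.0 - zeta) * x
x = skm(T, T_l2o, zetas=[1.5] * 50, x0=np.ones(5))
print("||x|| after SKM:", np.linalg.norm(x))
```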
This paper proposes a framework to unfold the safeguarded Krasnosel'skiı̆-Mann (SKM) method for learning-to-optimize (L2O) schemes. First, SKM is proposed in Algorithm 1 with a convergence guarantee established in Theorem 3.1 and Corollary 3.1. Then, SKM is unfolded and executed with a neural network summarized in Algorithm 2. Experiments on the Lasso and nonnegative least squares show the efficiency of the proposed method as well as the effectiveness of safeguarding compared to traditional L2O methods.
SP:62366ea14ace4437298fb9ddf7f095563709e3bf
Universal Safeguarded Learned Convex Optimization with Guaranteed Convergence
This paper presents a unified framework for parametrizing provably convergent algorithms and learning the parameters for a training dataset of problem instances of interest. The learned algorithm can then be used on unseen problems. One key idea of this algorithm is that it is safeguarded, meaning it will perform some standard, non-learned iterations if the predicted iterate is not good enough under some condition.
SP:62366ea14ace4437298fb9ddf7f095563709e3bf
Decoupling Adaptation from Modeling with Meta-Optimizers for Meta Learning
1 INTRODUCTION. Meta-learning or learning to learn is an appealing notion due to its potential in addressing important challenges when applying machine learning to real-world problems. In particular, learning from prior tasks but being able to adapt quickly to new tasks improves learning efficiency, model robustness, etc. A promising set of techniques, Model-Agnostic Meta-Learning (Finn et al., 2017) or MAML, and its variants, have received a lot of interest (Nichol et al., 2018; Lee & Choi, 2018; Grant et al., 2018). However, despite several efforts, understanding of how MAML works, either theoretically or in practice, has been lacking (Finn & Levine, 2018; Fallah et al., 2019).

For a model that meta-learns, its parameters need to encode not only the common knowledge extracted from the tasks it has seen, which form a task-general inductive bias, but also the capability to adapt to new test tasks (similar to those it has seen) with task-specific knowledge. This begs the question: how are these two sets of capabilities represented in a single model and how do they work together? In the case of deep learning models, one natural hypothesis is that while knowledge is represented distributedly in parameters, they can be localized – for instance, lower layers encode task-general inductive bias and the higher layers encode adaptable task-specific inductive bias. This hypothesis is consistent with one of deep learning's advantages in learning representations (or feature extractors) using its bottom layers. Then we must ask, in order for a deep learning model to meta-learn, does it need more depth than it needs for solving the target tasks? In other words, is having a large capacity to encode knowledge that is unnecessary post-adaptation the price one has to pay in order to be adaptable? Is there a way to have a smaller (say, less deep) meta-learnable model which still adapts well? This question is of both scientific interest and practical importance – a smaller model has a smaller (memory) footprint, faster inference and consumes fewer resources.

In this work, through empirical studies on both synthetic datasets and benchmarks used in the literature, we investigate these questions by analyzing how well different learning models can meta-learn and adapt. We choose to focus on MAML due to its popularity. Our observations suggest depth is indeed necessary for meta-learning, despite the tasks being solvable using a shallower model. Thus, applying MAML to shallower models does not result in successful meta-learning models that can adapt well. Moreover, our studies also show that higher layers are more responsible for adapting to new tasks while the lower layers are responsible for learning task-general features. Our findings prompt us to propose a new method for meta-learning. The new approach introduces a meta-optimizer which learns to guide the (parameter) optimization process of a small model. The small model is used for solving the tasks while the optimizer bears the burden of extracting the knowledge of how to adapt. Empirical results show that despite using smaller models, the proposed algorithm with small models attains similar performance to larger models which use MAML to meta-learn and adapt. We note that a recent and concurrent work to ours addresses questions in this line of inquiry (Raghu et al., 2019).
They reach similar conclusions through different analysis and likewise , they propose a different approach for improving MAML . We believe our work is complementary to theirs . 2 RELATED WORK . Meta-learning , or learning-to-learn , is a vibrant research area with a long and rich history , lying at the intersection of psychology ( Maudsley , 1980 ; Biggs , 1985 ) , neuroscience ( Hasselmo & Bower , 1993 ) , and computer science ( Schmidhuber , 1987 ; Thrun & Pratt , 1998 ) Of particular interest to this manuscript is the line of work concerned with optimization-based meta-learning ( OBML ) algorithms in the few-shot regime , of which MAML is a particular instance . ( Finn & Levine , 2018 ; Finn et al. , 2017 ; Finn , 2018 ) Since its inception , MAML has been widely applied to tackle the few-shot learning challenge , in domains such as computer vision ( Lee et al. , 2019 ) , natural language processing ( Gu et al. , 2018 ) , and robotics ( Nagabandi et al. , 2018 ) . It is also the basis of extensions for continual learning ( Finn et al. , 2019 ) , single- and multi-agent reinforcement learning ( Rothfuss et al. , 2018 ; Al-Shedivat et al. , 2017 ) , objective learning ( Chebotar et al. , 2019 ) , transfer learning ( Kirsch et al. , 2019 ) , and domain adaptation ( Li et al. , 2018 ) . Due to its generality , the adaptation procedure MAML introduces – which is the focus of our analysis – has recently been branded as Generalized Inner Loop MetaLearning . ( Grefenstette et al. , 2019 ) While popular in practical applications , relatively few works have analysed the convergence and modelling properties of those algorithms . Finn & Levine ( 2018 ) showed that , when combined with deep architectures , OBML is able to approximate arbitrary meta-learning schemes . Fallah et al . ( 2019 ) recently provided convergence guarantees for MAML to approximate first-order stationary points for non-convex loss surfaces , under some assumptions on the availability and distribution of the data . Other analyses ( empirical or theoretical ) have attempted to explain the generalization ability of OBML ( Guiroy et al. , 2019 ; Nichol et al. , 2018 ) , the bias induced by restricting the number of adaptation steps ( Wu et al. , 2018 ) , or the effect of higher-order terms in the meta-gradient estimation ( Foerster et al. , 2018 ; Rothfuss et al. , 2019 ) Closely related to our proposed methods are works attempting to improve the adaptation mechanisms of OBML . Meta-SGD ( Li et al. , 2017 ) meta-learns per-parameter learning rates , while Alpha MAML ( Behl et al. , 2019 ) adapts those learning rates during adaptation via gradient-descent . MetaCurvature ( Park & Oliva , 2019 ) propose to learn a Kronecker-factored pre-conditioning matrix to compute fast-adaptation updates . Their resulting algorithm is a special case of one of our methods , where the linear transformation is not updated during adaptation . Another way of constructing preconditioning matrices is to explicitly decompose all weight matrices of the model in two separate components , as done in T-Nets ( Lee & Choi , 2018 ) . The first component is only updated via the evaluation loss , while the second is also updated during fast-adaptation . Warped Gradient Descent ( Flennerhag et al. , 2019 ) further extends T-Nets by allowing both components to be non-linear functions . Instead of directly working with gradients , ( Chebotar et al. , 2019 ; Xu et al. 
, 2018) suggest directly learning a loss function which is differentiated during fast-adaptation and results in faster and better learning. Additionally, meta-optimizers have also been used for meta-descent (Sutton, 1981; Jacobs, 1988; Sutton, 1992b;a; Schraudolph, 1999). They can be learned during a pre-training phase (Andrychowicz et al., 2016; Li & Malik, 2017a;b; Metz et al., 2019b;a;c; Wichrowska et al., 2017) or online (Kearney et al., 2018; Jacobsen et al., 2019; Ravi & Larochelle, 2017). Our work differentiates itself from the above by diagnosing and attempting to address the entanglement between modelling and adaptation in the meta-learning regime. We uncover a failure mode of MAML with linear and smaller models, and propose an effective solution in the form of expressive meta-optimizers.

3 ANALYSIS OF MAML.

3.1 BACKGROUND. In MAML and its many variants (Lee & Choi, 2018; Nichol et al., 2018; Li et al., 2017), we have a model whose parameters are denoted by $\theta$. We would like to optimize $\theta$ such that the resulting model can adapt to new and unseen tasks fast. To this end, we are given a set of (meta) training tasks, indexed by $\tau$. Each such task is associated with a loss $L_\tau(\theta)$. Distinctively, MAML minimizes the expected task loss after an adaptation phase, consisting of a few steps of gradient descent from the model's current parameters. Since we do not have access to the target tasks to which we wish to adapt, we use the expected loss over the training tasks,
$$L_{\text{META}} = \mathbb{E}_{\tau \sim p(\tau)}\big[L_\tau\big(\theta - \alpha \nabla L_\tau(\theta)\big)\big] \qquad (1)$$
where the expectation is taken with respect to the distribution of the training tasks. $\alpha$ is the learning rate for the adaptation phase. The right-hand side uses only one step of gradient descent, since the aim is to adapt fast: in one step, we would like to reduce the loss as much as possible. In practice, a few more steps are often used.

3.2 ANALYSIS. Shallow models can be hard to meta-learn. Many intuitive explanations for why MAML works exist. One appealing suggestion is that the minimizer of the meta-learning loss $L_{\text{META}}$ is chosen in such a way that it provides a very good initialization for the adaptation phase; however, if the model is shallow such that $L_\tau$ is convex in its parameters, then any initialization that is good for fast adapting to one subset of tasks could be bad for another subset of tasks, since all the tasks have precisely one global minimizer and those minimizers can be arbitrarily far from each other. When the test tasks are distributed similarly to the training tasks, the ideal initialization point has to be the "mean" of the minimizers of the training tasks — the precise definition of the mean is not important, as we will see below.

We illustrate a surprising challenge by studying MAML on a synthetic dataset and the Omniglot task (Lake et al., 2015). Specifically, for the former study, we construct a set of binary classification tasks by first randomly sampling datapoints $w \in \mathbb{R}^{100}$ from a standard Gaussian and using each of them to define a linear decision boundary of a binary classification task. We assume the boundaries pass through the origin, and we sample training, validation and testing samples by randomly sampling data points from both sides of the decision boundaries. By construction, a linear model such as logistic regression is sufficient to achieve very high accuracy on any task.
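A minimal sketch of the meta-objective in Eq. (1) on the synthetic tasks just described, assuming a logistic-regression model, one inner gradient step, and illustrative sample sizes and learning rates; the meta-gradient is taken through the inner step via create_graph=True. This is not the authors' code.

```python
import torch
import torch.nn.functional as F

d, n, alpha = 100, 32, 0.5              # input dim, samples per split, inner step size

def sample_task():
    # a random linear decision boundary through the origin, as described above
    w = torch.randn(d)
    X = torch.randn(2 * n, d)
    y = (X @ w > 0).float()
    return (X[:n], y[:n]), (X[n:], y[n:])   # (support set, query set)

theta = torch.zeros(d, requires_grad=True)  # logistic-regression weights
opt = torch.optim.SGD([theta], lr=0.1)

for step in range(1000):
    (Xs, ys), (Xq, yq) = sample_task()
    inner_loss = F.binary_cross_entropy_with_logits(Xs @ theta, ys)
    (g,) = torch.autograd.grad(inner_loss, theta, create_graph=True)
    theta_adapted = theta - alpha * g       # one inner gradient step, as in Eq. (1)
    meta_loss = F.binary_cross_entropy_with_logits(Xq @ theta_adapted, yq)
    opt.zero_grad()
    meta_loss.backward()                    # differentiates through the inner step
    opt.step()
```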
But can MAML learn a logistic regression model from a subset of training tasks that is able to adapt quickly to the test tasks ? Note that due to the random sampling of the training tasks , the average of the minimizers ( ie , the samples from the Gaussian distribution ) is the origin . Likewise , for a set of test tasks randomly sampled the same way , the origin provides the best initialization by not favoring any particular task . Figure 1 reports the 1-step post-adaptation accuracy on the test tasks for the meta-learned logistic regression model . Surprisingly , the model fails to perform better than chance . Despite the simplicity of the tasks , logistic regression models are unable to find the origin as an initialization that adapts quickly to a set of test tasks that are similar to the training tasks . Figure 1 also reports how being deep can drastically change the behavior of MAML . There , we add a 4-layer linear network ( LinNet ) to the logistic regression model ( before the sigmoid activation ) . Note that while the model has the same representational capacity as a linear logistic regression , it is overparameterized and has many local optimizers . As such , MAML can train this model such that its 1-step adaptation accuracy reaches 92 % on average . We observe the same phenomena on meta-learning with MAML on the Omniglot dataset ( details of the dataset are given in the Section 5 ) . The shallow logistic regression model achieves 45 % accuracy on average ( for the 5-way classification tasks ) after 2-step adaptation from the meta-learned initialization . However , with a linear network , the adapted model achieves significantly higher accuracy – 70 % on average , while having the same modeling capacity as the logistic regression . In summary , these experiments suggest that even for tasks that are solvable with shallow models , a model needs to have enough depth in order to be meta-learnable and to adapt . We postpone the description of our experiments on nonlinear models to section 5 , where we also show that having sufficient depth is crucial for models to be meta-learnable , even when the tasks require fewer layers . A natural question arises : if being deep is so important for meta-learning on even very simple tasks , what different roles , if any , do different layers of a deep network play ? Depth enables task-general feature learning and fast adaptation We hypothesize that for deep models meta-learned with MAML , lower layers learn task-invariant features while higher layers are responsible for fast-adaptation . To examine this claim , we meta-train a model consisting of four convolutional layers ( C1 - C4 ) and a final fully-connected layer ( FC ) on Omniglot ( Lake et al. , 2015 ) and CIFAR-FS ( Bertinetto et al. , 2019 ) . ( Experimental setups are detailed in Section 5 . ) Once the model has finished meta-training , we perform a layer-wise ablation to study each layer ’ s effect on adaptation . In particular , we iterate over each layer and perform two sets of experiments . In the first , we freeze the weights of the layer such that it does not get updated during fast-adaptation – we call it freezing only this . In the second experiment , we freeze all layers but the layer such that this layer is updated during fast-adaptation – we call it adapting only this . The left two plots in Figure 2 report the average accuracy over 100 testing tasks from both datasets . 
We observe that freezing only the first few lower layers (C1-C3) does not cause noticeable degradation to the post-adaptation accuracy. In fact, as long as the last convolutional layer (C4) is not frozen, post-adaptation accuracy remains unaffected. This indicates that C1-C3 provide information that is task-invariant, while C4 is crucial for adaptation. This does not mean FC is not important — since adapting C4 requires gradients passing through the FC layer, it cannot be arbitrary. In fact, in the rightmost plot of the figure, C1-C3 are held fixed during adaptation, while C4 and FC are allowed to adapt and FC is perturbed with noise. When the noise is strong, the performance degrades significantly. Thus, we conclude that both C4 and FC play important roles in the mechanism for fast adaptation. We note that the recent work by Raghu et al. (2019) concurrently reached similar conclusions on the mini-ImageNet dataset, using feature similarity-based analyses of the model's representations. These observations highlight a fundamental issue: the property of being meta-learnable entails more model capacity than being learnable for a specific task. Thus, MAML can fail on models that lack the capacity to encode both task-general features and adaptation information, even when the models themselves are powerful enough to perform well on each of the tasks used for the meta-learning procedure. For example, with linear models (e.g., logistic regression), the parameters are forced to overlap and serve both modelling and adaptation purposes. However, as soon as the models are overparameterized, the extra layers enable meta-learnability. In Section 5, we show that this observation also applies to nonlinear models, where MAML-trained models quickly lose their performance when the number of layers is reduced.
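A hedged sketch of the "adapt only this" / "freeze only this" probes described above: after meta-training, fast-adaptation is re-run with the inner-loop gradient steps restricted to a chosen subset of layers, simply by taking gradients only with respect to those layers' parameters. Linear layers stand in for the convolutional blocks C1-C4, and all sizes, step counts and learning rates are illustrative assumptions rather than the experimental configuration.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

# A small stand-in for the C1-C4 + FC model discussed above (sizes illustrative).
model = nn.ModuleDict({
    "C1": nn.Linear(32, 32), "C2": nn.Linear(32, 32),
    "C3": nn.Linear(32, 32), "C4": nn.Linear(32, 32),
    "FC": nn.Linear(32, 5),
})

def forward(m, x):
    for name in ["C1", "C2", "C3", "C4"]:
        x = torch.relu(m[name](x))
    return m["FC"](x)

def adapt(meta_model, adapt_layers, Xs, ys, alpha=0.5, steps=2):
    """'Adapt only this' probe: inner gradient steps touch only adapt_layers."""
    m = copy.deepcopy(meta_model)
    params = [p for name in adapt_layers for p in m[name].parameters()]
    for _ in range(steps):
        grads = torch.autograd.grad(F.cross_entropy(forward(m, Xs), ys), params)
        with torch.no_grad():
            for p, g in zip(params, grads):
                p -= alpha * g              # all layers not in adapt_layers stay frozen
    return m

# e.g. freeze C1-C3 and let only C4 and FC adapt to a toy 5-way task
Xs, ys = torch.randn(25, 32), torch.randint(0, 5, (25,))
adapted = adapt(model, ["C4", "FC"], Xs, ys)
```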
This paper analyzes the popular MAML (Model-Agnostic Meta-Learning) method, and thereafter proposes a new approach to meta-learning based on observations from empirical studies. The key idea of the work is to separate the base model and task-specific adaptation components of MAML. This decoupling of adaptation and modeling reduces the burden on the model, thus enabling smaller, memory-efficient deep learning models to adapt and give high performance on meta-learning tasks. The paper proposes a learnable meta-optimizer consisting of a parametrized function U such that the knowledge of adaptation is embedded into its parameters (A, b) instead of the forward model's parameters. The computational challenges posed by the proposed method are addressed by expressing the parameter matrix A as a Kronecker product of small matrices, which is more efficient from a memory and time complexity viewpoint. The results on Omniglot and CIFAR-FS are promising, and the paper shows that the proposed meta-optimizer is "more expressive" and can adapt a shallower model to the same level of performance as MAML.
SP:62c41894b5a79ff20a4a1e3d56c646e08981814d
Decoupling Adaptation from Modeling with Meta-Optimizers for Meta Learning
1 INTRODUCTION . Meta-learning or learning to learn is an appealing notion due to its potential in addressing important challenges when applying machine learning to real-world problems . In particular , learning from prior tasks but being able to adapt quickly to new tasks improves learning efficiency , model robustness , etc . A promising set of techniques , Model-Agnostic Meta-Learning ( Finn et al. , 2017 ) or MAML , and its variants , have received a lot of interest ( Nichol et al. , 2018 ; Lee & Choi , 2018 ; Grant et al. , 2018 ) . However , despite several efforts , understanding of how MAML works , either theoretically or in practice , has been lacking ( Finn & Levine , 2018 ; Fallah et al. , 2019 ) . For a model that meta-learns , its parameters need to encode not only the common knowledge extracted from the tasks it has seen , which forms a task-general inductive bias , but also the capability to adapt to new test tasks ( similar to those it has seen ) with task-specific knowledge . This begs the question : how are these two sets of capabilities represented in a single model and how do they work together ? In the case of deep learning models , one natural hypothesis is that while knowledge is represented distributedly in parameters , it can be localized – for instance , lower layers encode task-general inductive bias and the higher layers encode adaptable task-specific inductive bias . This hypothesis is consistent with one of deep learning ’ s advantages in learning representations ( or feature extractors ) using its bottom layers . Then we must ask , in order for a deep learning model to meta-learn , does it need more depth than it needs for solving the target tasks ? In other words , is having a large capacity to encode knowledge that is unnecessary post-adaptation the price one has to pay in order to be adaptable ? Is there a way to have a smaller ( say , less deep ) meta-learnable model which still adapts well ? This question is of both scientific interest and practical importance – a smaller model has a smaller ( memory ) footprint , faster inference and consumes fewer resources . In this work , through empirical studies on both synthetic datasets and benchmarks used in the literature , we investigate these questions by analyzing how well different learning models can meta-learn and adapt . We choose to focus on MAML due to its popularity . Our observations suggest depth is indeed necessary for meta-learning , despite the tasks being solvable using a shallower model . Thus , applying MAML to shallower models does not result in successful meta-learning models that can adapt well . Moreover , our studies also show that higher layers are more responsible for adapting to new tasks while the lower layers are responsible for learning task-general features . Our findings prompt us to propose a new method for meta-learning . The new approach introduces a meta-optimizer which learns to guide the ( parameter ) optimization process of a small model . The small model is used for solving the tasks while the optimizer bears the burden of extracting the knowledge of how to adapt . Empirical results show that despite using smaller models , the proposed algorithm with small models attains similar performance to larger models which use MAML to meta-learn and adapt . We note that a recent and concurrent work to ours addresses questions in this line of inquiry ( Raghu et al. , 2019 ) .
They reach similar conclusions through a different analysis and likewise , they propose a different approach for improving MAML . We believe our work is complementary to theirs . 2 RELATED WORK . Meta-learning , or learning-to-learn , is a vibrant research area with a long and rich history , lying at the intersection of psychology ( Maudsley , 1980 ; Biggs , 1985 ) , neuroscience ( Hasselmo & Bower , 1993 ) , and computer science ( Schmidhuber , 1987 ; Thrun & Pratt , 1998 ) . Of particular interest to this manuscript is the line of work concerned with optimization-based meta-learning ( OBML ) algorithms in the few-shot regime , of which MAML is a particular instance ( Finn & Levine , 2018 ; Finn et al. , 2017 ; Finn , 2018 ) . Since its inception , MAML has been widely applied to tackle the few-shot learning challenge , in domains such as computer vision ( Lee et al. , 2019 ) , natural language processing ( Gu et al. , 2018 ) , and robotics ( Nagabandi et al. , 2018 ) . It is also the basis of extensions for continual learning ( Finn et al. , 2019 ) , single- and multi-agent reinforcement learning ( Rothfuss et al. , 2018 ; Al-Shedivat et al. , 2017 ) , objective learning ( Chebotar et al. , 2019 ) , transfer learning ( Kirsch et al. , 2019 ) , and domain adaptation ( Li et al. , 2018 ) . Due to its generality , the adaptation procedure MAML introduces – which is the focus of our analysis – has recently been branded as Generalized Inner Loop Meta-Learning ( Grefenstette et al. , 2019 ) . While popular in practical applications , relatively few works have analysed the convergence and modelling properties of those algorithms . Finn & Levine ( 2018 ) showed that , when combined with deep architectures , OBML is able to approximate arbitrary meta-learning schemes . Fallah et al . ( 2019 ) recently provided convergence guarantees for MAML to approximate first-order stationary points for non-convex loss surfaces , under some assumptions on the availability and distribution of the data . Other analyses ( empirical or theoretical ) have attempted to explain the generalization ability of OBML ( Guiroy et al. , 2019 ; Nichol et al. , 2018 ) , the bias induced by restricting the number of adaptation steps ( Wu et al. , 2018 ) , or the effect of higher-order terms in the meta-gradient estimation ( Foerster et al. , 2018 ; Rothfuss et al. , 2019 ) . Closely related to our proposed methods are works attempting to improve the adaptation mechanisms of OBML . Meta-SGD ( Li et al. , 2017 ) meta-learns per-parameter learning rates , while Alpha MAML ( Behl et al. , 2019 ) adapts those learning rates during adaptation via gradient descent . Meta-Curvature ( Park & Oliva , 2019 ) proposes to learn a Kronecker-factored pre-conditioning matrix to compute fast-adaptation updates . Their resulting algorithm is a special case of one of our methods , where the linear transformation is not updated during adaptation . Another way of constructing preconditioning matrices is to explicitly decompose all weight matrices of the model into two separate components , as done in T-Nets ( Lee & Choi , 2018 ) . The first component is only updated via the evaluation loss , while the second is also updated during fast-adaptation . Warped Gradient Descent ( Flennerhag et al. , 2019 ) further extends T-Nets by allowing both components to be non-linear functions . Instead of directly working with gradients , ( Chebotar et al. , 2019 ; Xu et al.
, 2018 ) suggest directly learning a loss function which is differentiated during fast-adaptation and results in faster and better learning . Additionally , meta-optimizers have also been used for meta-descent ( Sutton , 1981 ; Jacobs , 1988 ; Sutton , 1992b ; a ; Schraudolph , 1999 ) . They can be learned during a pre-training phase ( Andrychowicz et al. , 2016 ; Li & Malik , 2017a ; b ; Metz et al. , 2019b ; a ; c ; Wichrowska et al. , 2017 ) or online ( Kearney et al. , 2018 ; Jacobsen et al. , 2019 ; Ravi & Larochelle , 2017 ) . Our work differentiates from the above by diagnosing and attempting to address the entanglement between modelling and adaptation in the meta-learning regime . We uncover a failure mode of MAML with linear and smaller models , and propose an effective solution in the form of expressive meta-optimizers . 3 ANALYSIS OF MAML . 3.1 BACKGROUND . In MAML and its many variants ( Lee & Choi , 2018 ; Nichol et al. , 2018 ; Li et al. , 2017 ) , we have a model whose parameters are denoted by θ . We would like to optimize θ such that the resulting model can adapt to new and unseen tasks quickly . To this end , we are given a set of ( meta ) training tasks , indexed by τ . With each such task , we associate a loss Lτ ( θ ) . Distinctively , MAML minimizes the expected task loss after an adaptation phase , consisting of a few steps of gradient descent from the model ’ s current parameters . Since we do not have access to the target tasks to which we wish to adapt , we use the expected loss over the training tasks , LMETA = E τ∼p ( τ ) [ Lτ ( θ − α ∇Lτ ( θ ) ) ] ( 1 ) where the expectation is taken with respect to the distribution of the training tasks and α is the learning rate for the adaptation phase . The right-hand side uses only one step of gradient descent , since the aim is to adapt fast : in one step , we would like to reduce the loss as much as possible . In practice , a few more steps are often used . 3.2 ANALYSIS . Shallow models can be hard to meta-learn . Many intuitive explanations for why MAML works exist . One appealing suggestion is that the minimizer of the meta-learning loss LMETA is chosen in such a way that it provides a very good initialization for the adaptation phase ; however , if the model is shallow such that Lτ is convex in its parameters , then any initialization that is good for adapting fast to one subset of tasks could be bad for another subset of tasks , since all the tasks have precisely one global minimizer and those minimizers can be arbitrarily far from each other . When the test tasks are distributed similarly to the training tasks , the ideal initialization point has to be the “ mean ” of the minimizers of the training tasks — the precise definition of the mean is not important , as we will see below . We illustrate a surprising challenge by studying MAML on a synthetic dataset and the Omniglot task ( Lake et al. , 2015 ) . Specifically , for the former study , we construct a set of binary classification tasks by first randomly sampling datapoints w ∈ R^100 from a standard Gaussian and using each of them to define the linear decision boundary of a binary classification task . We assume the boundaries pass through the origin and we sample training , validation and testing samples by randomly sampling data points from both sides of the decision boundaries . By construction , a linear model such as logistic regression is sufficient to achieve very high accuracy on any task .
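To make Equation ( 1 ) and the synthetic setup concrete, the following is a minimal one-step MAML sketch in PyTorch for the logistic regression case. The task sampler, the support/query split, and all hyperparameters (alpha, the meta learning rate, batch sizes) are illustrative assumptions rather than the authors' code; it is exactly this kind of shallow model whose post-adaptation accuracy Figure 1 shows to be no better than chance.

```python
# Minimal one-step MAML sketch for Eq. (1) on synthetic linear tasks:
# logistic regression with weights w in R^100, adapted with one inner step.
import torch
import torch.nn.functional as F

dim, alpha, tasks_per_batch = 100, 0.5, 4
w = torch.zeros(dim, requires_grad=True)          # meta-learned initialization theta
meta_opt = torch.optim.Adam([w], lr=0.01)

def sample_task():
    return torch.randn(dim)                        # random hyperplane through the origin

def sample_data(boundary, n=32):
    x = torch.randn(n, dim)
    y = (x @ boundary > 0).float()                 # label = side of the boundary
    return x, y

for _ in range(1000):                              # meta-training iterations
    meta_opt.zero_grad()
    meta_loss = 0.0
    for _ in range(tasks_per_batch):
        boundary = sample_task()
        x_s, y_s = sample_data(boundary)           # data for the adaptation step
        x_q, y_q = sample_data(boundary)           # data for evaluating the adapted model
        inner = F.binary_cross_entropy_with_logits(x_s @ w, y_s)
        (grad,) = torch.autograd.grad(inner, w, create_graph=True)
        w_adapted = w - alpha * grad               # theta - alpha * grad L_tau(theta)
        meta_loss = meta_loss + F.binary_cross_entropy_with_logits(x_q @ w_adapted, y_q)
    (meta_loss / tasks_per_batch).backward()       # backprop through the inner step
    meta_opt.step()
```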
But can MAML learn a logistic regression model from a subset of training tasks that is able to adapt quickly to the test tasks ? Note that due to the random sampling of the training tasks , the average of the minimizers ( ie , the samples from the Gaussian distribution ) is the origin . Likewise , for a set of test tasks randomly sampled the same way , the origin provides the best initialization by not favoring any particular task . Figure 1 reports the 1-step post-adaptation accuracy on the test tasks for the meta-learned logistic regression model . Surprisingly , the model fails to perform better than chance . Despite the simplicity of the tasks , logistic regression models are unable to find the origin as an initialization that adapts quickly to a set of test tasks that are similar to the training tasks . Figure 1 also reports how being deep can drastically change the behavior of MAML . There , we add a 4-layer linear network ( LinNet ) to the logistic regression model ( before the sigmoid activation ) . Note that while the model has the same representational capacity as a linear logistic regression , it is overparameterized and has many local optimizers . As such , MAML can train this model such that its 1-step adaptation accuracy reaches 92 % on average . We observe the same phenomena on meta-learning with MAML on the Omniglot dataset ( details of the dataset are given in the Section 5 ) . The shallow logistic regression model achieves 45 % accuracy on average ( for the 5-way classification tasks ) after 2-step adaptation from the meta-learned initialization . However , with a linear network , the adapted model achieves significantly higher accuracy – 70 % on average , while having the same modeling capacity as the logistic regression . In summary , these experiments suggest that even for tasks that are solvable with shallow models , a model needs to have enough depth in order to be meta-learnable and to adapt . We postpone the description of our experiments on nonlinear models to section 5 , where we also show that having sufficient depth is crucial for models to be meta-learnable , even when the tasks require fewer layers . A natural question arises : if being deep is so important for meta-learning on even very simple tasks , what different roles , if any , do different layers of a deep network play ? Depth enables task-general feature learning and fast adaptation We hypothesize that for deep models meta-learned with MAML , lower layers learn task-invariant features while higher layers are responsible for fast-adaptation . To examine this claim , we meta-train a model consisting of four convolutional layers ( C1 - C4 ) and a final fully-connected layer ( FC ) on Omniglot ( Lake et al. , 2015 ) and CIFAR-FS ( Bertinetto et al. , 2019 ) . ( Experimental setups are detailed in Section 5 . ) Once the model has finished meta-training , we perform a layer-wise ablation to study each layer ’ s effect on adaptation . In particular , we iterate over each layer and perform two sets of experiments . In the first , we freeze the weights of the layer such that it does not get updated during fast-adaptation – we call it freezing only this . In the second experiment , we freeze all layers but the layer such that this layer is updated during fast-adaptation – we call it adapting only this . The left two plots in Figure 2 report the average accuracy over 100 testing tasks from both datasets . 
We observe that freezing only the first few lower layers ( C1-C3 ) does not cause noticeable degradation to the post-adaptation accuracy . In fact , as long as the last convolutional layer ( C4 ) is not frozen , post-adaptation accuracy remains unaffected . This indicate that C1-C3 provide information that is task-invariant , while C4 is crucial for adaptation . This does not mean FC is not important — since adapting C4 requires gradients passing through the FC layer , it can not be arbitrary . In fact , in the rightmost plot of the figure , C1-C3 are held fixed during adaptation . While C4 and FC are allowed to adapt , and FC is perturbed with noise . When the noise is strong , the performance degrades significantly . Thus , we conclude that both C4 and FC play important roles in the mechanism for fast adaptation . We note that the recent work by Raghu et al . ( 2019 ) concurently reached similar conclusions on the mini-ImageNet dataset , using feature similarity-based analyses of the model ’ s representations . These observations highlight a fundamental issue : the property of being meta-learnable entails more model capacity than being learnable for a specific task . Thus , MAML can fail on models that lack the capacity to encode both task-general features and adaptation information , even when the models themselves are powerful enough to perform well on each of the tasks used for the meta-learning procedure . For example , with linear models ( e.g . logistic regression ) , the parameters are forced to overlap and serve both modelling and adaptation purposes . However , as soon as the models are overparameterized , the extra layers enable meta-learnability . In section 5 , we show that this observation also applies to nonlinear models where MAML-trained models quickly lose their performance when the number of layers is reduced .
This paper presents an experimental study of gradient-based meta-learning models, most notably MAML. The results suggest that modeling and adaptation happen in different parts of the network, leading to an inefficient use of model capacity, which explains the poor performance of MAML on linear models (or small networks). To tackle this issue, the authors propose a Kronecker factorization of the meta-optimizer.
SP:62c41894b5a79ff20a4a1e3d56c646e08981814d
Improving the Gating Mechanism of Recurrent Neural Networks
1 INTRODUCTION . Recurrent neural networks ( RNNs ) have become a standard machine learning tool for learning from sequential data . However , RNNs are prone to the vanishing gradient problem , which occurs when the gradients of the recurrent weights become vanishingly small as they get backpropagated through time ( Hochreiter et al. , 2001 ) . A common approach to alleviate the vanishing gradient problem is to use gating mechanisms , leading to models such as the long short-term memory ( Hochreiter & Schmidhuber , 1997 , LSTM ) and gated recurrent units ( Chung et al. , 2014 , GRUs ) . These gated RNNs have been very successful in several different application areas such as in reinforcement learning ( Kapturowski et al. , 2018 ; Espeholt et al. , 2018 ) and natural language processing ( Bahdanau et al. , 2014 ; Kočiskỳ et al. , 2018 ) . At every time step , gated recurrent models form a weighted combination of the history summarized by the previous state , and ( a function of ) the incoming inputs , to create the next state . The values of the gates , which are the coefficients of the combination , control the length of temporal dependencies that can be addressed . This weighted update can be seen as an additive or residual connection on the recurrent state , which helps signals propagate through time without vanishing . However , the gates themselves are prone to a saturating property which can also hamper gradient-based learning . This is particularly troublesome for RNNs , where carrying information for very long time delays requires gates to be very close to their saturated states . We formulate and address two particular problems that arise with the standard gating mechanism of recurrent models . First , typical initialization of the gates is relatively concentrated . This restricts the range of timescales the model can address , as the timescale of a particular unit is dictated by its gates . Our first proposal , which we call uniform gate initialization ( Section 2.2 ) , addresses this by directly initializing the activations of these gates from a distribution that captures a wider spread of dependency lengths . Second , learning when gates are in their saturation regime is difficult because of vanishing gradients through the gates . We derive a modification that uses an auxiliary refine gate to modulate a main gate , which allows it to have a wider range of activations without gradients vanishing as quickly . Combining these two independent modifications yields our main proposal , which we call the UR-gating mechanism . These changes can be applied to any gate ( i.e . bounded parametrized function ) and have minimal to no overhead in terms of speed , memory , code complexity , and ( hyper- ) parameters . We apply them to the forget gate of recurrent models , and evaluate on many benchmarks including synthetic long-term dependency tasks , sequential pixel-level image classification , language modeling , program execution , and reinforcement learning . Finally , we connect our methods to other proposed gating modifications , introduce a framework that allows each component to be replaced with similar ones , and perform theoretical analysis and extensive ablations of our method . Empirically , the UR-gating mechanism robustly improves on the standard forget and input gates of gated recurrent models .
When applied to the LSTM , these simple modifications solve synthetic memory tasks that are pathologically difficult for the standard LSTM , achieve state-of-the-art results on sequential MNIST and CIFAR-10 , and show consistent improvements in language modeling on the WikiText-103 dataset ( Merity et al. , 2016 ) and reinforcement learning tasks ( Hung et al. , 2018 ) . 2 GATED RECURRENT NEURAL NETWORKS . Broadly speaking , RNNs are used to sweep over a sequence of input data xt to produce a sequence of recurrent states ht ∈ R^d summarizing information seen so far . At a high level , an RNN is just a parametrized function in which each sequential application of the network computes a state update u : ( xt , ht−1 ) ↦ ht . Gating mechanisms were introduced to address the vanishing gradient problem ( Bengio et al. , 1994 ; Hochreiter et al. , 2001 ) , and have proven crucial to the success of RNNs . This mechanism essentially smooths out the update using the following equation , ht = ft ( xt , ht−1 ) ◦ ht−1 + it ( xt , ht−1 ) ◦ u ( xt , ht−1 ) , ( 1 ) where the forget gate ft and input gate it are [ 0,1 ]^d-valued functions that control how fast information is forgotten or allowed into the memory state . When the gates are tied , i.e . ft + it = 1 as in GRUs , they behave as a low-pass filter , deciding the time-scale on which the unit will respond ( Tallec & Ollivier , 2018 ) . For example , large forget gate activations close to ft = 1 are necessary for recurrent models to address long-term dependencies . We will introduce our improvements to the gating mechanism primarily in the context of the LSTM , which is the most popular recurrent model . However , these techniques can be used in any model that makes similar use of gates . ft = σ ( Lf ( xt , ht−1 ) ) ( 2 ) , it = σ ( Li ( xt , ht−1 ) ) ( 3 ) , ut = tanh ( Lu ( xt , ht−1 ) ) ( 4 ) , ct = ft ◦ ct−1 + it ◦ ut ( 5 ) , ot = σ ( Lo ( xt , ht−1 ) ) ( 6 ) , ht = ot ◦ tanh ( ct ) ( 7 ) . A typical LSTM ( equations ( 2 ) - ( 7 ) ) is an RNN whose state is represented by a tuple ( ht , ct ) consisting of a “ hidden ” state and “ cell ” state . The basic gate equation ( 1 ) is used to create the next cell state ct ( 5 ) . Note that the gate and update activations are a function of the previous hidden state ht−1 instead of ct−1 . Here , L∗ stands for a parameterized linear function of its inputs with bias b∗ , e.g . Lf ( xt , ht−1 ) = Wfx xt + Wfh ht−1 + bf , ( 8 ) and σ ( · ) refers to the standard sigmoid activation function which we will assume is used for defining [ 0,1 ] -valued activations in the rest of this paper . The gates of the LSTM were initially motivated as a binary mechanism , switching on or off to allow information and gradients to pass through . However , in reality this fails to happen due to a combination of initialization and saturation . This can be problematic , such as when very long dependencies are present . 2.1 THE UR-LSTM . bf ∼ σ^−1 ( U [ 0,1 ] ) ( 9 ) , ft = σ ( Lf ( xt , ht−1 ) ) ( 10 ) , rt = σ ( Lr ( xt , ht−1 ) ) ( 11 ) , gt = rt · ( 1 − ( 1 − ft )^2 ) + ( 1 − rt ) · ft^2 ( 12 ) , ct = gt ◦ ct−1 + ( 1 − gt ) ◦ ut ( 13 ) , ot = σ ( Lo ( xt , ht−1 ) ) ( 14 ) , ht = ot ◦ tanh ( ct ) ( 15 ) . We present two solutions which work in tandem to address the previously described issues . The first ensures a diverse range of gate values at the start of training by sampling the gate ’ s biases so that the activations will be approximately uniformly distributed at initialization . We call this Uniform Gate Initialization ( UGI ) .
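Before turning to the second modification, a minimal sketch of the uniform gate initialization of equation ( 9 ): the forget-gate bias is drawn as the inverse sigmoid (logit) of uniform samples, so that the initial forget activations are approximately uniform on ( 0,1 ). This is written in PyTorch; the function name, the clamping epsilon, and the hidden size are illustrative assumptions.

```python
# Minimal sketch of Uniform Gate Initialization, Eq. (9): b_f ~ sigmoid^{-1}(U(0,1)).
import torch

def uniform_gate_bias_(bias: torch.Tensor, eps: float = 1e-3) -> torch.Tensor:
    """In-place init so that sigmoid(bias) is approximately uniform on (0, 1)."""
    with torch.no_grad():
        u = torch.rand_like(bias) * (1 - 2 * eps) + eps   # avoid logit(0) and logit(1)
        bias.copy_(torch.log(u) - torch.log1p(-u))        # logit(u)
    return bias

# Example: initialize a forget-gate bias of hidden size 256 and inspect the
# implied characteristic timescales 1 / (1 - f) discussed in Section 2.2.
b_f = uniform_gate_bias_(torch.empty(256))
f0 = torch.sigmoid(b_f)
print(float(f0.min()), float(f0.max()), float((1.0 / (1.0 - f0)).median()))
```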
The second allows better gradient flow by reparameterizing the gate using an auxiliary “ refine ” gate . As our main application is for recurrent models , we present the full UR-LSTM model in equations ( 9 ) - ( 15 ) . However , we note that these methods can be used to modify any gate ( or more generally , any bounded function ) in any model . ( In this work , we use “ gate ” to refer either to a [ 0,1 ] -valued function or to the value , i.e . the “ activation ” , of that function . ) In this context the UR-LSTM is simply defined by applying UGI and a refine gate r on the original forget gate f to create an effective forget gate g ( equation ( 12 ) ) . This effective gate is then used in the cell state update ( 13 ) . Empirically , these small modifications to an LSTM are enough to allow it to achieve nearly binary activations and solve difficult memory problems ( Figure 4 ) . In the rest of Section 2 , we provide theoretical justifications for UGI and refine gates . 2.2 UNIFORM GATE INITIALIZATION . Standard initialization schemes for the gates can prevent the learning of long-term temporal correlations ( Tallec & Ollivier , 2018 ) . For example , supposing that a unit in the cell state has constant forget gate value ft , then the contribution of an input xt in k time steps will decay by ( ft )^k . This gives the unit an effective decay period or characteristic timescale of O ( 1 / ( 1 − ft ) ) , i.e . the number of timesteps it takes for the contribution to decay by 1/e . Standard initialization of linear layers L sets the bias term to 0 , which causes the forget gate values ( 2 ) to concentrate around 1/2 . A common trick of setting the forget gate bias to bf = 1.0 ( Jozefowicz et al. , 2015 ) does increase the value of the decay period to 1 / ( 1 − σ ( 1.0 ) ) ≈ 3.7 . However , this is still relatively small , and moreover fixed , hindering the model from easily learning dependencies at varying timescales . We instead propose to directly control the distribution of forget gates , and hence the corresponding distribution of decay periods . In particular , we propose to simply initialize the value of the forget gate activations ft according to a uniform distribution U ( 0,1 ) , as described in Section 2.1 . An important difference between UGI and standard or other ( e.g . Tallec & Ollivier , 2018 ) initializations is that negative forget biases are allowed . The effect of UGI is that all timescales are covered , from units with very high forget activations remembering information ( nearly ) indefinitely , to those with low activations focusing solely on the incoming input . Additionally , it introduces no additional parameters ; it can even have fewer hyperparameters than the standard gate initialization , which sometimes tunes the forget bias bf . Appendix B.2 and B.3 further discuss the theoretical effects of UGI on timescales . 2.3 THE REFINE GATE . Given a gate f = σ ( Lf ( x ) ) ∈ [ 0,1 ] , the refine gate is an independent gate r = σ ( Lr ( x ) ) , and modulates f to produce a value g ∈ [ 0,1 ] that will be used in place of f downstream . It is motivated by considering how to modify the output of a gate f in a way that promotes gradient-based learning , derived below . An additive modification . The root of the saturation problem is that the gradient ∇f of a gate , which can be written solely as a function of the activation value as f ( 1 − f ) , decays rapidly as f approaches 0 or 1 . Thus when the activation f is past a certain upper or lower threshold , learning effectively stops .
This problem cannot be fully addressed only by modifying the input to the sigmoid , as in UGI and other techniques , as the gradient will still vanish by backpropagating through the activation function . Therefore , to better control activations near the saturating regime , instead of changing the input to the sigmoid in f = σ ( L ( x ) ) , we consider modifying the output . In particular , we consider adjusting f with an input-dependent update φ ( f , x ) for some function φ , to create an effective gate g = f + φ ( f , x ) that will be used in place of f downstream such as in the main state update ( 1 ) . This sort of additive ( “ residual ” ) connection is a common technique to increase gradient flow , and indeed was the motivation of the LSTM additive gated update ( 1 ) itself ( Hochreiter & Schmidhuber , 1997 ) . Choosing the adjustment function . Although many choices seem plausible for selecting the additive update φ , we reason backwards from necessary properties of the effective activation g to deduce a principled function φ . The refine gate will appear as a result . First , note that ft might need to be increased or decreased , regardless of what its value is . For example , given a large activation ft near saturation , it may need to be even higher to address long-term dependencies in recurrent models ; alternatively , if it is too high by initialization or needs to unlearn previous behavior , it may need to decrease . Therefore , the additive update to f should create an effective activation gt in the range ft ± α for some α . Note that the allowed adjustment range α = α ( ft ) needs to be a function of f in order to keep g ∈ [ 0,1 ] . In particular , the additive adjustment range α ( f ) should satisfy the following natural properties : Validity : α ( f ) ≤ min ( f , 1 − f ) , to ensure g ∈ f ± α ( f ) ⊆ [ 0,1 ] . Symmetry : Since 0 and 1 are completely symmetrical in the gating framework , α ( f ) = α ( 1 − f ) . Differentiability : α ( f ) will be used in backpropagation , requiring α ∈ C^1 ( R ) . Figure 2a illustrates the general appearance of α ( f ) based on these properties . In particular , Validity implies that its derivative satisfies α′ ( 0 ) ≤ 1 and α′ ( 1 ) ≥ −1 , Symmetry implies α′ ( f ) = −α′ ( 1 − f ) , and Differentiability implies α′ is continuous . The simplest such function satisfying these is the linear α′ ( f ) = 1 − 2f , yielding α ( f ) = f − f^2 = f ( 1 − f ) . Given such an α ( f ) , recall that the goal is to produce an effective activation g = f + φ ( f , x ) such that g ∈ f ± α ( f ) ( Figure 2b ) . Our final observation is that the simplest such function φ satisfying this is φ ( f , x ) = α ( f ) ψ ( f , x ) for some ψ ( · ) ∈ [ −1,1 ] . Using the standard method for defining [ −1,1 ] -valued functions via a tanh non-linearity leads to φ ( f , x ) = α ( f ) ( 2r − 1 ) for another gate r = σ ( L ( x ) ) . The full update is given in Equation ( 16 ) , g = f + α ( f ) ( 2r − 1 ) = f + f ( 1 − f ) ( 2r − 1 ) = ( 1 − r ) · f^2 + r · ( 1 − ( 1 − f )^2 ) ( 16 ) Equation ( 16 ) has the elegant interpretation that the gate r linearly interpolates between the lower band f − α ( f ) = f^2 and the symmetric upper band f + α ( f ) = 1 − ( 1 − f )^2 ( Figure 2b ) . In other words , the original gate f is the coarse-grained determinant of the effective gate g , while the gate r “ refines ” it . This allows the effective gate g to reach much higher and lower activations than the constituent gates f and r ( Figure 2c ) , bypassing the saturating gradient problem .
For example , this allows the effective forget gate to reach g=0.99 when the forget gate is only f=0.9 .
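A minimal PyTorch sketch of the refine-gate computation of equations ( 12 ) - ( 13 ) and of the numerical example above; the helper names and the use of raw logits as inputs are illustrative assumptions, not a full UR-LSTM implementation.

```python
# Minimal sketch of the refine-gate mechanism, Eqs. (12)-(13).
import torch

def effective_gate(f: torch.Tensor, r: torch.Tensor) -> torch.Tensor:
    """g = r * (1 - (1 - f)^2) + (1 - r) * f^2, i.e. f + f(1 - f)(2r - 1)."""
    return r * (1 - (1 - f) ** 2) + (1 - r) * f ** 2

def ur_cell_update(f_logits, r_logits, u_logits, c_prev):
    f = torch.sigmoid(f_logits)          # raw forget gate, Eq. (10)
    r = torch.sigmoid(r_logits)          # refine gate, Eq. (11)
    g = effective_gate(f, r)             # effective forget gate, Eq. (12)
    u = torch.tanh(u_logits)             # candidate update
    return g * c_prev + (1 - g) * u      # tied-gate cell update, Eq. (13)

# The refine gate reaches activations beyond what f alone can: with f = 0.9,
# the upper band is 1 - (1 - f)^2 = 0.99 and the lower band is f^2 = 0.81.
f = torch.tensor(0.9)
print(float(effective_gate(f, torch.tensor(1.0))), float(effective_gate(f, torch.tensor(0.0))))
```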
This paper introduces two novel techniques to help long-term signal propagation in RNNs. One is an initialization strategy that uses the inverse sigmoid function to avoid the decay of the contribution of inputs from earlier time steps, and the other is a newly designed refine gate that pushes the value of the gate closer to 0 or 1. The authors conduct exhaustive ablation and empirical studies on the copy task, sequential MNIST, language modeling and reinforcement learning.
SP:0a036636575bd445f18928c438a9fb063f11b012
Improving the Gating Mechanism of Recurrent Neural Networks
1 INTRODUCTION . Recurrent neural networks ( RNNs ) have become a standard machine learning tool for learning from sequential data . However , RNNs are prone to the vanishing gradient problem , which occurs when the gradients of the recurrent weights become vanishingly small as they get backpropagated through time ( Hochreiter et al. , 2001 ) . A common approach to alleviate the vanishing gradient problem is to use gating mechanisms , leading to models such as the long short term memory ( Hochreiter & Schmidhuber , 1997 , LSTM ) and gated recurrent units ( Chung et al. , 2014 , GRUs ) . These gated RNNs have been very successful in several different application areas such as in reinforcement learning ( Kapturowski et al. , 2018 ; Espeholt et al. , 2018 ) and natural language processing ( Bahdanau et al. , 2014 ; Kočiskỳ et al. , 2018 ) . At every time step , gated recurrent models form a weighted combination of the history summarized by the previous state , and ( a function of ) the incoming inputs , to create the next state . The values of the gates , which are the coefficients of the combination , control the length of temporal dependencies that can be addressed . This weighted update can be seen as an additive or residual connection on the recurrent state , which helps signals propagate through time without vanishing . However , the gates themselves are prone to a saturating property which can also hamper gradient-based learning . This is particularly troublesome for RNNs , where carrying information for very long time delays requires gates to be very close to their saturated states . We formulate and address two particular problems that arise with the standard gating mechanism of recurrent models . First , typical initialization of the gates is relatively concentrated . This restricts the range of timescales the model can address , as the timescale of a particular unit is dictated by its gates . Our first proposal , which we call uniform gate initialization ( Section 2.2 ) , addresses this by directly initializing the activations of these gates from a distribution that captures a wider spread of dependency lengths . Second , learning when gates are in their saturation regime is difficult because of vanishing gradients through the gates . We derive a modification that uses an auxiliary refine gate to modulate a main gate , which allows it to have a wider range of activations without gradients vanishing as quickly . Combining these two independent modifications yields our main proposal , which we call the URgating mechanism . These changes can be applied to any gate ( i.e . bounded parametrized function ) and have minimal to no overhead in terms of speed , memory , code complexity , and ( hyper- ) parameters . We apply them to the forget gate of recurrent models , and evaluate on many benchmarks including synthetic long-term dependency tasks , sequential pixel-level image classification , language modeling , program execution , and reinforcement learning . Finally , we connect our methods to other proposed gating modifications , introduce a framework that allows each component to be replaced with similar ones , and perform theoretical analysis and extensive ablations of our method . Empirically , the UR- gating mechanism robustly improves on the standard forget and input gates of gated recurrent models . 
When applied to the LSTM , these simple modifications solve synthetic memory tasks that are pathologically difficult for the standard LSTM , achieve state-of-the-art results on sequential MNIST and CIFAR-10 , and show consistent improvements in language modeling on the WikiText-103 dataset ( Merity et al. , 2016 ) and reinforcement learning tasks ( Hung et al. , 2018 ) . 2 GATED RECURRENT NEURAL NETWORKS . Broadly speaking , RNNs are used to sweep over a sequence of input data xt to produce a sequence of recurrent states ht ∈Rd summarizing information seen so far . At a high level , an RNN is just a parametrized function in which each sequential application of the network computes a state update u : ( xt , ht−1 ) 7→ht . Gating mechanisms were introduced to address the vanishing gradient problem ( Bengio et al. , 1994 ; Hochreiter et al. , 2001 ) , and have proven crucial to the success of RNNs . This mechanism essentially smooths out the update using the following equation , ht=ft ( xt , ht−1 ) ◦ht−1+it ( xt , ht−1 ) ◦u ( xt , ht−1 ) , ( 1 ) where the forget gate ft and input gate it are [ 0,1 ] d-valued functions that control how fast information is forgotten or allowed into the memory state . When the gates are tied , i.e . ft+it=1 as in GRUs , they behave as a low-pass filter , deciding the time-scale on which the unit will respond ( Tallec & Ollivier , 2018 ) . For example , large forget gate activations close to ft=1 are necessary for recurrent models to address long-term dependencies.1 We will introduce our improvements to the gating mechanism primarily in the context of the LSTM , which is the most popular recurrent model . However , these techniques can be used in any model that makes similar use of gates . ft=σ ( Lf ( xt , ht−1 ) ) ( 2 ) it=σ ( Li ( xt , ht−1 ) ) ( 3 ) ut=tanh ( Lu ( xt , ht−1 ) ) ( 4 ) ct=ft◦ct−1+it◦ut ( 5 ) ot=σ ( Lo ( xt , ht−1 ) ) ( 6 ) ht=ottanh ( ct ) ( 7 ) A typical LSTM ( equations ( 2 ) - ( 7 ) ) is an RNN whose state is represented by a tuple ( ht , ct ) consisting of a “ hidden ” state and “ cell ” state . The basic gate equation ( 1 ) is used to create the next cell state ct ( 5 ) . Note that the gate and update activations are a function of the previous hidden state ht−1 instead of ct−1 . Here , L ? stands for a parameterized linear function of its inputs with bias b ? , e.g . Lf ( xt , ht−1 ) =Wfxxt+Wfhht−1+bf , ( 8 ) and σ ( · ) refers to the standard sigmoid activation function which we will assume is used for defining [ 0,1 ] -valued activations in the rest of this paper . The gates of the LSTM were initially motivated as a binary mechanism , switching on or off to allow information and gradients to pass through . However , in reality this fails to happen due to a combination of initialization and saturation . This can be problematic , such as when very long dependencies are present . 2.1 THE UR-LSTM bf ∼σ−1 ( U [ 0,1 ] ) ( 9 ) ft=σ ( Lf ( xt , ht−1 ) ) ( 10 ) rt=σ ( Lr ( xt , ht−1 ) ) ( 11 ) gt=rt · ( 1− ( 1−ft ) 2 ) + ( 1−rt ) ·f2t ( 12 ) ct=gtct−1+ ( 1−gt ) ut ( 13 ) ot=σ ( Lo ( xt , ht−1 ) ) ( 14 ) ht=ottanh ( ct ) ( 15 ) We present two solutions which work in tandem to address the previously described issues . The first ensures a diverse range of gate values at the start of training by sampling the gate ’ s biases so that the activations will be approximately uniformly distributed at initialization . We call this Uniform Gate Initialization ( UGI ) . 
The second allows better gradient flow by reparameterizing the gate using an auxiliary “ refine ” gate . As our main application is for recurrent models , we present the full UR-LSTM model in equations ( 9 ) - ( 15 ) . However , we note that 1In this work , we use “ gate ” to alternatively refer to a [ 0,1 ] -valued function or the value ( “ activation ” ) of that function . these methods can be used to modify any gate ( or more generally , bounded function ) in any model . In this context the UR-LSTM is simply defined by applying UGI and a refine gate r on the original forget gate f to create an effective forget gate g ( equation ( 12 ) ) . This effective gate is then used in the cell state update ( 13 ) . Empirically , these small modifications to an LSTM are enough to allow it to achieve nearly binary activations and solve difficult memory problems ( Figure 4 ) . In the rest of Section 2 , we provide theoretical justifications for UGI and refine gates . 2.2 UNIFORM GATE INITIALIZATION . Standard initialization schemes for the gates can prevent the learning of long-term temporal correlations ( Tallec & Ollivier , 2018 ) . For example , supposing that a unit in the cell state has constant forget gate value ft , then the contribution of an input xt in k time steps will decay by ( ft ) k. This gives the unit an effective decay period or characteristic timescale ofO ( 11−ft ) . 2 Standard initialization of linear layersL sets the bias term to 0 , which causes the forget gate values ( 2 ) to concentrate around 1/2 . A common trick of setting the forget gate bias to bf =1.0 ( Jozefowicz et al. , 2015 ) does increase the value of the decay period to 11−σ ( 1.0 ) ≈3.7 . However , this is still relatively small , and moreover fixed , hindering the model from easily learning dependencies at varying timescales . We instead propose to directly control the distribution of forget gates , and hence the corresponding distribution of decay periods . In particular , we propose to simply initialize the value of the forget gate activations ft according to a uniform distribution U ( 0,1 ) , as described in Section 2.1 . An important difference between UGI and standard or other ( e.g . Tallec & Ollivier , 2018 ) initializations is that negative forget biases are allowed . The effect of UGI is that all timescales are covered , from units with very high forget activations remembering information ( nearly ) indefinitely , to those with low activations focusing solely on the incoming input . Additionally , it introduces no additional parameters ; it even can have less hyperparameters than the standard gate initialization , which sometimes tunes the forget bias bf . Appendix B.2 and B.3 further discuss the theoretical effects of UGI on timescales . 2.3 THE REFINE GATE . Given a gate f=σ ( Lf ( x ) ) ∈ [ 0,1 ] , the refine gate is an independent gate r=σ ( Lr ( x ) ) , and modulates f to produce a value g∈ [ 0,1 ] that will be used in place of f downstream . It is motivated by considering how to modify the output of a gate f in a way that promotes gradient-based learning , derived below . An additive modification The root of the saturation problem is that the gradient∇f of a gate , which can be written solely as a function of the activation value as f ( 1−f ) , decays rapidly as f approaches 0 or 1 . Thus when the activation f is past a certain upper or lower threshold , learning effectively stops . 
This problem can not be fully addressed only by modifying the input to the sigmoid , as in UGI and other techniques , as the gradient will still vanish by backpropagating through the activation function . Therefore to better control activations near the saturating regime , instead of changing the input to the sigmoid in f = σ ( L ( x ) ) , we consider modifying the output . In particular , we consider adjusting f with an input-dependent update φ ( f , x ) for some function φ , to create an effective gate g=f+φ ( f , x ) that will be used in place of f downstream such as in the main state update ( 1 ) . This sort of additive ( “ residual ” ) connection is a common technique to increase gradient flow , and indeed was the motivation of the LSTM additive gated update ( 1 ) itself ( Hochreiter & Schmidhuber , 1997 ) . Choosing the adjustment function Although many choices seem plausible for selecting the additive update φ , we reason backwards from necessary properties of the effective activation g to deduce a principled function φ . The refine gate will appear as a result . First , note that ft might need to be increased or decreased , regardless of what its value is . For example , given a large activation ft near saturation , it may need to be even higher to address long-term dependencies in recurrent models ; alternatively , if it is too high by initialization or needs to unlearn previous behavior , it may need to decrease . Therefore , the additive update to f should create an effective activation gt in the range ft±α for some α . Note that the allowed adjustment range α=α ( ft ) needs to be a function of f in order to keep g∈ [ 0,1 ] . 2This corresponds to the number of timesteps it takes to decay by 1/e . In particular , the additive adjustment range α ( f ) should satisfy the following natural properties : Validity : α ( f ) ≤min ( f,1−f ) , to ensure g∈f±α ( f ) ∈ [ 0,1 ] . Symmetry : Since 0 and 1 are completely symmetrical in the gating framework , α ( f ) =α ( 1−f ) . Differentiability : α ( f ) will be used in backpropagation , requiring α∈C1 ( R ) . Figure 2a illustrates the general appearance of α ( f ) based on these properties . In particular , Validity implies that that its derivative satisfies α′ ( 0 ) ≤ 1 and α′ ( 1 ) ≥ −1 , Symmetry implies α′ ( f ) = −α′ ( 1−f ) , and Differentiability implies α′ is continuous . The simplest such function satisfying these is the linear α′ ( f ) =1−2f , yielding α ( f ) =f−f2=f ( 1−f ) . Given such a α ( f ) , recall that the goal is to produce an effective activation g=f+φ ( f , x ) such that g∈f±α ( f ) ( Figure 2b ) . Our final observation is that the simplest such function φ satisfying this is φ ( f , x ) =α ( f ) ψ ( f , x ) for some ψ ( · ) ∈ [ −1,1 ] . Using the standard method for defining [ −1,1 ] -valued functions via a tanh non-linearity leads to φ ( f , x ) =α ( f ) ( 2r−1 ) for another gate r=σ ( L ( x ) ) . The full update is given in Equation ( 16 ) , g=f+α ( f ) ( 2r−1 ) =f+f ( 1−f ) ( 2r−1 ) = ( 1−r ) ·f2+r· ( 1− ( 1−f ) 2 ) ( 16 ) Equation ( 16 ) has the elegant interpretation that the gate r linearly interpolates between the lower band f−α ( f ) =f2 and the symmetric upper band f+α ( f ) =1− ( 1−f ) 2 ( Figure 2b ) . In other words , the original gate f is the coarse-grained determinant of the effective gate g , while the gate r “ refines ” it . This allows the effective gate g to reach much higher and lower activations than the constituent gates f and r ( Figure 2c ) , bypassing the saturating gradient problem . 
For example , this allows the effective forget gate to reach g=0.99 when the forget gate is only f=0.9 .
This paper proposes to improve the learnability of the gating mechanism in RNNs via two modifications to the standard RNN structure: uniform gate initialization and a refine gate. The authors give some propositions to show that the refine gate can maintain an effective forgetting behavior over a larger range of timescales. The authors conduct experiments on four different tasks and compare the proposed modifications with baseline methods.
SP:0a036636575bd445f18928c438a9fb063f11b012
Regularizing Deep Multi-Task Networks using Orthogonal Gradients
1 INTRODUCTION . Deep neural networks have proven to be very successful at solving isolated tasks in a variety of fields ranging from computer vision to NLP . In contrast to this single-task setup , multi-task learning aims to train one model on several problems simultaneously . This approach would incentivize it to transfer knowledge between tasks and obtain multi-purpose representations that are less likely to overfit to an individual problem . Apart from potentially achieving better overall performance ( Caruana , 1997 ) , using a multi-task approach offers the additional benefit of being more efficient in memory usage and inference speed than training several single-task models ( Teichmann et al. , 2018 ) . A popular design for deep multi-task networks involves hard parameter sharing ( Ruder , 2017 ) , where a model contains a common encoder , which is shared across all tasks , and several problem-specific decoders . Given a single input , each of the decoders is then trained for a distinct task using a different objective function and evaluation metric . This approach allows the network to learn multi-purpose representations through the shared encoder , which every decoder will then use differently according to the requirements of its task . Although this architecture has been successfully applied to multi-task learning ( Kendall et al. , 2018 ; Chen et al. , 2017 ) , it also faces some challenges . From an architectural point of view , it is unclear how to choose the task-specific network capacity ( Vandenhende et al. , 2019 ; Misra et al. , 2016 ) as well as the complexity of representations to share between tasks . Additionally , optimizing multiple objectives simultaneously introduces difficulties based on the nature of those tasks and the way their gradients interact with each other ( Sener & Koltun , 2018 ) . The dissimilarity between tasks could cause negative transfer of knowledge ( Long et al. , 2017 ; Zhao et al. , 2018 ; Zamir et al. , 2018 ) or having task losses of different magnitudes might bias the network in favor of a subset of tasks ( Chen et al. , 2017 ; Kendall et al. , 2018 ) . It becomes clear that the overall success of multi-task learning is reliant on managing the interaction between tasks , and implicitly their gradients with respect to the shared parameters of the model . This work focuses on the second category of challenges facing networks that employ hard parameter sharing , namely the interaction between tasks when being jointly optimized . We concentrate on reducing task interference by regularizing the angle between gradients rather than their magnitudes . Based on our empirical findings , unregularized multi-task networks have high variation in the angles between task gradients , meaning gradients frequently point in similar or opposite directions . Additionally , well-performing models share the property that their distribution of cosines between task gradients is zero-centered and low in variance . Nearly orthogonal gradients will reduce task competition as individual task decoders learn to use different features of the encoder , thus not interfering with each other . Furthermore , we discover that popular regularization methods such as Dropout ( Srivastava et al. , 2014 ) and Batchnorm ( Ioffe & Szegedy , 2015 ) implicitly orthogonalize the task gradients .
We propose a new gradient regularization term to the multi-task objective that explicitly minimizes the squared cosine between task gradients and show that our method obtains competitive results on the NYUv2 ( Nathan Silberman & Fergus , 2012 ) and SUN RGB-D ( Song et al. , 2015 ) datasets . 2 RELATED WORK . Multi-task learning is a sub-field of transfer learning ( Pan & Yang , 2009 ) and encompasses a variety of methods ( Caruana , 1997 ) . The recent focus on deep multi-task learning can be attributed to the neural network ’ s unparalleled success in computer vision ( Krizhevsky et al. , 2012 ; Simonyan & Zisserman , 2014 ; He et al. , 2016 ) and its capability to create hierarchical , multi-purpose representations ( Bengio et al. , 2013 ; Yosinski et al. , 2014 ) . Deep multi-task learning is commonly divided into hard or soft parameter sharing methods ( Caruana , 1997 ; Ruder , 2017 ) . Soft parameter sharing maintains separate models for each task but enforces constraints on the joint parameter set ( Yang & Hospedales , 2016 ) . In this work we focus solely on hard parameter sharing methods , which maintain a common encoder for all tasks but also contain task-specific decoders that use the learned generic representations . We further split deep multi-task approaches into architecture and loss focused methods . Architecture based methods aim at finding a network structure that allows optimal knowledge sharing between tasks by balancing the capacities of the shared encoder and the task specific decoders . Most multitask related work chooses the architecture on an ad hoc basis ( Teichmann et al. , 2018 ; Neven et al. , 2017 ) , but recent research looks to answer the question of how much and where to optimally share knowledge . Cross-stitch networks maintain separate models for all tasks but allow communication between arbitrary layers through specialized cross-stitch units ( Misra et al. , 2016 ) . Branched multitask networks allow for the decoders to also be shared by computing a task affinity matrix that indicates the usefulness of features at arbitrary depths and for different problems ( Vandenhende et al. , 2019 ) . Liu et al . ( 2019b ) introduces attention modules allowing task specific networks to learn which features from the shared feature network to use at distinct layers . Loss focused methods try to balance the impact of individual tasks on the training of the network by adaptively weighting the task specific losses and gradients . Certain tasks might have a disproportionate impact on the joint objective function forcing the shared encoder to be optimized entirely for a subset of problems , effectively starving other tasks of resources . Kendall et al . ( 2018 ) devise a weighting method dependent on the homoscedastic uncertainty inherently linked to each task while Chen et al . ( 2017 ) reduce the task imbalances by weighting task losses such that their gradients are similar in magnitude . Sener & Koltun ( 2018 ) cast multi-task learning as a multi-objective optimization problem and aim to find a Pareto optimal solution . They also analyze gradients but with the goal to then scale these such that their convex combination will satisfy the necessary conditions to reach the desired solution . In contrast to these approaches our method does not seek to scale gradients , neither directly nor via task weights , but conditions the optimization trajectory towards solutions that have orthogonal task gradients . 
The problem of conflicting gradients or task interference has been previously explored in multi-task learning as well as continual learning . Zhao et al . ( 2018 ) introduce a modulation module that reduces destructive gradient interference between tasks that are unrelated . Du et al . ( 2018 ) choose to ignore the gradients of auxiliary tasks if they are not sharing a similar direction with the main task . Riemer et al . ( 2018 ) maximize the dot product between task gradients in order to overcome catastrophic forgetting . These methods have in common the interpretation that two tasks are in conflict if the cosine between their gradients is negative , while alignment should be encouraged . Our work differs from this perspective by additionally penalizing task gradients that have a similar direction , arguing that by decorrelating updates the shared encoder is able to maximize its representational capacity . A similar observation regarding orthogonal parameters is made by Rodríguez et al . ( 2016 ) , who propose a weight regularization term for single-task learning that decorrelates filters in convolutional neural networks . Another way to minimize interference is to encourage sparsity ( French , 1991 ; Javed & White , 2019 ) , although doing so directly might interfere with the network ’ s ability to learn shared representations ( Ruder , 2017 ) . Finally , our work is in line with recent research ( Liu et al. , 2019a ; Santurkar et al. , 2018 ) that emphasizes the benefit of analyzing gradients to understand neural networks and devise potential improvements to their training . We share elements with Drucker & Le Cun ( 1991 ) and more recently Varga et al . ( 2017 ) in that we propose explicit regularization methods for gradients . 3 ORTHOGONAL TASK GRADIENTS . In this work we present a novel gradient-based regularization term that orthogonalizes the interaction between multiple tasks . We define a multi-task neural network as a shared encoder fθsh and a set of task-specific decoders fθti , one for each of the T tasks T = { t1 , ... , tT } . The encoder creates a mapping between the input space X and a latent feature space R^d that is used by each of the decoders to predict the task-specific labels Yti . Each of the inputs in X is associated with a set of labels for the tasks in T , forming the dataset D = { ( xi , yi^t1 , ... , yi^tT ) } , i = 1 , ... , N , of N observations . For task t ∈ T we define the empirical loss as Lt ≜ ( 1 / N ) ∑ i Lt ( fθt ( fθsh ( xi ) ) , yi^t ) . The multi-task objective can then be constructed as a convex combination of individual task losses using the weights wt ∈ R : LT = ∑ t∈T wt Lt ( 1 ) . Using gradient descent to minimize the multi-task loss in Equation 1 , we obtain the following update rule for the parameters θsh : θsh ← θsh − γ ∑ t∈T wt ∂Lt / ∂θsh ( 2 ) . It becomes clear that the overall success of a multi-task network is dependent on the individual task gradients and their relationship to each other . Task gradients might cancel each other out , or a certain task might dominate the direction of the encoder ’ s parameters . We further examine the interaction between two tasks ti and tj by looking at the cosine of their gradients with respect to the encoder : cos ( ti , tj ) = cos ( ∂Lti / ∂θsh , ∂Ltj / ∂θsh ) ( 3 ) . Previous work argues that negative transfer , task interference or competition ( Du et al. , 2018 ; Sener & Koltun , 2018 ; Zhao et al. , 2018 ) happens when this cosine is negative , leading to tasks with smaller gradient magnitudes in fact increasing their error during training .
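The cosine of equation ( 3 ) can be measured directly during training. Below is a minimal PyTorch sketch; the helper names, and the assumption that both task losses come from the same forward pass through the shared encoder, are illustrative.

```python
# Minimal sketch of the gradient cosine of Eq. (3) between two task losses,
# taken with respect to the shared encoder parameters.
import torch
import torch.nn.functional as F

def flat_task_grad(loss, shared_params):
    """Flattened gradient of one task loss w.r.t. the shared parameters."""
    params = list(shared_params)
    grads = torch.autograd.grad(loss, params, retain_graph=True)
    return torch.cat([g.reshape(-1) for g in grads])

def grad_cosine(loss_i, loss_j, shared_params):
    params = list(shared_params)
    g_i = flat_task_grad(loss_i, params)
    g_j = flat_task_grad(loss_j, params)
    return F.cosine_similarity(g_i, g_j, dim=0)   # scalar in [-1, 1]
```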
The interference between tasks lies in the competition for resources in the shared encoder fθsh . Based on empirical observations , we argue that multi-task networks not only benefit when the cosine is non-negative but more so when task gradients are close to orthogonal . In a continual learning setting , it makes sense for task gradients to be as aligned as possible in order to avoid catastrophic forgetting ( Riemer et al. , 2018 ) . In a multi-task setting , however , it is unclear whether maximizing the transfer between tasks also leads to a superior solution for all objectives , especially since there is no risk of forgetting . In our experiments , we observe a higher performance when orthogonalizing correlated tasks on the SUN RGB-D dataset . Standley et al . ( 2019 ) make a similar finding that learning related tasks does not necessarily improve the multi-task optimization . In our approach , we minimize the squared cosine during training , which diminishes competition , as each task will be able to optimize different parameters of the encoder . By also orthogonalizing positive transfer , the encoder will produce a richer feature space and multi-purpose representations . To minimize the cosine between two task gradients , we simply add the squared cosine to the multi-task objective function from Equation 1 with an additional hyper-parameter α ∈ R to adjust the penalty weight : Lti,tj = wti Lti + wtj Ltj + α cos^2 ( ti , tj ) ( 4 ) . We can generalize Equation 4 to T tasks by taking the squared Frobenius norm of the cosine distance matrix between gradients . We define ∇θsh as the matrix whose columns are the unit-normalized partial derivatives of the task losses with respect to θsh . The distance matrix for T tasks ( cos^2 ( ti , tj ) ) 1≤i,j≤T can then be efficiently computed by taking the outer product of ∇θsh with itself . Subsequently , we subtract the constant distance between identical tasks and normalize by accounting for the matrix symmetry as well as the number of task pairs in order to ensure the same bounds for the penalty term : ∇θsh = ( ∂Lt1 / ∂θsh , ... , ∂LtT / ∂θsh ) with each column normalized to unit length , and LT = ∑ t∈T wt Lt + ( α / ( T ( T − 1 ) ) ) ‖ ∇θshᵀ ∇θsh − IT ‖_F^2 ( 5 ) . The above equation generalizes the gradient regularization term for T tasks and maintains its range within [ 0 , 1 ] . Even though the wt are hyper-parameters , our focus is solely on the cosine regularization , and we therefore treat them as constants . In all the experiments we add our penalty term to the naive approach of having all tasks equally weighted . Computing the regularization term for each layer in the shared encoder is computationally prohibitive , so in practice we restrict ourselves to computing the loss with respect to only the last layer of the encoder . Finally , we will refer to Φ ( ti , tj ) as the distribution of cosines between the gradients of ti and tj throughout training , having mean µ ( ti , tj ) and standard deviation σ ( ti , tj ) . Our gradient regularization method minimizes σ ( ti , tj ) , which will be empirically shown later on .
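A minimal PyTorch sketch of the regularized objective of equation ( 5 ), restricted to the last shared layer as described above: the unit-normalized task gradients are stacked, their Gram matrix is compared to the identity, and the squared Frobenius norm is scaled by α / ( T ( T − 1 ) ). Function names, the normalization epsilon, and the default α are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the squared-cosine gradient penalty of Eq. (5).
import torch

def orthogonal_grad_penalty(task_losses, last_layer_params, alpha=0.1, eps=1e-12):
    params = list(last_layer_params)
    grads = []
    for loss in task_losses:
        g = torch.autograd.grad(loss, params, retain_graph=True, create_graph=True)
        g = torch.cat([p.reshape(-1) for p in g])
        grads.append(g / (g.norm() + eps))           # unit-normalize each task gradient
    G = torch.stack(grads)                           # shape (T, d)
    T = G.shape[0]
    gram = G @ G.t()                                 # pairwise cosines, cf. Eq. (3)
    penalty = (gram - torch.eye(T, device=G.device)).pow(2).sum()
    return alpha / (T * (T - 1)) * penalty

def regularized_multitask_loss(task_losses, last_layer_params, alpha=0.1):
    # Equally weighted task losses plus the squared-cosine penalty of Eq. (5).
    return sum(task_losses) + orthogonal_grad_penalty(task_losses, last_layer_params, alpha)
```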
This paper embraces the idea that better multi-task/lifelong learning can be achieved if tasks produce gradients that are orthogonal to the gradients produced by other tasks. The authors propose an approach to regularizing learning in order to incentivize this to happen. However, as they mention themselves, the regularized loss is computationally intractable in general and they only apply it to a subset of their network as a result. Given the computational scalability concerns, it is natural to wonder why researchers in the community would adopt this approach rather than other approaches that also aim to make gradients orthogonal.
SP:3aa14d5bb77c3cdf165b832dcc81f8b7867cefe6
Regularizing Deep Multi-Task Networks using Orthogonal Gradients
1 INTRODUCTION . Deep neural networks have proven to be very successful at solving isolated tasks in a variety of fields ranging from computer vision to NLP . In contrast to this single task setup , multi-task learning aims to train one model on several problems simultaneously . This approach would incentivize it to transfer knowledge between tasks and obtain multi-purpose representations that are less likely to overfit to an individual problem . Apart from potentially achieving better overall performance ( Caruana , 1997 ) , using a multi-task approach offers the additional benefit of being more efficient in memory usage and inference speed than training several single-task models ( Teichmann et al. , 2018 ) . A popular design for deep multi-task networks involves hard parameter sharing ( Ruder , 2017 ) , where a model contains a common encoder , which is shared across all tasks and several problem specific decoders . Given a single input each of the decoders is then trained for a distinct task using a different objective function and evaluation metric . This approach allows the network to learn multi-purpose representations through the shared encoder which every decoder will then use differently according to the requirements of its task . Although this architecture has been successfully applied to multi-task learning ( Kendall et al. , 2018 ; Chen et al. , 2017 ) it also faces some challenges . From an architectural point of view it is unclear how to choose the task specific network capacity ( Vandenhende et al. , 2019 ; Misra et al. , 2016 ) as well as the complexity of representations to share between tasks . Additionally , optimizing multiple objectives simultaneously introduces difficulties based on the nature of those tasks and the way their gradients interact with each other ( Sener & Koltun , 2018 ) . The dissimilarity between tasks could cause negative transfer of knowledge ( Long et al. , 2017 ; Zhao et al. , 2018 ; Zamir et al. , 2018 ) or having task losses of different magnitudes might bias the network in favor of a subset of tasks ( Chen et al. , 2017 ; Kendall et al. , 2018 ) . It becomes clear that the overall success of multi-task learning is reliant on managing the interaction between tasks , and implicitly their gradients with respect to the shared parameters of the model . This work focuses on the second category of challenges facing networks that employ hard parameter sharing , namely the interaction between tasks when being jointly optimized . We concentrate on reducing task interference by regularizing the angle between gradients rather than their magni- tudes . Based on our empirical findings unregularized multi-task networks have high variation in the angles between task gradients , meaning gradients frequently point in similar or opposite directions . Additionally , well-performing models share the property that their distribution of cosines between task gradients is zero-centered and low in variance . Nearly orthogonal gradients will reduce task competition as individual task decoders learn to use different features of the encoder , thus not interfering with each other . Furthermore , we discover that popular regularization methods such as Dropout ( Srivastava et al. , 2014 ) and Batchnorm ( Ioffe & Szegedy , 2015 ) implicitly orthogonalize the task gradients . 
We propose a new gradient regularization term to the multi-task objective that explicitly minimizes the squared cosine between task gradients and show that our method obtains competitive results on the NYUv2 ( Nathan Silberman & Fergus , 2012 ) and SUN RGB-D ( Song et al. , 2015 ) datasets . 2 RELATED WORK . Multi-task learning is a sub-field of transfer learning ( Pan & Yang , 2009 ) and encompasses a variety of methods ( Caruana , 1997 ) . The recent focus on deep multi-task learning can be attributed to the neural network ’ s unparalleled success in computer vision ( Krizhevsky et al. , 2012 ; Simonyan & Zisserman , 2014 ; He et al. , 2016 ) and its capability to create hierarchical , multi-purpose representations ( Bengio et al. , 2013 ; Yosinski et al. , 2014 ) . Deep multi-task learning is commonly divided into hard or soft parameter sharing methods ( Caruana , 1997 ; Ruder , 2017 ) . Soft parameter sharing maintains separate models for each task but enforces constraints on the joint parameter set ( Yang & Hospedales , 2016 ) . In this work we focus solely on hard parameter sharing methods , which maintain a common encoder for all tasks but also contain task-specific decoders that use the learned generic representations . We further split deep multi-task approaches into architecture and loss focused methods . Architecture based methods aim at finding a network structure that allows optimal knowledge sharing between tasks by balancing the capacities of the shared encoder and the task specific decoders . Most multitask related work chooses the architecture on an ad hoc basis ( Teichmann et al. , 2018 ; Neven et al. , 2017 ) , but recent research looks to answer the question of how much and where to optimally share knowledge . Cross-stitch networks maintain separate models for all tasks but allow communication between arbitrary layers through specialized cross-stitch units ( Misra et al. , 2016 ) . Branched multitask networks allow for the decoders to also be shared by computing a task affinity matrix that indicates the usefulness of features at arbitrary depths and for different problems ( Vandenhende et al. , 2019 ) . Liu et al . ( 2019b ) introduces attention modules allowing task specific networks to learn which features from the shared feature network to use at distinct layers . Loss focused methods try to balance the impact of individual tasks on the training of the network by adaptively weighting the task specific losses and gradients . Certain tasks might have a disproportionate impact on the joint objective function forcing the shared encoder to be optimized entirely for a subset of problems , effectively starving other tasks of resources . Kendall et al . ( 2018 ) devise a weighting method dependent on the homoscedastic uncertainty inherently linked to each task while Chen et al . ( 2017 ) reduce the task imbalances by weighting task losses such that their gradients are similar in magnitude . Sener & Koltun ( 2018 ) cast multi-task learning as a multi-objective optimization problem and aim to find a Pareto optimal solution . They also analyze gradients but with the goal to then scale these such that their convex combination will satisfy the necessary conditions to reach the desired solution . In contrast to these approaches our method does not seek to scale gradients , neither directly nor via task weights , but conditions the optimization trajectory towards solutions that have orthogonal task gradients . 
In this paper, the authors analyze gradient regularization in deep multi-task learning. They empirically discover that a sharper concentration (lower variance) of the distribution of angles between task gradients could potentially improve multi-task performance. They then propose a new gradient regularizer that encourages the gradients of different tasks to be orthogonal. Empirical results (on multi-digit MNIST and NYUv2 datasets) indicate a marginal improvement compared with the baselines.
SP:3aa14d5bb77c3cdf165b832dcc81f8b7867cefe6
Smoothness and Stability in GANs
1 INTRODUCTION: TAMING INSTABILITY WITH SMOOTHNESS. Generative adversarial networks (Goodfellow et al., 2014), or GANs, are a powerful class of generative models defined through a minimax game. GANs and their variants have shown impressive performance in synthesizing various types of datasets, especially natural images. Despite these successes, the training of GANs remains quite unstable, and this instability is difficult to understand theoretically. Since the introduction of GANs, many techniques have been proposed to stabilize GAN training, including studies of new generator/discriminator architectures, loss functions, and regularization techniques. Notably, Arjovsky et al. (2017) proposed the Wasserstein GAN (WGAN), which in principle avoids the instability caused by mismatched generator and data distribution supports. In practice, this is enforced by Lipschitz constraints, which in turn motivated developments like gradient penalties (Gulrajani et al., 2017) and spectral normalization (Miyato et al., 2018). Indeed, these stabilization techniques have proven essential to achieving the latest state-of-the-art results (Karras et al., 2018; Brock et al., 2019). On the other hand, a solid theoretical understanding of training stability has not been established. Several empirical observations point to an incomplete understanding. For example, why does applying a gradient penalty together with spectral normalization seem to improve performance (Miyato et al., 2018), even though in principle they serve the same purpose? Why does applying only spectral normalization with the Wasserstein loss fail (Miyato, 2018), even though the analysis of Arjovsky et al. (2017) suggests it should be sufficient? Why is applying gradient penalties effective even outside their original context of the Wasserstein GAN (Fedus et al., 2018)? In this work, we develop a framework for analyzing the stability of GAN training that resolves these apparent contradictions and clarifies the roles of these regularization techniques. Our approach considers the smoothness of the loss function used. In optimization, smoothness is a well-known condition that ensures that gradient descent and its variants are stable (see e.g. Bertsekas (1999)). For example, the following well-known proposition is the starting point of our stability analysis: Proposition 1 (Bertsekas (1999), Proposition 1.2.3). Suppose $f : \mathbb{R}^p \to \mathbb{R}$ is $L$-smooth and bounded below. Let $x_{k+1} := x_k - \frac{1}{L} \nabla f(x_k)$. Then $\|\nabla f(x_k)\| \to 0$ as $k \to \infty$. This proposition says that under a smoothness condition on the function, gradient descent with a constant step size $1/L$ approaches stationarity (i.e., the gradient norm approaches zero). This is a rather weak notion of convergence, as it does not guarantee that the iterates converge to a point, and even if the iterates do converge, the limit is a stationary point and not necessarily a minimizer. Nevertheless, empirically, not even this stationarity is satisfied by GANs, which are known to frequently destabilize and diverge during training. To diagnose this instability, we consider the smoothness of the GAN's loss function. GANs are typically framed as minimax problems of the form $$\inf_{\theta} \sup_{\varphi} J(\mu_\theta, \varphi), \qquad (1)$$ where $J$ is a loss function that takes a generator distribution $\mu_\theta$ and a discriminator $\varphi$, and $\theta \in \mathbb{R}^p$ denotes the parameters of the generator.
Unfortunately , the minimax nature of this problem makes stability and convergence difficult to analyze . To make the analysis more tractable , we define J ( µ ) = supϕ J ( µ , ϕ ) , so that ( 1 ) becomes simply inf θ J ( µθ ) . ( 2 ) This choice corresponds to the common assumption that the discriminator is allowed to reach optimality at every training step . Now , the GAN algorithm can be regarded as simply gradient descent on the Rp → R function θ 7→ J ( µθ ) , which may be analyzed using Proposition 1 . In particular , if this function θ 7→ J ( µθ ) satisfies the smoothness assumption , then the GAN training should be stable in that it should approach stationarity under the assumption of an optimal discriminator . In the remainder of this paper , we investigate whether the smoothness assumption is satisfied for various GAN losses . Our analysis answers two questions : Q1 . Which existing GAN losses , if any , satisfy the smoothness condition in Proposition 1 ? Q2 . Are there choices of loss , regularization , or architecture that enforce smoothness in GANs ? As results of our analysis , our contributions are as follows : 1 . We derive sufficient conditions for the GAN algorithm to be stationary under certain assumptions ( Theorem 1 ) . Our conditions relate to the smoothness of GAN loss used as well as the parameterization of the generator . 2 . We show that most common GAN losses do not satisfy the all of the smoothness conditions , thereby corroborating their empirical instability . 3 . We develop regularization techniques that enforce the smoothness conditions . These regularizers recover common GAN stabilization techniques such as gradient penalties and spectral normalization , thereby placing their use on a firmer theoretical foundation . 4 . Our analysis provides several practical insights , suggesting for example the use of smooth activation functions , simultaneous spectral normalization and gradient penalties , and a particular learning rate for the generator . 1.1 RELATED WORK . Divergence minimization Our analysis regards the GAN algorithm as minimizing a divergence between the current generator distribution and the desired data distribution , under the assumption of an optimal discriminator at every training step . This perspective originates from the earliest GAN paper , in which Goodfellow et al . ( 2014 ) show that the original minimax GAN implicitly minimizes the Jensen–Shannon divergence . Since then , the community has introduced a large number of GAN or GAN-like variants that learn generative models by implicitly minimizing various divergences , including f -divergences ( Nowozin et al. , 2016 ) , Wasserstein distance ( Arjovsky et al. , 2017 ) , and maximum-mean discrepancy ( Li et al. , 2015 ; Unterthiner et al. , 2018 ) . Meanwhile , the non-saturating GAN ( Goodfellow et al. , 2014 ) has been shown to minimize a certain Kullback– Leibler divergence ( Arjovsky & Bottou , 2017 ) . Several more theoretical works consider the topological , geometric , and convexity properties of divergence minimization ( Arjovsky & Bottou , 2017 ; Liu et al. , 2017 ; Bottou et al. , 2018 ; Farnia & Tse , 2018 ; Chu et al. , 2019 ) , perspectives that we draw heavily upon . Sanjabi et al . ( 2018 ) also prove smoothness of GAN losses in the specific case of the regularized optimal transport loss . Their assumption for smoothness is entangled in that it involves a composite condition on generators and discriminators , while our analysis addresses them separately . 
Other approaches Even though many analyses , including ours , operate under the assumption of an optimal discriminator , this assumption is unrealistic in practice . Li et al . ( 2017b ) contrast this optimal discriminator dynamics with first-order dynamics , which assumes that the generator and discriminator use alternating gradient updates and is what is used computationally . As this is a differing approach from ours , we only briefly mention some results in this area , which typically rely on game-theoretic notions ( Kodali et al. , 2017 ; Grnarova et al. , 2018 ; Oliehoek et al. , 2018 ) or local analysis ( Nagarajan & Kolter , 2017 ; Mescheder et al. , 2018 ) . Some of these results rely on continuous dynamics approximations of gradient updates ; in contrast , our work focuses on discrete dynamics . 1.2 NOTATION . Let R̄ : = R ∪ { ∞ , −∞ } . We let P ( X ) denote the set of all probability measures on a compact set X ⊆ Rd . We letM ( X ) and C ( X ) denote the dual pair consisting of the set of all finite signed measures onX and the set of all continuous functionsX → R. For any statementA , we let χ { A } be 0 if A is true and∞ if A is false . For a Euclidean vector x , its Euclidean norm is denoted by ‖x‖2 , and the operator norm of a matrix A is denoted by ‖A‖2 , i.e. , ‖A‖2 = sup‖x‖2≤1 ‖Ax‖2/‖x‖2 . A function f : X → Y between two metric spaces is α-Lipschitz if dY ( f ( x1 ) , f ( x2 ) ) ≤ αdX ( x1 , x2 ) . A function f : Rd → R is β-smooth if its gradients are β-Lipschitz , that is , for all x , y ∈ Rd , ‖∇f ( x ) −∇f ( y ) ‖2 ≤ β‖x− y‖2 . 2 SMOOTHNESS OF GAN LOSSES . This section presents Theorem 1 , which provides concise criteria for the smoothness of GAN losses . In order to keep our analysis agnostic to the particular GAN used , let J : P ( X ) → R̄ be an arbitrary convex loss function , which takes a distribution over X ⊂ Rd and outputs a real number . Note that the typical minimax formulation of GANs can be recovered from just the loss function J using convex duality . In particular , recall that the convex conjugate J ? : C ( X ) → R̄ of J satisfies the following remarkable duality , known as the Fenchel–Moreau theorem : J ? ( ϕ ) : = sup µ∈M ( X ) ∫ ϕ ( x ) dµ− J ( µ ) , J ( µ ) = sup ϕ∈C ( X ) ∫ ϕ ( x ) dµ− J ? ( ϕ ) . ( 3 ) Based on this duality , minimizing J can be framed as the minimax problem inf µ∈P ( X ) J ( µ ) = inf µ∈P ( X ) sup ϕ∈C ( X ) ∫ ϕ ( x ) dµ− J ? ( ϕ ) : = inf µ∈P ( X ) sup ϕ∈C ( X ) J ( µ , ϕ ) , ( 4 ) recovering the well-known adversarial formulation of GANs . We now define the notion of an optimal discriminator for an arbitrary loss function J , based on this convex duality : Definition 1 ( Optimal discriminator ) . Let J : M ( X ) → R̄ be a convex , l.s.c. , proper function . An optimal discriminator for a probability distribution µ ∈ P ( X ) is a continuous function Φµ : X → R that attains the maximum of the second equation in ( 3 ) , i.e. , J ( µ ) = ∫ Φµ ( x ) dµ− J ? ( Φµ ) . This definition recovers the optimal discriminators of many existing GAN and GAN-like algorithms ( Farnia & Tse , 2018 ; Chu et al. , 2019 ) , most notably those in Table 1 . Our analysis will apply to any algorithm in this family of algorithms . See Appendix B for more details on this perspective . We also formalize the notion of a family of generators : Definition 2 ( Family of generators ) . 
A family of generators is a set of pushforward probability measures { µθ = fθ # ω : θ ∈ Rp } , where ω is a fixed probability distribution on Z ( the latent variable ) and fθ : Z → X is a measurable function ( the generator ) . Now , in light of Proposition 1 , we are interested in the smoothness of the mapping θ 7→ J ( µθ ) , which would guarantee the stationarity of gradient descent on this objective , which in turn implies stationarity of the GAN algorithm under the assumption of an optimal discriminator . The following theorem is our central result , which decomposes the smoothness of θ 7→ J ( µθ ) into conditions on optimal discriminators and the family of generators . Theorem 1 ( Smoothness decomposition for GANs ) . Let J : M ( X ) → R̄ be a convex function whose optimal discriminators Φµ : X → R satisfy the following regularity conditions : ( D1 ) x 7→ Φµ ( x ) is α-Lipschitz , ( D2 ) x 7→ ∇xΦµ ( x ) is β1-Lipschitz , ( D3 ) µ 7→ ∇xΦµ ( x ) is β2-Lipschitz w.r.t . the 1-Wasserstein distance . Also , let µθ = fθ # ω be a family of generators that satisfies : ( G1 ) θ 7→ fθ ( z ) is A-Lipschitz in expectation for z ∼ ω , i.e. , Ez∼ω [ ‖fθ1 ( z ) − fθ2 ( z ) ‖2 ] ≤ A‖θ1 − θ2‖2 , and ( G2 ) θ 7→ Dθfθ ( z ) is B-Lipschitz in expectation for z ∼ ω , i.e. , Ez∼ω [ ‖Dθ1fθ1 ( z ) − Dθ2fθ2 ( z ) ‖2 ] ≤ B‖θ1 − θ2‖2 . Then θ 7→ J ( µθ ) is L-smooth , with L = αB +A2 ( β1 + β2 ) . Theorem 1 connects the smoothness properties of the loss function J with the smoothness properties of the optimal discriminator Φµ , and once paired with Proposition 1 , it suggests a quantitative value 1 L for a stable generator learning rate . In order to obtain claims of stability for practically sized learning rates , it is important to tightly bound the relevant constants . In Sections 4 to 6 , we carefully analyze which GAN losses satisfy ( D1 ) , ( D2 ) , and ( D3 ) , and with what constants . We summarize our results in Table 2 : it turns out that none of the listed losses , except for one , satisfy ( D1 ) , ( D2 ) , and ( D3 ) simultaneously with a finite constant . The MMD-based loss satisfies the three conditions , but its constant for ( D1 ) grows as α = O ( √ d ) , which is an unfavorable dependence on the data dimension d that forces an unacceptably small learning rate . See for complete details of each condition . This failure of existing GANs to satisfy the stationarity conditions corroborates the observed instability of GANs . Theorem 1 decomposes smoothness into conditions on the generator and conditions on the discriminator , allowing a clean separation of concerns . In this paper , we focus on the discriminator conditions ( D1 ) , ( D2 ) , and ( D3 ) and only provide an extremely simple example of a generator that satisfies ( G1 ) and ( G2 ) , in Section 7 . Because analysis of the generator conditions may become quite complicated and will vary with the choice of architecture considered ( feedforward , convolutional , ResNet , etc . ) , we leave a detailed analysis of the generator conditions ( G1 ) and ( G2 ) as a promising avenue for future work . Indeed , such analyses may lead to new generator architectures or generator regularization techniques that stabilize GAN training .
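As a sanity check on the starting point of this analysis, the toy script below (our own, with an arbitrary quadratic objective, not from the paper) illustrates Proposition 1 and the step-size recommendation that follows from Theorem 1: on an L-smooth function, gradient descent with the constant step 1/L drives the gradient norm towards zero.

```python
# A toy numerical check (ours) of Proposition 1 and the 1/L step size suggested by
# Theorem 1: for f(x) = 0.5 x^T A x, the smoothness constant L is the largest
# eigenvalue of A, and gradient descent with step 1/L approaches stationarity.
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((10, 10))
A = M @ M.T + np.eye(10)            # positive definite => f is convex and L-smooth
L = np.linalg.eigvalsh(A).max()     # smoothness constant of f

def grad_f(x):
    return A @ x

x = rng.standard_normal(10)
for _ in range(2000):
    x = x - (1.0 / L) * grad_f(x)   # x_{k+1} = x_k - (1/L) grad f(x_k)

print("final gradient norm:", np.linalg.norm(grad_f(x)))   # close to zero (stationarity)
```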
This paper provides a unified theoretical framework for regularizing GAN losses. It accounts for most regularization techniques, especially spectral normalization and the gradient penalty, and explains how those two methods are in fact complementary. So far this had only been observed experimentally, without any theoretical insight. The result goes beyond that, as the criterion can be applied to general convex cost functionals.
SP:f467d9904b9633d00e56dbc297caae6a21208b18
Smoothness and Stability in GANs
The work studies the relationship between the stability and the smoothness of GANs, starting from a classical proposition due to Bertsekas. It explains, from a theoretical perspective, many nontrivial empirical observations made when training GANs, including why both spectral normalization and the gradient penalty are needed. The work points out that most common GAN losses do not satisfy all of the smoothness conditions, thereby corroborating their empirical instability. It also develops regularization techniques that enforce the smoothness conditions, which can lead to stable GAN training.
SP:f467d9904b9633d00e56dbc297caae6a21208b18
Off-Policy Actor-Critic with Shared Experience Replay
1 INTRODUCTION . Value-based and actor-critic policy gradient methods are the two leading techniques of constructing general and scalable reinforcement learning agents ( Sutton et al. , 2018 ) . Both have been combined with non-linear function approximation ( Tesauro , 1995 ; Williams , 1992 ) , and have achieved remarkable successes on multiple challenging domains ; yet , these algorithms still require large amounts of data to determine good policies for any new environment . To improve data efficiency , experience replay agents store experience in a memory buffer ( replay ) ( Lin , 1992 ) , and reuse it multiple times to perform reinforcement learning updates ( Riedmiller , 2005 ) . Experience replay allows to generalize prioritized sweeping ( Moore & Atkeson , 1993 ) to the non-tabular setting ( Schaul et al. , 2015 ) , and can also be used to simplify exploration by including expert ( e.g. , human ) trajectories ( Hester et al. , 2017 ) . Overall , experience replay can be very effective at reducing the number of interactions with the environment otherwise required by deep reinforcement learning algorithms ( Schaul et al. , 2015 ) . Replay is often combined with the value-based Q-learning ( Mnih et al. , 2015 ) , as it is an off-policy algorithm by construction , and can perform well even if the sampling distribution from replay is not aligned with the latest agent ’ s policy . Combining experience replay with actor-critic algorithms can be harder due to their on-policy nature . Hence , most established actor-critic algorithms with replay such as ( Wang et al. , 2017 ; Gruslys et al. , 2018 ; Haarnoja et al. , 2018 ) employ and maintain Q-functions to learn from the replayed off-policy experience . In this paper , we demonstrate that off-policy actor-critic learning with experience replay can be achieved without surrogate Q-function approximators using V-trace by employing the following approaches : a ) off-policy replay experience needs to be mixed with a proportion of on-policy experience . We show experimentally ( Figure 2 ) and theoretically that the V-trace policy gradient is otherwise not guaranteed to converge to a locally optimal solution . b ) a trust region scheme ( Conn et al. , 2000 ; Schulman et al. , 2015 ; 2017 ) can mitigate bias and enable efficient learning in a strongly off-policy regime , where distinct agents share experience through a commonly shared replay module . Sharing experience permits the agents to benefit from parallel exploration ( Kretchmar , 2002 ) ( Figures 1 and 3 ) . Our paper is structured as follows : In Section 2 we revisit pure importance sampling for actor-critic agents ( Degris et al. , 2012 ) and V-trace , which is notable for allowing to trade off bias and variance in its estimates . We recall that variance reduction is necessary ( Figure 4 left ) but is biased in V-trace . We derive proposition 2 stating that off-policy V-trace is not guaranteed to converge to a locally optimal solution – not even in an idealized scenario when provided with the optimal value function . Through theoretical analysis ( Section 3 ) and experimental validation ( Figure 2 ) we determine that mixing on-policy experience into experience replay alleviates the problem . Furthermore we propose a trust region scheme ( Conn et al. , 2000 ; Schulman et al. , 2015 ; 2017 ) in Section 4 that enables efficient learning even in a strongly off-policy regime , where distinct agents share the experience replay module and learn from each others experience . 
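To fix ideas, here is a minimal sketch (our own simplification, not the agent described in the paper) of points (a) and (b) above: learner batches mix a fraction of fresh on-policy trajectories with trajectories sampled from a replay buffer that may be shared between agents. The 50/50 mixing ratio and the buffer size are hypothetical placeholders, not values from the paper.

```python
# A minimal sketch (ours) of mixing on-policy experience with a shared replay buffer.
import random
from collections import deque

shared_replay = deque(maxlen=10_000)     # replay module, possibly shared between agents

def build_batch(online_trajectories, batch_size, online_fraction=0.5):
    n_online = min(int(batch_size * online_fraction), len(online_trajectories))
    n_replay = min(batch_size - n_online, len(shared_replay))
    batch = list(online_trajectories[:n_online])
    batch += random.sample(list(shared_replay), n_replay)
    shared_replay.extend(online_trajectories)   # fresh experience becomes replayable
    return batch
```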
We define the trust region in policy space and prove that the resulting estimator is correct ( i.e . estimates an improved return ) . As a result , we present state-of-the-art data efficiency in Section 5 in terms of median human normalized performance across 57 Atari games ( Bellemare et al. , 2013 ) , as well as improved learning efficiency on DMLab30 ( Beattie et al. , 2016 ) ( Table 1 ) . 2 THE ISSUE WITH IMPORTANCE SAMPLING : BIAS AND VARIANCE IN V-TRACE . V-trace importance sampling is a popular off-policy correction for actor-critic agents ( Espeholt et al. , 2018 ) . In this section we revisit how V-trace controls the ( potentially infinite ) variance that arises from naive importance sampling . We note that this comes at the cost of a biased estimate ( see Proposition 1 ) and creates a failure mode ( see Proposition 2 ) which makes the policy gradient biased . We discuss our solutions for said issues in Section 4 . 2.1 REINFORCEMENT LEARNING . We follow the notation of Sutton et al . ( 2018 ) where an agent interacts with its environment , to collect rewards . On each discrete time-step t , the agent selects an action at ; it receives in return a reward rt and an observation ot+1 , encoding a partial view of the environment ’ s state st+1 . In the fully observable case , the RL problem is formalized as a Markov Decision Process ( Bellman , 1957 ) : a tuple ( S , A , p , γ ) , where S , A denotes finite sets of states and actions , p models rewards and state transitions ( so that rt , st+1 ∼ p ( st , at ) ) , and γ is a fixed discount factor . A policy is a mapping π ( a|s ) from states to action probabilities . The agent seeks an optimal policy π∗ that maximizes the value , defined as the expectation of the cumulative discounted returns Gt = ∑∞ k=0 γ krt+k . Off-policy learning is the problem of finding , or evaluating , a policy π from data generated by a different policy µ . This arises in several settings . Experience replay ( Lin , 1992 ) mixes data from multiple iterations of policy improvement . In large-scale RL , decoupling acting from learning ( Nair et al. , 2015 ; Horgan et al. , 2018 ; Espeholt et al. , 2018 ) causes the experience to lag behind the latest agent policy . Finally , it is often useful to learn multiple general value functions ( Sutton et al. , 2011 ; Mankowitz et al. , 2018 ; Lample & Chaplot , 2016 ; Mirowski et al. , 2017 ; Jaderberg et al. , 2017b ) or options ( Sutton et al. , 1999 ; Bacon et al. , 2017 ) from a single stream of experience . 2.2 NAIVE IMPORTANCE SAMPLING . On-policy n-step bootstraps give more accurate value estimates in expectation with larger n ( Sutton et al. , 2018 ) . They are used in many reinforcement learning agents ( Mnih et al. , 2016 ; Schulman et al. , 2017 ; Hessel et al. , 2017 ) . Unfortunately n must be chosen suitably as the estimates variance increases with n too . It is desirable to obtain benefits akin to n-step returns in the off-policy case . To this end multi-step importance sampling ( Kahn , 1955 ) can be used . This however adds another source of ( potentially infinite ( Sutton et al. , 2018 ) ) variance to the estimate . Importance sampling can estimate the expected return V π from trajectories sampled from µ 6= π , as long as µ is non-zero whereever π is . We employ a previously estimated value function V as a bootstrap to estimate expected returns . Following Degris et al . 
(2012), a multi-step formulation of the expected return is $$V^{\pi}(s_t) = \mathbb{E}_{\mu}\Big[ V(s_t) + \sum_{k=0}^{K-1} \gamma^{k} \Big( \prod_{i=0}^{k} \frac{\pi_{t+i}}{\mu_{t+i}} \Big) \delta_{t+k} V \Big] \qquad (1)$$ where $\mathbb{E}_{\mu}$ denotes the expectation under policy $\mu$ up to an episode termination, $\delta_t V = r_t + \gamma V(s_{t+1}) - V(s_t)$ is the temporal difference error between consecutive states $s_{t+1}, s_t$, and $\pi_t = \pi_t(a_t | s_t)$. Importance sampling estimates can have high variance. Tree Backup (Precup et al., 2000) and Q(λ) (Sutton et al., 2014) address this, but reduce the number of steps before bootstrapping even when this is undesirable (as in the on-policy case). RETRACE (Munos et al., 2016) makes use of full returns in the on-policy case, but it introduces a zero-mean random variable at each step, adding variance to empirical estimates in both the on- and off-policy cases. 2.3 BIAS-VARIANCE ANALYSIS & FAILURE MODE OF V-TRACE IMPORTANCE SAMPLING. V-trace (Espeholt et al., 2018) reduces the variance of importance sampling by trading off variance for a biased estimate of the return – resulting in a failure mode (see Proposition 2). It uses clipped importance sampling ratios to approximate $V^{\pi}$ by $$V^{\tilde{\pi}}(s_t) = V(s_t) + \sum_{k=0}^{K-1} \gamma^{k} \Big( \prod_{i=0}^{k-1} c_{t+i} \Big) \rho_{t+k}\, \delta_{t+k} V$$ where $V$ is a learned state value estimate used to bootstrap, and $\rho_t = \min[\pi_t / \mu_t, \bar{\rho}]$, $c_t = \min[\pi_t / \mu_t, \bar{c}]$ are the clipped importance ratios. Note that, differently from RETRACE, V-trace fully recovers the Monte Carlo return when on-policy. It similarly reweights the policy gradient as: $$\nabla V^{\tilde{\pi}}(s_t) \stackrel{\mathrm{def}}{=} \mathbb{E}_{\mu}\big[ \rho_t \nabla (\log \pi_t) \big( r_t + \gamma V^{\tilde{\pi}}(s_{t+1}) \big) \big] \qquad (2)$$ Note that $\nabla V^{\tilde{\pi}}(s_t)$ recovers the naively importance-sampled policy gradient for $\bar{\rho} \to \infty$. In the literature, it is common to subtract a baseline from the action-value estimate $r_t + \gamma V^{\tilde{\pi}}(s_{t+1})$ to reduce variance (Williams, 1992); this is omitted here for simplicity. The constants $\bar{\rho} \ge \bar{c} \ge 1$ (typically chosen as $\bar{\rho} = \bar{c} = 1$) define the level of clipping and improve stability by ensuring a bounded variance. For any given $\bar{\rho}$, the bias introduced by V-trace in the value and policy gradient estimates increases with the difference between $\pi$ and $\mu$. We analyze this in the following propositions. Proposition 1. The V-trace value estimate $V^{\tilde{\pi}}$ is biased: it does not match the expected return of $\pi$ but the return of a related implied policy $\tilde{\pi}$, defined by equation 3, that depends on the behaviour policy $\mu$: $$\tilde{\pi}_{\mu}(a|x) = \frac{\min[\bar{\rho}\, \mu(a|x),\, \pi(a|x)]}{\sum_{b \in A} \min[\bar{\rho}\, \mu(b|x),\, \pi(b|x)]} \qquad (3)$$ Proof. See Espeholt et al. (2018). Note that the biased policy $\tilde{\pi}_{\mu}$ can be very different from $\pi$. Hence the V-trace value estimate $V^{\tilde{\pi}}$ may be very different from $V^{\pi}$ as well. As an illustrative example, consider two policies over a set of two actions, e.g. "left" and "right", represented as tuples of probabilities. Let us investigate $\mu = (\phi, 1 - \phi)$ and $\pi = (1 - \phi, \phi)$, defined for any suitably small $\phi \le 1$. Observe that $\pi$ and $\mu$ share no trajectories (state-action sequences) in the limit as $\phi \to 0$, as each gets more focused on one action. A practical example of this could be two policies, one almost always taking a left turn and one almost always taking the right. Given sufficient data from either policy it is possible to estimate the value of the other, e.g. with naive importance sampling. However, observe that V-trace with $\bar{\rho} = 1$ will always estimate a biased value – even given infinite data. Observe that $\min[\mu(a|x), \pi(a|x)] = \min[\phi, 1 - \phi]$ for both actions. Thus $\tilde{\pi}_{\mu}$ is uniform rather than resembling the policy $\pi$.
The V-trace estimate $V^{\tilde{\pi}}$ would thus compute the average value of "left" and "right" – poorly representing the true $V^{\pi}$. Proposition 2. The V-trace policy gradient is biased: even given the optimal value function $V^*$, the V-trace policy gradient does not converge to a locally optimal $\pi^*$ for all off-policy behaviour distributions $\mu$. Proof. See Appendix C.
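The quantities discussed in this section are simple enough to write out directly. The sketch below (our own notation and code, not the authors') computes the clipped V-trace value target and the implied policy of Equation 3, and reproduces the "left"/"right" example: with ρ̄ = 1, µ = (φ, 1 − φ) and π = (1 − φ, φ) yield a uniform implied policy no matter how small φ is.

```python
# A small self-contained sketch (ours) of the clipped V-trace value target and of the
# implied policy in Equation 3, plus the "left"/"right" example from the text.
import numpy as np

def vtrace_target(values, rewards, pi_probs, mu_probs, gamma=0.99, rho_bar=1.0, c_bar=1.0):
    """values: V(s_t), ..., V(s_{t+K}); rewards, pi_probs, mu_probs: length-K arrays."""
    ratios = pi_probs / mu_probs
    rhos = np.minimum(ratios, rho_bar)
    cs = np.minimum(ratios, c_bar)
    deltas = rewards + gamma * values[1:] - values[:-1]   # delta_{t+k} V
    target, c_prod = values[0], 1.0
    for k in range(len(rewards)):
        target += (gamma ** k) * c_prod * rhos[k] * deltas[k]
        c_prod *= cs[k]                                    # prod_{i=0}^{k-1} c_{t+i}
    return target

def implied_policy(pi, mu, rho_bar=1.0):
    clipped = np.minimum(rho_bar * mu, pi)                 # numerator of Equation 3
    return clipped / clipped.sum()

phi = 0.01
mu = np.array([phi, 1.0 - phi])     # behaviour policy: almost always "right"
pi = np.array([1.0 - phi, phi])     # target policy: almost always "left"
print(implied_policy(pi, mu))       # [0.5, 0.5] -- uniform, far from pi
```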
This paper aims to improve the data efficiency of actor-critic methods. The authors first analyze the cause of instability in prior work from the perspective of bias and variance. Two remedies are then presented: (i) mixing experience replay with on-policy experience; (ii) a trust region scheme for selecting which behavior policies to learn from. The authors finally test the proposed method on Atari games and show better results compared with state-of-the-art methods.
SP:e1317ed002e3e0f08ba90506cb2c38d65265a102
Off-Policy Actor-Critic with Shared Experience Replay
1 INTRODUCTION . Value-based and actor-critic policy gradient methods are the two leading techniques of constructing general and scalable reinforcement learning agents ( Sutton et al. , 2018 ) . Both have been combined with non-linear function approximation ( Tesauro , 1995 ; Williams , 1992 ) , and have achieved remarkable successes on multiple challenging domains ; yet , these algorithms still require large amounts of data to determine good policies for any new environment . To improve data efficiency , experience replay agents store experience in a memory buffer ( replay ) ( Lin , 1992 ) , and reuse it multiple times to perform reinforcement learning updates ( Riedmiller , 2005 ) . Experience replay allows to generalize prioritized sweeping ( Moore & Atkeson , 1993 ) to the non-tabular setting ( Schaul et al. , 2015 ) , and can also be used to simplify exploration by including expert ( e.g. , human ) trajectories ( Hester et al. , 2017 ) . Overall , experience replay can be very effective at reducing the number of interactions with the environment otherwise required by deep reinforcement learning algorithms ( Schaul et al. , 2015 ) . Replay is often combined with the value-based Q-learning ( Mnih et al. , 2015 ) , as it is an off-policy algorithm by construction , and can perform well even if the sampling distribution from replay is not aligned with the latest agent ’ s policy . Combining experience replay with actor-critic algorithms can be harder due to their on-policy nature . Hence , most established actor-critic algorithms with replay such as ( Wang et al. , 2017 ; Gruslys et al. , 2018 ; Haarnoja et al. , 2018 ) employ and maintain Q-functions to learn from the replayed off-policy experience . In this paper , we demonstrate that off-policy actor-critic learning with experience replay can be achieved without surrogate Q-function approximators using V-trace by employing the following approaches : a ) off-policy replay experience needs to be mixed with a proportion of on-policy experience . We show experimentally ( Figure 2 ) and theoretically that the V-trace policy gradient is otherwise not guaranteed to converge to a locally optimal solution . b ) a trust region scheme ( Conn et al. , 2000 ; Schulman et al. , 2015 ; 2017 ) can mitigate bias and enable efficient learning in a strongly off-policy regime , where distinct agents share experience through a commonly shared replay module . Sharing experience permits the agents to benefit from parallel exploration ( Kretchmar , 2002 ) ( Figures 1 and 3 ) . Our paper is structured as follows : In Section 2 we revisit pure importance sampling for actor-critic agents ( Degris et al. , 2012 ) and V-trace , which is notable for allowing to trade off bias and variance in its estimates . We recall that variance reduction is necessary ( Figure 4 left ) but is biased in V-trace . We derive proposition 2 stating that off-policy V-trace is not guaranteed to converge to a locally optimal solution – not even in an idealized scenario when provided with the optimal value function . Through theoretical analysis ( Section 3 ) and experimental validation ( Figure 2 ) we determine that mixing on-policy experience into experience replay alleviates the problem . Furthermore we propose a trust region scheme ( Conn et al. , 2000 ; Schulman et al. , 2015 ; 2017 ) in Section 4 that enables efficient learning even in a strongly off-policy regime , where distinct agents share the experience replay module and learn from each others experience . 
We define the trust region in policy space and prove that the resulting estimator is correct (i.e., it estimates an improved return). As a result, we present state-of-the-art data efficiency in Section 5 in terms of median human normalized performance across 57 Atari games (Bellemare et al., 2013), as well as improved learning efficiency on DMLab30 (Beattie et al., 2016) (Table 1). 2 THE ISSUE WITH IMPORTANCE SAMPLING: BIAS AND VARIANCE IN V-TRACE. V-trace importance sampling is a popular off-policy correction for actor-critic agents (Espeholt et al., 2018). In this section we revisit how V-trace controls the (potentially infinite) variance that arises from naive importance sampling. We note that this comes at the cost of a biased estimate (see Proposition 1) and creates a failure mode (see Proposition 2) which makes the policy gradient biased. We discuss our solutions for said issues in Section 4. 2.1 REINFORCEMENT LEARNING. We follow the notation of Sutton et al. (2018), where an agent interacts with its environment to collect rewards. On each discrete time-step t, the agent selects an action a_t; it receives in return a reward r_t and an observation o_{t+1}, encoding a partial view of the environment's state s_{t+1}. In the fully observable case, the RL problem is formalized as a Markov Decision Process (Bellman, 1957): a tuple (S, A, p, γ), where S and A denote finite sets of states and actions, p models rewards and state transitions (so that r_t, s_{t+1} ∼ p(s_t, a_t)), and γ is a fixed discount factor. A policy is a mapping π(a|s) from states to action probabilities. The agent seeks an optimal policy π* that maximizes the value, defined as the expectation of the cumulative discounted returns $G_t = \sum_{k=0}^{\infty} \gamma^k r_{t+k}$. Off-policy learning is the problem of finding, or evaluating, a policy π from data generated by a different policy µ. This arises in several settings. Experience replay (Lin, 1992) mixes data from multiple iterations of policy improvement. In large-scale RL, decoupling acting from learning (Nair et al., 2015; Horgan et al., 2018; Espeholt et al., 2018) causes the experience to lag behind the latest agent policy. Finally, it is often useful to learn multiple general value functions (Sutton et al., 2011; Mankowitz et al., 2018; Lample & Chaplot, 2016; Mirowski et al., 2017; Jaderberg et al., 2017b) or options (Sutton et al., 1999; Bacon et al., 2017) from a single stream of experience. 2.2 NAIVE IMPORTANCE SAMPLING. On-policy n-step bootstraps give more accurate value estimates in expectation with larger n (Sutton et al., 2018). They are used in many reinforcement learning agents (Mnih et al., 2016; Schulman et al., 2017; Hessel et al., 2017). Unfortunately, n must be chosen suitably, as the estimate's variance also increases with n. It is desirable to obtain benefits akin to n-step returns in the off-policy case. To this end, multi-step importance sampling (Kahn, 1955) can be used. This, however, adds another source of (potentially infinite (Sutton et al., 2018)) variance to the estimate. Importance sampling can estimate the expected return V^π from trajectories sampled from µ ≠ π, as long as µ is non-zero wherever π is. We employ a previously estimated value function V as a bootstrap to estimate expected returns. Following Degris et al.
(2012), a multi-step formulation of the expected return is $V^\pi(s_t) = \mathbb{E}_\mu\left[ V(s_t) + \sum_{k=0}^{K-1} \gamma^k \left( \prod_{i=0}^{k} \frac{\pi_{t+i}}{\mu_{t+i}} \right) \delta_{t+k} V \right]$ (1) where $\mathbb{E}_\mu$ denotes the expectation under policy µ up to an episode termination, $\delta_t V = r_t + \gamma V(s_{t+1}) - V(s_t)$ is the temporal difference error in consecutive states s_{t+1}, s_t, and $\pi_t = \pi_t(a_t|s_t)$. Importance sampling estimates can have high variance. Tree Backup (Precup et al., 2000) and Q(λ) (Sutton et al., 2014) address this, but reduce the number of steps before bootstrapping even when this is undesirable (as in the on-policy case). RETRACE (Munos et al., 2016) makes use of full returns in the on-policy case, but it introduces a zero-mean random variable at each step, adding variance to empirical estimates in both on- and off-policy cases. 2.3 BIAS-VARIANCE ANALYSIS & FAILURE MODE OF V-TRACE IMPORTANCE SAMPLING. V-trace (Espeholt et al., 2018) reduces the variance of importance sampling by trading off variance for a biased estimate of the return – resulting in a failure mode (see Proposition 2). It uses clipped importance sampling ratios to approximate $V^\pi$ by $V^{\tilde\pi}(s_t) = V(s_t) + \sum_{k=0}^{K-1} \gamma^k \left( \prod_{i=0}^{k-1} c_{t+i} \right) \rho_{t+k}\, \delta_{t+k} V$ where V is a learned state value estimate used to bootstrap, and $\rho_t = \min[\pi_t/\mu_t,\ \bar\rho]$, $c_t = \min[\pi_t/\mu_t,\ \bar c]$ are the clipped importance ratios. Note that, differently from RETRACE, V-trace fully recovers the Monte Carlo return when on policy. It similarly reweights the policy gradient as: $\nabla V^{\tilde\pi}(s_t) \stackrel{\text{def}}{=} \mathbb{E}_\mu\left[ \rho_t\, \nabla(\log \pi_t)\left( r_t + \gamma V^{\tilde\pi}(s_{t+1}) \right) \right]$ (2) Note that $\nabla V^{\tilde\pi}(s_t)$ recovers the naively importance sampled policy gradient for $\bar\rho \to \infty$. In the literature, it is common to subtract a baseline from the action-value estimate $r_t + \gamma V^{\tilde\pi}(s_{t+1})$ to reduce variance (Williams, 1992); this is omitted here for simplicity. The constants $\bar\rho \ge \bar c \ge 1$ (typically chosen $\bar\rho = \bar c = 1$) define the level of clipping and improve stability by ensuring a bounded variance. For any given $\bar\rho$, the bias introduced by V-trace in the value and policy gradient estimates increases with the difference between π and µ. We analyze this in the following propositions. Proposition 1. The V-trace value estimate $V^{\tilde\pi}$ is biased: it does not match the expected return of π but the return of a related implied policy $\tilde\pi$, defined by equation 3, that depends on the behaviour policy µ: $\tilde\pi_\mu(a|x) = \frac{\min[\bar\rho\,\mu(a|x),\ \pi(a|x)]}{\sum_{b \in A} \min[\bar\rho\,\mu(b|x),\ \pi(b|x)]}$ (3) Proof. See Espeholt et al. (2018). Note that the biased policy $\tilde\pi_\mu$ can be very different from π. Hence the V-trace value estimate $V^{\tilde\pi}$ may be very different from $V^\pi$ as well. As an illustrative example, consider two policies over a set of two actions, e.g. "left" and "right", represented as a tuple of probabilities. Let us investigate µ = (φ, 1−φ) and π = (1−φ, φ), defined for any suitably small φ ≤ 1. Observe that π and µ share no trajectories (state-action sequences) in the limit as φ → 0, as each policy becomes more focused on one action. A practical example of this could be two policies, one almost always taking a left turn and one always taking the right. Given sufficient data from either policy it is possible to estimate the value of the other, e.g. with naive importance sampling. However, observe that V-trace with $\bar\rho = 1$ will always estimate a biased value – even given infinite data. Observe that $\min[\mu(a|x), \pi(a|x)] = \min[\phi, 1-\phi]$ for both actions. Thus $\tilde\pi_\mu$ is uniform rather than resembling the policy π.
The V-trace estimate $V^{\tilde\pi}$ would thus compute the average value of "left" and "right" – poorly representing the true $V^\pi$. Proposition 2. The V-trace policy gradient is biased: given the optimal value function $V^*$, the V-trace policy gradient does not converge to a locally optimal π* for all off-policy behaviour distributions µ. Proof. See Appendix C.
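To make the quantities above concrete, the following minimal NumPy sketch computes the clipped V-trace value target and the implied policy π̃_µ of Proposition 1. The function names, trajectory layout, and the choice ρ̄ = c̄ = 1 are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def vtrace_target(rewards, values, bootstrap_value, pi_probs, mu_probs,
                  gamma=0.99, rho_bar=1.0, c_bar=1.0):
    """Sketch of the V-trace value target for one trajectory of length K."""
    K = len(rewards)
    values_tp1 = np.append(values[1:], bootstrap_value)
    is_ratios = pi_probs / mu_probs                 # pi_t / mu_t
    rhos = np.minimum(is_ratios, rho_bar)           # clipped rho_t
    cs = np.minimum(is_ratios, c_bar)               # clipped c_t
    deltas = rewards + gamma * values_tp1 - values  # delta_t V
    target = values[0]
    for k in range(K):
        # gamma^k * (prod_{i<k} c_{t+i}) * rho_{t+k} * delta_{t+k} V
        target += (gamma ** k) * np.prod(cs[:k]) * rhos[k] * deltas[k]
    return target

def implied_policy(pi, mu, rho_bar=1.0):
    """Implied policy pi~_mu of Proposition 1 for a discrete action set."""
    unnorm = np.minimum(rho_bar * mu, pi)
    return unnorm / unnorm.sum()

# Two-action example from the text: mu = (phi, 1-phi), pi = (1-phi, phi).
phi = 0.01
print(implied_policy(np.array([1 - phi, phi]), np.array([phi, 1 - phi])))
# -> [0.5, 0.5]: uniform, far from pi, illustrating the bias of the value estimate.
```

The last two lines reproduce the "left"/"right" example: with ρ̄ = 1 the implied policy is uniform no matter how small φ is, which is why the V-trace value estimate averages the two action values.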
This paper investigates off-policy actor-critic (AC) learning with experience replay using V-trace. It shows that the V-trace policy gradient is not guaranteed to converge to a locally optimal solution. To mitigate the bias and variance problem of V-trace and importance sampling, a trust region approach is proposed that adaptively selects only suitable behavior distributions when estimating the state-value of a policy. To this end, a behavior relevance function (KL divergence) is introduced to classify behavior as relevant. The proposed learning method, LASER, demonstrates state-of-the-art data efficiency in Atari among agents trained up until 200M frames. In all, this paper is well motivated and technically sound. The draft can be improved by making it more self-contained, e.g., by providing a sketch of each proof rather than referring everything to the appendix. It might also be helpful to provide pseudocode for LASER to help readers better understand the technical details.
SP:e1317ed002e3e0f08ba90506cb2c38d65265a102
State-only Imitation with Transition Dynamics Mismatch
1 INTRODUCTION. In the Reinforcement Learning (RL) framework, the objective is to train policies that maximize a certain reward criterion. Deep-RL, which combines RL with the recent advances in the field of deep learning, has produced algorithms demonstrating remarkable success in areas such as games (Mnih et al., 2015; Silver et al., 2016), continuous control (Lillicrap et al., 2015), and robotics (Levine et al., 2016), to name a few. However, the application of these algorithms beyond controlled simulation environments has been fairly modest; one of the reasons being that manual specification of a good reward function is a hard problem. Imitation Learning (IL) algorithms (Pomerleau, 1991; Ng et al., 2000; Ziebart et al., 2008; Ho & Ermon, 2016) address this issue by replacing reward functions with expert demonstrations, which are easier to collect in most scenarios. The conventional setting used in most of the IL literature is the availability of state-action trajectories from the expert, τ := {s_0, a_0, ..., s_T, a_T}, collected in an environment modeled as a Markov decision process (MDP) with transition dynamics T_exp. These dynamics govern the distribution over the next state, given the current state and action. The IL objective is to leverage τ to train an imitator policy in the same MDP as the expert. This is a severe requirement that impedes the wider applicability of IL algorithms. In many practical scenarios, the transition dynamics of the environment in which the imitator policy is learned (henceforth denoted by T_pol) is different from the dynamics of the environment used to collect expert behavior, T_exp. Consider self-driving cars as an example, where the goal is to learn autonomous navigation on a vehicle with slightly different gear-transmission characteristics than the vehicle used to obtain human driving demonstrations. We therefore strive for an IL method that could train agents under a transition dynamics mismatch, T_exp ≠ T_pol. (Code for this paper is available at https://github.com/tgangwani/RL-Indirect-imitation.) We assume that other MDP attributes are the same for the expert and imitator environments. Beyond the dynamics equivalence, another assumption commonly used in the IL literature is the availability of expert actions (along with the states). A few recent works (Torabi et al., 2018a;b; Sun et al., 2019) have proposed "state-only" IL algorithms, where expert demonstrations do not include the actions. This opens up the possibility of employing IL in situations such as kinesthetic teaching in robotics and learning from weak-supervision sources such as videos. Moreover, if T_exp and T_pol differ, then the expert actions, even if available, are not quite useful for imitation anyway, since the application of an expert action from any state leads to different next-state distributions for the expert and the imitator. Hence, our algorithm uses state-only expert demonstrations. We build on previous IL literature inspired by GAN-based adversarial learning - GAIL (Ho & Ermon, 2016) and AIRL (Fu et al., 2017). In both these methods, the objective is to minimize the distance between the visitation distributions (ρ) induced by the policy and the expert, under some suitable metric d, such as the Jensen-Shannon divergence. We classify GAIL and AIRL as direct imitation methods, as they directly reduce $d(\rho_\pi, \rho^*)$.
Different from these, we propose an indirect imitation approach which introduces another distribution ρ̃ as an intermediate or indirection step. In more detail, starting with the Max-Entropy Inverse-RL objective (Ziebart et al., 2008), we derive a lower bound which transforms the overall IL problem into two sub-parts which are solved iteratively: the first is to train a policy to imitate a distribution ρ̃ represented by a trajectory buffer, and the second is to move the buffer distribution closer to the expert's (ρ*) over the course of training. The first part, which is policy imitation by reducing $d(\rho_\pi, \tilde\rho)$, is done with AIRL, while the second part, which is reducing $d(\tilde\rho, \rho^*)$, is achieved using a Wasserstein critic (Arjovsky et al., 2017). We abbreviate our approach as I2L, for indirect-imitation-learning. We test the efficacy of our algorithm with continuous-control locomotion tasks from MuJoCo. Figure 1a depicts one example of the dynamics mismatch which we evaluate in our experiments. For the Ant agent, an expert walking policy $\pi^*_e$ is trained under the default dynamics provided in the OpenAI Gym, T_exp = Earth. The dynamics under which to learn the imitator policy are curated by modifying the gravity parameter to half its default value (i.e., 9.81/2), T_pol = PlanetX. Figure 1b plots the average episodic returns of $\pi^*_e$ in the original and modified environments, and shows that direct policy transfer is infeasible. For Figure 1c, we just assume access to state-only expert demonstrations from $\pi^*_e$, and do IL with the GAIL algorithm. GAIL performs well if the imitator policy is learned in the same environment as the expert (T_exp = T_pol = Earth), but does not succeed under mismatched transition dynamics (T_exp = Earth, T_pol = PlanetX). In our experiments section, we consider other sources of dynamics mismatch as well, such as agent-density and joint-friction. We show that I2L trains much better policies than baseline IL algorithms in these tasks, leading to successful transfer of expert skills to an imitator in an environment dissimilar to the expert's. We start by reviewing the relevant background on Max-Entropy IRL, GAIL and AIRL, since these methods form an integral part of our overall algorithm. 2 BACKGROUND. An RL environment modeled as an MDP is characterized by the tuple (S, A, R, T, γ), where S is the state-space and A is the action-space. Given an action $a_t \in A$, the next state is governed by the transition dynamics $s_{t+1} \sim T(s_{t+1} \mid s_t, a_t)$, and the reward is computed as $r_t = R(s_t, a_t)$. The RL objective is to maximize the expected discounted sum of rewards, $\eta(\pi_\theta) = \mathbb{E}_{p_0, T, \pi}\left[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t)\right]$, where γ ∈ (0, 1] is the discount factor, and $p_0$ is the initial state distribution. We define the unnormalized γ-discounted state-visitation distribution for a policy π by $\rho_\pi(s) = \sum_{t=0}^{\infty} \gamma^t P(s_t = s \mid \pi)$, where $P(s_t = s \mid \pi)$ is the probability of being in state s at time t, when following policy π and starting from state $s_0 \sim p_0$. The expected policy return $\eta(\pi_\theta)$ can then be written as $\mathbb{E}_{\rho_\pi(s,a)}[r(s,a)]$, where $\rho_\pi(s,a) = \rho_\pi(s)\, \pi(a|s)$ is the state-action visitation distribution (also referred to as the occupancy measure). For any policy π, there is a one-to-one correspondence between π and its occupancy measure (Puterman, 1994). 2.1 MAXIMUM ENTROPY IRL. Designing reward functions that adequately capture the task intentions is a laborious and error-prone procedure.
An alternative is to train agents to solve a particular task by leveraging demonstrations of that task by experts. Inverse Reinforcement Learning (IRL) algorithms (Ng et al., 2000; Russell, 1998) aim to infer the reward function from expert demonstrations, and then use it for RL or planning. The IRL method, however, has an inherent ambiguity, since many expert policies could explain a set of provided demonstrations. To resolve this, Ziebart (2010) proposed the Maximum Causal Entropy (MaxEnt) IRL framework, where the objective is to learn a reward function such that the resulting policy matches the provided expert demonstrations in the expected feature counts f, while being as random as possible: $\max_\pi H(\pi)$ s.t. $\mathbb{E}_{s,a \sim \pi}[f(s,a)] = \hat f_{\text{demo}}$, where $H(\pi) = \mathbb{E}_\pi[-\log \pi(a|s)]$ is the γ-discounted causal entropy, and $\hat f_{\text{demo}}$ denotes the empirical feature counts of the expert. This constrained optimization problem is solved by minimizing the Lagrangian dual, resulting in the maximum entropy policy: $\pi_\theta(a|s) = \exp\left(Q^{\text{soft}}_\theta(s,a) - V^{\text{soft}}_\theta(s)\right)$, where θ is the Lagrangian multiplier on the feature matching constraint, and $Q^{\text{soft}}_\theta$, $V^{\text{soft}}_\theta$ are the soft value functions such that the following equations hold (please see Theorem 6.8 in Ziebart (2010)): $Q^{\text{soft}}_\theta(s,a) = \underbrace{\theta^\top f(s,a)}_{r(s,a)} + \mathbb{E}_{p(s'|s,a)}\left[V^{\text{soft}}_\theta(s')\right]$, $V^{\text{soft}}_\theta(s) = \mathrm{softmax}_a\, Q^{\text{soft}}_\theta(s,a)$. Inspired by the energy-based formulation of the maximum entropy policy described above, $\pi_\theta(a|s) = \exp\left(Q^{\text{soft}}_\theta(s,a) - V^{\text{soft}}_\theta(s)\right)$, recent methods (Finn et al., 2016; Haarnoja et al., 2017; Fu et al., 2017) have proposed to model complex, multi-modal action distributions using energy-based policies, $\pi(a|s) \propto \exp(f_\omega(s,a))$, where $f_\omega(s,a)$ is represented by a universal function approximator, such as a deep neural network. We can then interpret the IRL problem as a maximum likelihood estimation problem: $\max_\omega \mathbb{E}_{\tau \sim \text{demo}}\left[\log p_\omega(\tau)\right]$ with $p_\omega(\tau) = \frac{1}{Z(\omega)}\, p(s_0) \prod_t p(s_{t+1}|s_t, a_t)\, e^{f_\omega(s_t, a_t)}$ (1) 2.2 ADVERSARIAL IRL. An important implication of casting IRL as maximum likelihood estimation is that it connects IRL to adversarial training. We now briefly discuss AIRL (Fu et al., 2017) since it forms a component of our proposed algorithm. AIRL builds on GAIL (Ho & Ermon, 2016), a well-known adversarial imitation learning algorithm. GAIL frames IL as an occupancy-measure matching (or divergence minimization) problem. Let $\rho_\pi(s,a)$ and $\rho_E(s,a)$ represent the state-action visitation distributions of the policy and the expert, respectively. Minimizing the Jensen-Shannon divergence $\min_\pi D_{\mathrm{JS}}\left[\rho_\pi(s,a)\, \|\, \rho_E(s,a)\right]$ recovers a policy with a trajectory distribution similar to the expert's. GAIL iteratively trains a policy ($\pi_\theta$) and a discriminator ($D_\omega: S \times A \to (0,1)$) to optimize the min-max objective similar to GANs (Goodfellow et al., 2014): $\min_\theta \max_\omega\ \mathbb{E}_{(s,a) \sim \rho_E}\left[\log D_\omega(s,a)\right] + \mathbb{E}_{(s,a) \sim \pi_\theta}\left[\log(1 - D_\omega(s,a))\right] - \lambda H(\pi_\theta)$ (2) GAIL attempts to learn a policy that behaves similarly to the expert demonstrations, but it bypasses the process of recovering the expert reward function. Finn et al. (2016) showed that imposing a special structure on the discriminator makes the adversarial GAN training equivalent to optimizing the MLE objective (Equation 1).
Furthermore, if trained to optimality, it is proved that the expert reward (up to a constant) can be recovered from the discriminator. Their method operates in a trajectory-centric formulation, which can be inefficient for high-dimensional state- and action-spaces. Fu et al. (2017) present AIRL, which remedies this by proposing analogous changes to the discriminator, but operating on a single state-action pair: $D_\omega(s,a) = \frac{e^{f_\omega(s,a)}}{e^{f_\omega(s,a)} + \pi_\theta(a|s)}$ (3) Similar to GAIL, the discriminator is trained to maximize the objective in Equation 2; $f_\omega$ is learned, whereas the value of $\pi(a|s)$ is "filled in". The policy is optimized jointly using any RL algorithm with $\log D_\omega - \log(1 - D_\omega)$ as rewards. When trained to optimality, $\exp(f_\omega(s,a)) = \pi^*(a|s) = \exp(A^*_{\text{soft}}(s,a)/\alpha)$; hence $f_\omega$ recovers the soft advantage of the expert policy (up to a constant).
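As a concrete illustration of the AIRL discriminator in Equation 3 and the log D_ω − log(1 − D_ω) reward, here is a small NumPy sketch computed in log-space for numerical stability. The scalar inputs f_ω(s, a) and log π_θ(a|s) are placeholders, and this is not the authors' training code (which optimizes f_ω and π_θ jointly via Equation 2).

```python
import numpy as np

def airl_discriminator(f_value, log_pi):
    """AIRL discriminator of Equation 3: D = exp(f) / (exp(f) + pi(a|s))."""
    # log D = f - logsumexp([f, log_pi]);  log(1 - D) = log_pi - logsumexp([f, log_pi])
    log_norm = np.logaddexp(f_value, log_pi)
    log_D = f_value - log_norm
    log_one_minus_D = log_pi - log_norm
    return log_D, log_one_minus_D

def airl_reward(f_value, log_pi):
    """Policy reward used by AIRL: log D - log(1 - D), which simplifies to f - log pi(a|s)."""
    log_D, log_one_minus_D = airl_discriminator(f_value, log_pi)
    return log_D - log_one_minus_D

# Example: a single (s, a) pair with learned f_w(s, a) = 1.2 and pi(a|s) = 0.3.
print(airl_reward(1.2, np.log(0.3)))   # equals 1.2 - log(0.3)
```

The simplification in `airl_reward` makes explicit why, at optimality, the reward signal reduces to f_ω minus the log-policy, i.e., the soft advantage mentioned above.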
The manuscript considers the problem of imitation learning when the system dynamics of the agent are different from the dynamics of the expert. The paper proposes Indirect Imitation Learning (I2L), which aims to perform imitation learning with respect to a trajectory buffer that contains some of the previous trajectories of the agent. The trajectory buffer has limited capacity and adds trajectories based on a priority-queue that prefers trajectories that have a similar state distribution to the expert. Similarity is hereby measured by the score of a WGAN-critic trained to approximate the W1-Wasserstein distance between the previous buffer and the expert distribution. By performing imitation learning with respect to a trajectory buffer, state-action trajectories of the agent's MDP are available, which enables I2L to apply AIRL (Fu et al. 2017). By using those trajectories for the transition buffer that have state-marginals close to the expert's trajectory, I2L produces similar behavior compared to the expert. I2L is compared to state-only GAIL, state-action-GAIL and AIRL on four MuJoCo tasks with modified dynamics compared to the expert policy. The experiments show that I2L may learn significantly better policies if the dynamics of agent and the expert do not match.
SP:001e57e71bafdb52d6511bdd6aa73b78d60248f2
The paper proposes an imitation method, I2L, that learns from state-only demonstrations generated in an expert MDP that may have different transition dynamics than the agent MDP. I2L modifies the existing adversarial inverse RL algorithm: instead of training the disciminator to distinguish demonstrations vs. samples, I2L trains the discriminator to distinguish samples that are close (in terms of the Wasserstein metric) to the demonstrations vs. other samples. This approach maximizes a lower bound on the likelihood of the demonstrations. Experiments comparing I2L to a state-only GAIL baseline show that I2L performs significantly better under dynamics mismatch in several low-dimensional, continuous MuJoCo tasks.
SP:001e57e71bafdb52d6511bdd6aa73b78d60248f2
Meta-Q-Learning
META-Q-LEARNING. Rasool Fakoor¹, Pratik Chaudhari²*, Stefano Soatto¹, Alexander Smola¹. ¹Amazon Web Services, ²University of Pennsylvania (*work done while at Amazon Web Services). Email: {fakoor, soattos, smola}@amazon.com, pratikac@seas.upenn.edu. ABSTRACT. This paper introduces Meta-Q-Learning (MQL), a new off-policy algorithm for meta-Reinforcement Learning (meta-RL). MQL builds upon three simple ideas. First, we show that Q-learning is competitive with state-of-the-art meta-RL algorithms if given access to a context variable that is a representation of the past trajectory. Second, a multi-task objective to maximize the average reward across the training tasks is an effective method to meta-train RL policies. Third, past data from the meta-training replay buffer can be recycled to adapt the policy on a new task using off-policy updates. MQL draws upon ideas in propensity estimation to do so and thereby amplifies the amount of available data for adaptation. Experiments on standard continuous-control benchmarks suggest that MQL compares favorably with the state of the art in meta-RL. 1 INTRODUCTION. Reinforcement Learning (RL) algorithms have demonstrated good performance on simulated data. There are, however, two main challenges in translating this performance to real robots: (i) robots are complex and fragile, which precludes extensive data collection, and (ii) a real robot may face an environment that is different from the simulated environment it was trained in. This has fueled research into Meta-Reinforcement Learning (meta-RL), which develops algorithms that "meta-train" on a large number of different environments, e.g., simulated ones, and aim to adapt to a new environment with few data. How well does meta-RL work today? Fig. 1 shows the performance of two prototypical meta-RL algorithms on four standard continuous-control benchmarks (we obtained the numbers for MAML and PEARL from training logs published by Rakelly et al. (2019)). We compared them to the following simple baseline: an off-policy RL algorithm (TD3 by Fujimoto et al. (2018b)) which was trained to maximize the average reward over all training tasks and modified to use a "context variable" that represents the trajectory. All algorithms in this figure use the same evaluation protocol. It is surprising that this simple non-meta-learning-based method is competitive with state-of-the-art meta-RL algorithms. This is the first contribution of our paper: we demonstrate that it is not necessary to meta-train policies to do well on existing benchmarks. Our second contribution is an off-policy meta-RL algorithm named Meta-Q-Learning (MQL) that builds upon the above result. MQL uses a simple meta-training procedure: it maximizes the average rewards across all meta-training tasks using off-policy updates to obtain $\hat\theta_{\text{meta}} = \arg\max_\theta \frac{1}{n} \sum_{k=1}^{n} \mathbb{E}_{\tau \sim D_k}\left[\ell^k(\theta)\right]$ (1) where $\ell^k(\theta)$ is the objective evaluated on the transition τ obtained from the task $D_k(\theta)$; e.g., the 1-step temporal-difference (TD) error would set $\ell^k(\theta) = \mathrm{TD}^2(\theta; \tau)$. This objective, which we call the multi-task objective, is the simplest form of meta-training. For adapting the policy to a new task, MQL samples transitions from the meta-training replay buffer that are similar to those from the new task. This amplifies the amount of data available for adaptation, but it is difficult to do because of the large potential bias.
We use techniques from the propensity estimation literature for performing this adaptation, and the off-policy updates of MQL are crucial to doing so. The adaptation phase of MQL solves $\arg\max_\theta \left\{ \mathbb{E}_{\tau \sim D_{\text{new}}}\left[\ell^{\text{new}}(\theta)\right] + \mathbb{E}_{\tau \sim D_{\text{meta}}}\left[\beta(\tau; D_{\text{new}}, D_{\text{meta}})\, \ell^{\text{new}}(\theta)\right] - \left(1 - \widehat{\mathrm{ESS}}\right) \|\theta - \hat\theta_{\text{meta}}\|_2^2 \right\}$ (2) where $D_{\text{meta}}$ is the meta-training replay buffer, the propensity score $\beta(\tau; D_{\text{new}}, D_{\text{meta}})$ is the odds of a transition τ belonging to $D_{\text{new}}$ versus $D_{\text{meta}}$, and $\widehat{\mathrm{ESS}}$ is the Effective Sample Size between $D_{\text{new}}$ and $D_{\text{meta}}$, which is a measure of the similarity of the new task to the meta-training tasks. The first term computes off-policy updates on the new task, the second term performs β(·)-weighted off-policy updates on old data, while the third term is an automatically adapting proximal term that prevents degradation of the policy during adaptation. We perform extensive experiments in Sec. 4.2, including ablation studies using standard meta-RL benchmarks, that demonstrate that MQL policies obtain higher average returns on new tasks even if they are meta-trained for fewer time-steps than state-of-the-art algorithms. 2 BACKGROUND. This section introduces notation and formalizes the meta-RL problem. We discuss techniques for estimating the importance ratio between two probability distributions in Sec. 2.2. Consider a Markov Decision Process (MDP) denoted by $x_{t+1} = f^k(x_t, u_t, \xi_t)$, $x_0 \sim p_0^k$, (3) where $x_t \in X \subset \mathbb{R}^d$ are the states and $u_t \in U \subset \mathbb{R}^p$ are the actions. The dynamics $f^k$ is parameterized by k ∈ {1, ..., n}, where each k corresponds to a different task. The domain of all these tasks, X for the states and U for the actions, is the same. The distribution $p_0^k$ denotes the initial state distribution and $\xi_t$ is the noise in the dynamics. Given a deterministic policy $u_\theta(x_t)$, the action-value function for γ-discounted future rewards $r^k_t := r^k(x_t, u_\theta(x_t))$ over an infinite time-horizon is $q^k(x,u) = \mathbb{E}_{\xi(\cdot)}\left[\sum_{t=0}^{\infty} \gamma^t r^k_t \mid x_0 = x,\, u_0 = u,\, u_t = u_\theta(x_t)\right]$. (4) Note that we have assumed that different tasks have the same state and action space and may only differ in their dynamics $f^k$ and reward function $r^k$. Given one task k ∈ {1, ..., n}, the standard Reinforcement Learning (RL) formalism solves for $\hat\theta^k = \arg\max_\theta \ell^k(\theta)$ where $\ell^k(\theta) = \mathbb{E}_{x \sim p_0}\left[q^k(x, u_\theta(x))\right]$. (5) Let us denote the dataset of all states, actions and rewards pertaining to a task k and policy $u_\theta(x)$ by $D_k(\theta) = \{x_t,\, u_\theta(x_t),\, r^k,\, x_{t+1} = f^k(x_t, u_\theta(x_t), \xi_t)\}_{t \ge 0}$ with $x_0 \sim p_0^k$ and noise $\xi(\cdot)$; we will often refer to $D_k$ as the "task" itself. The Deterministic Policy Gradient (DPG) algorithm (Silver et al., 2014) for solving (5) learns a ϕ-parameterized approximation $q_\varphi$ to the optimal value function $q^k$ by minimizing the Bellman error, and learns the optimal policy $u_\theta$ that maximizes this approximation, by solving the coupled optimization problem $\hat\varphi^k = \arg\min_\varphi \mathbb{E}_{\tau \sim D_k}\left[\left(q_\varphi(x,u) - r^k - \gamma\, q_\varphi(x', u_{\hat\theta^k}(x'))\right)^2\right]$, $\hat\theta^k = \arg\max_\theta \mathbb{E}_{\tau \sim D_k}\left[q_{\hat\varphi^k}(x, u_\theta(x))\right]$. (6) The 1-step temporal difference error (TD error) is defined as $\mathrm{TD}^2(\theta) = \left(q_\varphi(x,u) - r^k - \gamma\, q_\varphi(x', u_\theta(x'))\right)^2$ (7) where we keep the dependence of TD(·) on ϕ implicit. DPG, or its deep network-based variant DDPG (Lillicrap et al., 2015), is an off-policy algorithm.
This means that the expectations in (6) are computed using data that need not be generated by the policy being optimized ($u_\theta$); this data can come from some other policy. In the sequel, we will focus on the parameters θ parameterizing the policy. The parameters ϕ of the value function are always updated to minimize the TD-error and are omitted for clarity. 2.1 META-REINFORCEMENT LEARNING (META-RL). Meta-RL is a technique to learn an inductive bias that accelerates the learning of a new task by training on a large number of training tasks. Formally, meta-training on tasks from the meta-training set $D_{\text{meta}} = \{D_k\}_{k=1,\dots,n}$ involves learning a policy $\hat\theta_{\text{meta}} = \arg\max_\theta \frac{1}{n} \sum_{k=1}^{n} \ell^k_{\text{meta}}(\theta)$ (8) where $\ell^k_{\text{meta}}(\theta)$ is a meta-training loss that depends on the particular method. Gradient-based meta-RL, let us take MAML by Finn et al. (2017) as a concrete example, sets $\ell^k_{\text{meta}}(\theta) = \ell^k\left(\theta + \alpha \nabla_\theta \ell^k(\theta)\right)$ (9) for a step-size α > 0; $\ell^k(\theta)$ is the objective of non-meta-RL (5). In this case $\ell^k_{\text{meta}}$ is the objective obtained on the task $D_k$ after one (or, in general, more) updates of the policy on the task. The idea behind this is that even if the policy $\hat\theta_{\text{meta}}$ does not perform well on all tasks in $D_{\text{meta}}$, it may be updated quickly on a new task $D_{\text{new}}$ to obtain a well-performing policy. This can either be done using the same procedure as that of meta-training time, i.e., by maximizing $\ell^{\text{new}}_{\text{meta}}(\theta)$ with the policy $\hat\theta_{\text{meta}}$ as the initialization, or by some other adaptation procedure. The meta-training method and the adaptation method in meta-RL, and meta-learning in general, can be different from each other. 2.2 LOGISTIC REGRESSION FOR ESTIMATING THE PROPENSITY SCORE. Consider standard supervised learning: given two distributions q(x) (say, train) and p(x) (say, test), we would like to estimate how a model's predictions ŷ(x) change across them. This is formally done using importance sampling: $\mathbb{E}_{x \sim p(x)}\, \mathbb{E}_{y|x}\left[\ell(y, \hat y(x))\right] = \mathbb{E}_{x \sim q(x)}\, \mathbb{E}_{y|x}\left[\beta(x)\, \ell(y, \hat y(x))\right]$; (10) where y|x are the true labels of the data, the predictions of the model are ŷ(x), and ℓ(y, ŷ(x)) is the loss for each datum (x, y). The importance ratio $\beta(x) = \frac{dp}{dq}(x)$, also known as the propensity score, is the Radon-Nikodym derivative (Resnick, 2013) of the two data densities and measures the odds of a sample x coming from the distribution p versus the distribution q. In practice, we do not know the densities q(x) and p(x) and therefore need to estimate β(x) using some finite data $X_q = \{x_1, \dots, x_m\}$ drawn from q and $X_p = \{x'_1, \dots, x'_m\}$ drawn from p. As Agarwal et al. (2011) show, this is easy to do using logistic regression. Set $z_k = 1$ to be the labels for the data in $X_q$ and $z_k = -1$ to be the labels of the data in $X_p$ for k ≤ m, and fit a logistic classifier on the combined 2m samples by solving $w^* = \arg\min_w \frac{1}{2m} \sum_{(x,z)} \log\left(1 + e^{-z\, w^\top x}\right) + c\, \|w\|^2$. (11) This gives $\beta(x) = \frac{P(z = -1 \mid x)}{P(z = 1 \mid x)} = e^{-w^{*\top} x}$. (12) Normalized Effective Sample Size ($\widehat{\mathrm{ESS}}$): A related quantity to β(x) is the normalized Effective Sample Size ($\widehat{\mathrm{ESS}}$), which we define as the relative number of samples from the target distribution p(x) required to obtain an estimator with performance (say, variance) equal to that of the importance sampling estimator (10).
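Before turning to how the ÊSS itself is estimated, a minimal sketch of the propensity-score fit in Equations 11-12 may help. It uses scikit-learn's logistic regression as an illustrative stand-in for solving (11); the batch shapes, the regularization mapping, and the toy data are assumptions, not details from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_propensity(X_q, X_p, c=1.0):
    """Sketch of Eqs. 11-12: label samples from q with z=+1 and samples from p
    with z=-1, fit an L2-regularized logistic classifier (no intercept, as in
    Eq. 12), and read off beta(x) = P(z=-1|x) / P(z=+1|x) = exp(-w^T x)."""
    X = np.vstack([X_q, X_p])
    z = np.concatenate([np.ones(len(X_q)), -np.ones(len(X_p))])
    clf = LogisticRegression(C=1.0 / c, fit_intercept=False).fit(X, z)
    w = clf.coef_.ravel()
    return lambda x: np.exp(-(x @ w))   # beta(x), the estimated propensity score

# Toy usage: two Gaussian batches standing in for data from two distributions.
rng = np.random.default_rng(0)
beta = fit_propensity(rng.normal(0.0, 1.0, (500, 4)), rng.normal(0.5, 1.0, (500, 4)))
print(beta(rng.normal(0.0, 1.0, (5, 4))).shape)   # (5,) importance weights
```

When the two batches come from nearly identical distributions, the fitted weights are close to zero and β(x) ≈ 1 everywhere, which is exactly the regime in which the ÊSS heuristic below evaluates to a value near one.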
It is not possible to compute the $\widehat{\mathrm{ESS}}$ without knowing both densities q(x) and p(x), but there are many heuristics for estimating it. A popular one in the Monte Carlo literature (Kong, 1992; Smith, 2013; Elvira et al., 2018) is $\widehat{\mathrm{ESS}} = \frac{1}{m} \frac{\left(\sum_{k=1}^{m} \beta(x_k)\right)^2}{\sum_{k=1}^{m} \beta(x_k)^2} \in [0, 1]$ (13) where $X = \{x_1, \dots, x_m\}$ is some finite batch of data. Observe that if two distributions q and p are close then the $\widehat{\mathrm{ESS}}$ is close to one; if they are far apart, the $\widehat{\mathrm{ESS}}$ is close to zero. 3 MQL. This section describes the MQL algorithm. We begin by describing the meta-training procedure of MQL, including a discussion of multi-task training, in Sec. 3.1. The adaptation procedure is described in Sec. 3.2. 3.1 META-TRAINING. MQL performs meta-training using the multi-task objective. Note that if one sets $\ell^k_{\text{meta}}(\theta) := \ell^k(\theta) = \mathbb{E}_{x \sim p_0^k}\left[q^k(x, u_\theta(x))\right]$ (14) in (8), then the parameters $\hat\theta_{\text{meta}}$ are such that they maximize the average returns over all tasks from the meta-training set. We use an off-policy algorithm named TD3 (Fujimoto et al., 2018b) as the building block and solve for $\hat\theta_{\text{meta}} = \arg\min_\theta \frac{1}{n} \sum_{k=1}^{n} \mathbb{E}_{\tau \sim D_k}\left[\mathrm{TD}^2(\theta)\right]$; (15) where TD(·) is defined in (7). As is standard in TD3, we use two action-value functions, parameterized by $\varphi_1$ and $\varphi_2$, and take their minimum to compute the target in (7). This trick, known as "double Q-learning", reduces the over-estimation bias. Let us emphasize that (14) is a special case of the procedure outlined in (8). The following remark explains why MQL uses the multi-task objective as opposed to the meta-training objective used, for instance, in existing gradient-based meta-RL algorithms. Remark 1. Let us compare the critical points of the m-step MAML objective (9) to those of the multi-task objective which uses (14). As is done by the authors in Nichol et al. (2018), we can perform a Taylor series expansion around the parameters θ to obtain $\nabla \ell^k_{\text{meta}}(\theta) = \nabla \ell^k(\theta) + 2\alpha(m-1)\left(\nabla^2 \ell^k(\theta)\right) \nabla \ell^k(\theta) + O(\alpha^2)$. (16) Further, note that $\nabla \ell^k_{\text{meta}}$ in (16) is also the gradient of the loss $\ell^k(\theta) + \alpha(m-1)\, \|\nabla \ell^k(\theta)\|_2^2$ (17) up to first order. This lends a new interpretation that MAML is attracted towards regions in the loss landscape that under-fit on individual tasks: parameters with large $\|\nabla \ell^k\|_2$ will be far from the local maxima of $\ell^k(\theta)$. The parameters α and m control this under-fitting. The larger the number of gradient steps, the larger the under-fitting effect. This remark suggests that the adaptation speed of gradient-based meta-learning comes at the cost of under-fitting on the tasks. 3.1.1 DESIGNING CONTEXT. As discussed in Sec. 1 and 4.4, the identity of the task in meta-RL can be thought of as the hidden variable of an underlying partially-observable MDP. The optimal policy thus depends on the entire trajectory of the states, actions and the rewards. We therefore design a recurrent context variable $z_t$ that depends on $\{(x_i, u_i, r_i)\}_{i \le t}$. We set $z_t$ to the hidden state at time t of a Gated Recurrent Unit (GRU by Cho et al. (2014)) model. All the policies $u_\theta(x)$ and value functions $q_\varphi(x, u)$ in MQL are conditioned on the context and implemented as $u_\theta(x, z)$ and $q_\varphi(x, u, z)$. Any other recurrent model can be used to design the context; we used a GRU because it offers a good trade-off between a rich representation and computational complexity.
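As an illustration of the context just described, the following sketch rolls a small GRU cell over a stream of (state, action, reward) tuples to produce z_t. The weight initialization, dimensions, and input layout are assumptions for the example, not the configuration used in the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUContext:
    """Minimal GRU cell producing the recurrent context z_t from (state, action,
    reward) inputs, in the spirit of Sec. 3.1.1; shapes are illustrative."""
    def __init__(self, input_dim, context_dim, seed=0):
        rng = np.random.default_rng(seed)
        scale = 1.0 / np.sqrt(input_dim + context_dim)
        self.W = rng.normal(0, scale, (3 * context_dim, input_dim))
        self.U = rng.normal(0, scale, (3 * context_dim, context_dim))
        self.b = np.zeros(3 * context_dim)
        self.context_dim = context_dim

    def step(self, inp, z_prev):
        d = self.context_dim
        gates = self.W @ inp + self.b
        rec = self.U @ z_prev
        r = sigmoid(gates[:d] + rec[:d])              # reset gate
        u = sigmoid(gates[d:2*d] + rec[d:2*d])        # update gate
        h = np.tanh(gates[2*d:] + r * rec[2*d:])      # candidate state
        return (1 - u) * z_prev + u * h               # new context z_t

# The policy u_theta(x_t, z_t) and critic q_phi(x_t, u_t, z_t) would be
# conditioned on z_t; here we only roll the context over a dummy trajectory.
gru = GRUContext(input_dim=6, context_dim=8)
z = np.zeros(8)
for x, u, r in [(np.ones(3), np.ones(2), 0.5)] * 4:   # (state, action, reward)
    z = gru.step(np.concatenate([x, u, [r]]), z)
print(z.shape)  # (8,)
```

Because z_t is a deterministic function of the ordered trajectory, the sketch also makes concrete the two properties discussed in Remark 2 below: the context is not permutation invariant and involves no sampling.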
Remark 2 ( MQL uses a deterministic context that is not permutation invariant ) . We have aimed for simplicity while designing the context . The context in MQL is built using an off-the-shelf model like GRU and is not permutation invariant . Indeed , the direction of time affords crucial information about the dynamics of a task to the agent , e.g. , a Half-Cheetah running forward versus backward has arguably the same state trajectory but in a different order . Further , the context in MQL is a deterministic function of the trajectory . Both these aspects are different than the context used by Rakelly et al . ( 2019 ) who design an inference network and sample a probabilistic context conditioned on a moving window . RL algorithms are quite complex and challenging to reproduce . Current meta-RL techniques which build upon them further exacerbate this complexity . Our demonstration that a simple context variable is enough is an important contribution .
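Putting the pieces together, a hedged sketch of the adaptation objective in Equation 2, combined with the ÊSS heuristic of Equation 13, could look as follows. Here `objective` and `beta_fn` are placeholders for the off-policy objective ℓ_new on a transition and the estimated propensity score of a replayed transition; this illustrates the loss structure rather than reproducing the authors' implementation.

```python
import numpy as np

def normalized_ess(betas):
    """ESS heuristic of Equation 13 from a batch of propensity scores."""
    betas = np.asarray(betas)
    return (betas.sum() ** 2) / (len(betas) * (betas ** 2).sum())

def mql_adaptation_loss(theta, theta_meta, new_batch, meta_batch, objective, beta_fn):
    """Sketch of Equation 2. `objective(theta, tau)` is the off-policy objective
    l_new on one transition; `beta_fn(tau)` is its propensity score (Sec. 2.2).
    Equation 2 is a maximization, so the negative is returned for a minimizer."""
    betas = np.array([beta_fn(tau) for tau in meta_batch])
    ess = normalized_ess(betas)
    # Off-policy term on the new task's data.
    new_term = np.mean([objective(theta, tau) for tau in new_batch])
    # beta-weighted off-policy term on replayed meta-training data.
    replay_term = np.mean(betas * np.array([objective(theta, tau) for tau in meta_batch]))
    # Automatically adapting proximal term: stronger when the tasks are dissimilar.
    proximal = (1.0 - ess) * np.sum((theta - theta_meta) ** 2)
    return -(new_term + replay_term) + proximal
```

The proximal weight (1 − ÊSS) captures the behaviour described above: when the new task resembles the meta-training tasks the ÊSS is near one and the policy is free to move, whereas for a dissimilar task the penalty keeps θ close to θ̂_meta during adaptation.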
The authors propose meta Q-learning, an algorithm for off-policy meta RL. The idea is to meta-train a context-dependent policy to maximize the expected return averaged over all training tasks, and then adapt this policy to any new task by leveraging both novel and past experience using importance sampling corrections. The proposed approach is evaluated on standard Mujoco benchmarks and compared to other relevant meta-rl algorithms.
SP:39e2b5a77cf6a3cf90efd0e78b2041855c4139fa
Meta-Q-Learning
M E TA - Q - L E A R N I N G Rasool Fakoor1 , Pratik Chaudhari2∗ , Stefano Soatto1 , Alexander Smola1 1 Amazon Web Services 2 University of Pennsylvania Email : { fakoor , soattos , smola } @ amazon.com , pratikac @ seas.upenn.edu A B S T R A C T This paper introduces Meta-Q-Learning ( MQL ) , a new off-policy algorithm for meta-Reinforcement Learning ( meta-RL ) . MQL builds upon three simple ideas . First , we show that Q-learning is competitive with state-of-the-art meta-RL algorithms if given access to a context variable that is a representation of the past trajectory . Second , a multi-task objective to maximize the average reward across the training tasks is an effective method to meta-train RL policies . Third , past data from the meta-training replay buffer can be recycled to adapt the policy on a new task using off-policy updates . MQL draws upon ideas in propensity estimation to do so and thereby amplifies the amount of available data for adaptation . Experiments on standard continuous-control benchmarks suggest that MQL compares favorably with the state of the art in meta-RL . 1 I N T R O D U C T I O N Reinforcement Learning ( RL ) algorithms have demonstrated good performance on simulated data . There are however two main challenges in translating this performance to real robots : ( i ) robots are complex and fragile which precludes extensive data collection , and ( ii ) a real robot may face an environment that is different than the simulated environment it was trained in . This has fueled research into MetaReinforcement Learning ( meta-RL ) which develops algorithms that “ meta-train ” on a large number of different environments , e.g. , simulated ones , and aim to adapt to a new environment with few data . How well does meta-RL work today ? Fig . 1 shows the performance of two prototypical meta-RL algorithms on four standard continuous-control benchmarks.1 We compared them to the following simple baseline : an off-policy RL algorithm ( TD3 by Fujimoto et al . ( 2018b ) ) and which was trained to maximize the average reward over all training tasks and modified to use a “ context variable ” that represents the trajectory . All algorithms in this figure use the same evaluation protocol . It is surprising that this simple non-meta-learning-based method is competitive with state-of-the-art meta-RL algorithms . This is the first contribution of our paper : we demonstrate that it is not necessary to meta-train policies to do well on existing benchmarks . Our second contribution is an off-policy meta-RL algorithm named Meta-Q-Learning ( MQL ) that builds upon the above result . MQL uses a simple meta-training procedure : it maximizes the average ∗Work done while at Amazon Web Services 1We obtained the numbers for MAML and PEARL from training logs published by Rakelly et al . ( 2019 ) . rewards across all meta-training tasks using off-policy updates to obtain θ̂meta = arg max θ 1 n n∑ k=1 E τ∼Dk [ ` k ( θ ) ] ( 1 ) where ` k ( θ ) is the objective evaluated on the transition τ obtained from the task Dk ( θ ) , e.g. , 1-step temporal-difference ( TD ) error would set ` k ( θ ) = TD2 ( θ ; τ ) . This objective , which we call the multi-task objective , is the simplest form of meta-training . For adapting the policy to a new task , MQL samples transitions from the meta-training replay buffer that are similar to those from the new task . This amplifies the amount of data available for adaptation but it is difficult to do because of the large potential bias . 
We use techniques from the propensity estimation literature for performing this adaptation and the off-policy updates of MQL are crucial to doing so . The adaptation phase of MQL solves arg max θ { E τ∼Dnew [ ` new ( θ ) ] + E τ∼Dmeta [ β ( τ ; Dnew , Dmeta ) ` new ( θ ) ] − ( 1− ÊSS ) ‖θ − θ̂meta‖22 } ( 2 ) whereDmeta is the meta-training replay buffer , the propensity score β ( τ ; Dnew , Dmeta ) is the odds of a transition τ belonging to Dnew versusDmeta , and ÊSS is the Effective Sample Size between Dnew and Dmeta that is a measure of the similarly of the new task with the meta-training tasks . The first term computes off-policy updates on the new task , the second term performs β ( · ) -weighted off-policy updates on old data , while the third term is an automatically adapting proximal term that prevents degradation of the policy during adaptation . We perform extensive experiments in Sec . 4.2 including ablation studies using standard meta-RL benchmarks that demonstrate that MQL policies obtain higher average returns on new tasks even if they are meta-trained for fewer time-steps than state-of-the-art algorithms . 2 B A C K G R O U N D This section introduces notation and formalizes the meta-RL problem . We discuss techniques for estimating the importance ratio between two probability distributions in Sec . 2.2 . Consider a Markov Decision Processes ( MDP ) denoted by xt+1 = f k ( xt , ut , ξt ) x0 ∼ pk0 , ( 3 ) where xt ∈ X ⊂ Rd are the states and ut ∈ U ⊂ Rp are the actions . The dynamics fk is parameterized by k ∈ { 1 , . . . , n } where each k corresponds to a different task . The domain of all these tasks , X for the states and U for the actions , is the same . The distribution pk0 denotes the initial state distribution and ξt is the noise in the dynamics . Given a deterministic policy uθ ( xt ) , the actionvalue function for γ-discounted future rewards rkt : = r k ( xt , uθ ( xt ) ) over an infinite time-horizon is qk ( x , u ) = E ξ ( · ) [ ∞∑ t=0 γt rkt |x0 = x , u0 = u , ut = uθ ( xt ) ] . ( 4 ) Note that we have assumed that different tasks have the same state and action space and may only differ in their dynamics fk and reward function rk . Given one task k ∈ { 1 , . . . , n } , the standard Reinforcement Learning ( RL ) formalism solves for θ̂k = arg max θ ` k ( θ ) where ` k ( θ ) = E x∼p0 [ qk ( x , uθ ( x ) ) ] . ( 5 ) Let us denote the dataset of all states , actions and rewards pertaining to a task k and policy uθ ( x ) by Dk ( θ ) = { xt , uθ ( xt ) , r k , xt+1 = f k ( xt , uθ ( xt ) , ξt ) } t≥0 , x ( 0 ) ∼pk0 , ξ ( · ) ; we will often refer toDk as the “ task ” itself . The Deterministic Policy Gradient ( DPG ) algorithm ( Silver et al. , 2014 ) for solving ( 5 ) learns a ϕ-parameterized approximation qϕ to the optimal value func- tion qk by minimizing the Bellman error and the optimal policy uθ that maximizes this approximation by solving the coupled optimization problem ϕ̂k = arg min ϕ E τ∼Dk [ ( qϕ ( x , u ) − rk − γ qϕ ( x′ , uθ̂k ( x ′ ) ) ) 2 ] , θ̂k = arg max θ E τ∼Dk [ q ϕ̂k ( x , uθ ( x ) ) ] . ( 6 ) The 1-step temporal difference error ( TD error ) is defined as TD2 ( θ ) = ( qϕ ( x , u ) − rk − γ qϕ ( x′ , uθ ( x′ ) ) ) 2 ( 7 ) where we keep the dependence of TD ( · ) on ϕ implicit . DPG , or its deep network-based variant DDPG ( Lillicrap et al. , 2015 ) , is an off-policy algorithm . 
This means that the expectations in ( 6 ) are computed using data that need not be generated by the policy being optimized ( uθ ) , this data can come from some other policy . In the sequel , we will focus on the parameters θ parameterizing the policy . The parameters ϕ of the value function are always updated to minimize the TD-error and are omitted for clarity . 2 . 1 M E TA - R E I N F O R C E M E N T L E A R N I N G ( M E TA - R L ) Meta-RL is a technique to learn an inductive bias that accelerates the learning of a new task by training on a large of number of training tasks . Formally , meta-training on tasks from the meta-training set Dmeta = { Dk } k=1 , ... , n involves learning a policy θ̂meta = arg max θ 1 n n∑ k=1 ` kmeta ( θ ) ( 8 ) where ` kmeta ( θ ) is a meta-training loss that depends on the particular method . Gradient-based meta-RL , let us take MAML by Finn et al . ( 2017 ) as a concrete example , sets ` kmeta ( θ ) = ` k ( θ + α∇θ ` k ( θ ) ) ( 9 ) for a step-size α > 0 ; ` k ( θ ) is the objective of non-meta-RL ( 5 ) . In this case ` kmeta is the objective obtained on the task Dk after one ( or in general , more ) updates of the policy on the task . The idea behind this is that even if the policy θ̂meta does not perform well on all tasks in Dmeta it may be updated quickly on a new task Dnew to obtain a well-performing policy . This can either be done using the same procedure as that of meta-training time , i.e. , by maximizing ` newmeta ( θ ) with the policy θ̂meta as the initialization , or by some other adaptation procedure . The meta-training method and the adaptation method in meta-RL , and meta-learning in general , can be different from each other . 2 . 2 L O G I S T I C R E G R E S S I O N F O R E S T I M AT I N G T H E P R O P E N S I T Y S C O R E Consider standard supervised learning : given two distributions q ( x ) ( say , train ) and p ( x ) ( say , test ) , we would like to estimate how a model ’ s predictions ŷ ( x ) change across them . This is formally done using importance sampling : E x∼p ( x ) E y|x [ ` ( y , ŷ ( x ) ) ] = E x∼q ( x ) E y|x [ β ( x ) ` ( y , ŷ ( x ) ) ] ; ( 10 ) where y|x are the true labels of data , the predictions of the model are ŷ ( x ) and ` ( y , ŷ ( x ) ) is the loss for each datum ( x , y ) . The importance ratio β ( x ) = dpdq ( x ) , also known as the propensity score , is the Radon-Nikodym derivative ( Resnick , 2013 ) of the two data densities and measures the odds of a sample x coming from the distribution p versus the distribution q . In practice , we do not know the densities q ( x ) and p ( x ) and therefore need to estimate β ( x ) using some finite data Xq = { x1 , . . . , xm } drawn from q and Xp = { x′1 , . . . , x′m } drawn from p. As Agarwal et al . ( 2011 ) show , this is easy to do using logistic regression . Set zk = 1 to be the labels for the data in Xq and zk = −1 to be the labels of the data in Xp for k ≤ m and fit a logistic classifier on the combined 2m samples by solving w∗ = min w 1 2m ∑ ( x , z ) log ( 1 + e−zw > x ) + c ‖w‖2 . ( 11 ) This gives β ( x ) = P ( z = −1|x ) P ( z = 1|x ) = e−w ∗ > x . ( 12 ) Normalized Effective Sample Size ( ÊSS ) : A related quantity to β ( x ) is the normalized Effective Sample Size ( ÊSS ) which we define as the relative number of samples from the target distribution p ( x ) required to obtain an estimator with performance ( say , variance ) equal to that of the importance sampling estimator ( 10 ) . 
It is not possible to compute the ÊSS without knowing both densities q ( x ) and p ( x ) but there are many heuristics for estimating it . A popular one in the Monte Carlo literature ( Kong , 1992 ; Smith , 2013 ; Elvira et al. , 2018 ) is ÊSS = ( 1/m ) ( ∑mk=1 β ( xk ) )² / ∑mk=1 β ( xk )² ∈ [ 0 , 1 ] ( 13 ) where X = { x1 , . . . , xm } is some finite batch of data . Observe that if two distributions q and p are close then the ÊSS is close to one ; if they are far apart , the ÊSS is close to zero . 3 MQL . This section describes the MQL algorithm . We begin by describing the meta-training procedure of MQL , including a discussion of multi-task training , in Sec . 3.1 . The adaptation procedure is described in Sec . 3.2 . 3.1 META-TRAINING . MQL performs meta-training using the multi-task objective . Note that if one sets ` kmeta ( θ ) := ` k ( θ ) = Ex∼pk0 [ qk ( x , uθ ( x ) ) ] ( 14 ) in ( 8 ) then the parameters θ̂meta are such that they maximize the average returns over all tasks from the meta-training set . We use an off-policy algorithm named TD3 ( Fujimoto et al. , 2018b ) as the building block and solve for θ̂meta = arg minθ ( 1/n ) ∑nk=1 Eτ∼Dk [ TD² ( θ ) ] ; ( 15 ) where TD ( · ) is defined in ( 7 ) . As is standard in TD3 , we use two action-value functions parameterized by ϕ1 and ϕ2 and take their minimum to compute the target in ( 7 ) . This trick , known as “ double Q-learning ” , reduces the over-estimation bias . Let us emphasize that ( 14 ) is a special case of the procedure outlined in ( 8 ) . The following remark explains why MQL uses the multi-task objective as opposed to the meta-training objective used , for instance , in existing gradient-based meta-RL algorithms . Remark 1 . Let us compare the critical points of the m-step MAML objective ( 9 ) to those of the multi-task objective which uses ( 14 ) . As is done by the authors in Nichol et al . ( 2018 ) , we can perform a Taylor series expansion around the parameters θ to obtain ∇ ` kmeta ( θ ) = ∇ ` k ( θ ) + 2α ( m − 1 ) ( ∇² ` k ( θ ) ) ∇ ` k ( θ ) + O ( α² ) . ( 16 ) Further , note that ∇ ` kmeta in ( 16 ) is also the gradient of the loss ` k ( θ ) + α ( m − 1 ) ‖∇ ` k ( θ ) ‖₂² ( 17 ) up to first order . This lends a new interpretation : MAML is attracted towards regions in the loss landscape that under-fit on individual tasks , since parameters with large ‖∇ ` k‖2 will be far from the local maxima of ` k ( θ ) . The parameters α and m control this under-fitting ; the larger the number of gradient steps , the larger the under-fitting effect . This remark suggests that the adaptation speed of gradient-based meta-learning comes at the cost of under-fitting on the tasks . 3.1.1 DESIGNING CONTEXT . As discussed in Sec . 1 and 4.4 , the identity of the task in meta-RL can be thought of as the hidden variable of an underlying partially-observable MDP . The optimal policy thus depends on the entire trajectory of states , actions and rewards . We therefore design a recurrent context variable zt that depends on { ( xi , ui , ri ) } i≤t . We set zt to the hidden state at time t of a Gated Recurrent Unit ( GRU by Cho et al . ( 2014 ) ) model . All the policies uθ ( x ) and value functions qϕ ( x , u ) in MQL are conditioned on the context and implemented as uθ ( x , z ) and qϕ ( x , u , z ) . Any other recurrent model can be used to design the context ; we used a GRU because it offers a good trade-off between a rich representation and computational complexity .
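Returning to the ÊSS heuristic of ( 13 ), it reduces to a few lines given a batch of estimated propensity scores; the function name and example values below are illustrative. In the adaptation objective ( 2 ), the coefficient of the proximal term would then be 1 − ÊSS, so regularization towards θ̂meta is strong when the new task looks unlike the meta-training data and nearly vanishes when they overlap.

```python
import numpy as np

def normalized_ess(betas):
    """Normalized effective sample size of (13): close to 1 when the two distributions
    overlap, close to 0 when they are far apart."""
    betas = np.asarray(betas, dtype=float)
    return (betas.sum() ** 2) / (len(betas) * (betas ** 2).sum())

# Example: propensity scores estimated on a batch of meta-training transitions for a new task.
ess = normalized_ess([0.9, 1.1, 0.8, 1.2])   # ~0.98, tasks look similar
proximal_coeff = 1.0 - ess                    # weight on ||theta - theta_meta||^2 in (2)
```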
Remark 2 ( MQL uses a deterministic context that is not permutation invariant ) . We have aimed for simplicity while designing the context . The context in MQL is built using an off-the-shelf model like a GRU and is not permutation invariant . Indeed , the direction of time affords crucial information about the dynamics of a task to the agent , e.g. , a Half-Cheetah running forward versus backward has arguably the same state trajectory but in a different order . Further , the context in MQL is a deterministic function of the trajectory . Both these aspects are different from the context used by Rakelly et al . ( 2019 ) , who design an inference network and sample a probabilistic context conditioned on a moving window . RL algorithms are quite complex and challenging to reproduce . Current meta-RL techniques , which build upon them , further exacerbate this complexity . Our demonstration that a simple context variable is enough is an important contribution .
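The deterministic context of Remark 2 can be sketched as follows: a GRU consumes the past transitions ( xi , ui , ri ), its hidden state serves as zt, and the policy head acts on the concatenation ( xt , zt ). Layer sizes, the single-layer GRU, and the tanh-squashed action head are illustrative choices, not the exact architecture used in MQL.

```python
import torch
import torch.nn as nn

class ContextPolicy(nn.Module):
    """Deterministic recurrent context: z_t is the GRU hidden state computed from the
    transitions {(x_i, u_i, r_i)}_{i<=t}, and the policy u_theta(x, z) acts on (x_t, z_t)."""
    def __init__(self, x_dim, u_dim, z_dim=64, hidden=256):
        super().__init__()
        self.gru = nn.GRU(input_size=x_dim + u_dim + 1, hidden_size=z_dim, batch_first=True)
        self.policy = nn.Sequential(
            nn.Linear(x_dim + z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, u_dim), nn.Tanh(),
        )

    def forward(self, traj, x_t):
        # traj: (batch, t, x_dim + u_dim + 1) past transitions; x_t: (batch, x_dim) current state.
        _, h = self.gru(traj)            # h: (1, batch, z_dim), the deterministic context z_t
        z_t = h.squeeze(0)
        return self.policy(torch.cat([x_t, z_t], dim=-1))
```

A value function qϕ ( x , u , z ) would be conditioned on the same context by concatenating z_t to its input.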
This paper proposes Meta Q-Learning (MQL), an algorithm for efficient off-policy meta-learning. The method relies on a simple multi-task objective which provides initial parameter values for the adaptation phase. Adaptation is performed by gradient descent, minimizing TD-error on the new validation task (regularizing towards initial parameter values). To make adaptation data efficient, the method makes heavy use of off-policy data generated during meta-training, by minimizing its importance weighted TD-error. Importance weights are estimated via a likelihood ratio estimator, and are also used to derive the effective sample size of the meta-training batch, which is used to adaptively weight the regularization term. Intuitively, this has the effect of turning off regularization when meta-training trajectories are “close” to validation trajectories. One important but somewhat orthogonal contribution of the paper is to highlight the importance of context in meta-learning and fast adaptation. Concretely, the authors show that a simple actor-critic algorithm (TD3), whose policy and value are conditioned on a context variable derived from a recurrent network performs surprisingly well in comparison to SoTA meta-learning algorithms like PEARL. MQL is evaluated on benchmark meta-RL environments from continuous control tasks and is shown to perform competitively with PEARL.
SP:39e2b5a77cf6a3cf90efd0e78b2041855c4139fa
Discriminability Distillation in Group Representation Learning
1 INTRODUCTION . With the rapid development of deep learning and the easy access to large-scale group data , recognition tasks using group information have drawn great attention in the computer vision community . The rich information provided by different elements can be complementary and boost the performance of tasks such as face recognition , action recognition , and person re-identification ( Wang et al. , 2017b ; Zhong et al. , 2018 ; Girdhar et al. , 2017 ; Simonyan & Zisserman , 2014 ; Yang et al. , 2017 ; Liu et al. , 2019a ; Rao et al. , 2017b ) . While traditional practice for group-based recognition is either to aggregate the whole set by average ( Li et al. , 2014 ; Taigman et al. , 2014 ) or max pooling ( Chowdhury et al. , 2016 ) , or just to sample randomly ( Wang et al. , 2016 ) , the fact that certain elements contribute negatively in recognition tasks has been ignored . Thus , an important issue is to select representatives from sets for efficient group understanding . To tackle such cases , previous methods aim at defining the “ quality ” or “ saliency ” for each element in a group ( Liu et al. , 2017c ; Yang et al. , 2017 ; Rao et al. , 2017b ; Nikitin et al. , 2017 ) . The weights for each element can be automatically learned by self-attention . For example , Liu et al . ( 2017c ) proposes the Quality Aware Network ( QAN ) to learn a quality score for each image inside an image set during network training . Other works adopt the same idea and extend it to specific tasks such as video-based person re-identification ( Li et al. , 2018 ; Wu et al. , 2018 ) and action recognition ( Wang et al. , 2018c ) by learning spatial-temporal attentions . However , these online quality or attention learning procedures are either manually designed or learned through a black box , which lacks explainability . In this work , we look deeper into the underlying mechanism for defining effective elements instead of relying on self-learned attention . Assuming that a base network has already been trained for element-based recognition using class labels , we define the “ discriminability ” of one sample by how difficult it is for the network to discriminate its class . As pointed out by Liu et al . ( 2018 ) , elements whose feature embeddings lie close to the centroid of their corresponding class are the representatives , while features that lie far away or closer to other classes are the confusing ones , which are not discriminative enough . Inspired by this observation , we identify a successful discriminability indicator by measuring one embedding ’ s distances to the class centroids and computing the ratio between the positive and the hardest-negative , where the positive is its distance to its own class ’ s centroid and the hardest-negative is the closest centroid among the other classes . This indicator is defined as the discriminability distillation regulation ( DDR ) . Armed with recent theories on the homogeneity between class centroids and projection weights of classifiers ( Wang et al. , 2017a ; Liu et al. , 2017b ; 2018 ; Deng et al. , 2019a ) , the entire distance-measuring procedure can be easily accomplished by simply encoding all elements in one group . Thus , the DDR scores can be assessed for each element after the training of the base network . This assessment procedure is highly flexible , requiring neither human supervision nor re-training of the base network , so it can be adapted to any existing base .
With our explicitly designed discriminability indicator on the training set , the distillation of such discriminability can be successfully performed with a lightweight discriminability distillation network ( DDNet ) , which shows the superiority of our proposed indicator . We call the whole procedure uniformly as discriminability distillation learning ( DDL ) . The next step is towards finding a better aggregation policy . At the test phase , all elements are firstly sent to the light-weight DDNet . Then element features will be weighted aggregated by their DDR score into group representation . Moreover , in order to achieve the trade-off between accuracy and efficiency , we can filter elements by DDR score and only extract element features of high score . Since the base model tends to be heavy , the filter can save much computation consumption . We evaluate the effectiveness of our proposed DDL on several classical yet challenging tasks . Comprehensive experiments show the advantage of our method on both recognition accuracy and computation efficiency . We achieve state-of-the-art results without modifying the base networks . We highlight our contributions as follows : ( 1 ) We define the discriminability of one element within a group from a more essential and explicable view , and propose an efficient indicator . ( 2 ) We verify that a light-weight network has the capacity of distilling discriminability from the assessed elements . Combining the post-processing with the network , the great computation burden can be saved comparing with existing methods . ( 3 ) We validate the effectiveness of DDL for both efficiency and accuracy on set-to-set face recognition and action recognition through extensive studies . State-ofthe-art results can be achieved . 2 RELATED WORK . 2.1 SET-TO-SET RECOGNITION . Set-to-set recognition which utilizes a group of data of the same class , has been proved efficient on various tasks and drawn much attention these years since the more videos and group datasets are available . Compared with recognition with a single image , set-to-set recognition can further explore the complementary information among set elements and benefit from it . Particularly in this paper , we care for the basic task of face and action recognition . Face Recognition . To tackle set-to-set face recognition problem . ( Wolf et al. , 2011 ; Kalka et al. , 2018 ; Beveridge et al. , 2013 ; Klare et al. , 2015 ) , traditional methods directly estimate the feature similarity among sets of feature vectors ( Arandjelovic et al. , 2005 ; Harandi et al. , 2011 ; Cevikalp & Triggs , 2010 ) . Other works seek to aggregate element features by simply applying max pooling ( Chowdhury et al. , 2016 ) or average pooling ( Li et al. , 2014 ; Taigman et al. , 2014 ) among set features to form a compact representation . However , since most set images are under unconstrained scenes , huge variations on blur , resolution , and occlusion appear , which will degrade the set feature discrimination . How to design a proper aggregation method for set face representation has been the key problem for this approach . Recently , a few methods explore the quality or attention mechanism to form set representation . GhostVLAD ( Zhong et al. , 2018 ) improves traditional VLAD and down weight low quality element features . While Rao et al . ( 2017a ) combine LSTM and reinforcement learning to discard low quality element features . Liu et al . ( 2017c ) and Yang et al . 
( 2017 ) introduce attention mechanisms to assign quality scores for different elements and aggregate feature vectors by quality weighted sum . To predict the quality score , an online attention network module is added and co-optimized by the target set-to-set recognition task . However , the definition of generated ’ quality ’ score remains unclear and the learning procedures are learned through a black box , which lacks explainability . In our work , we claim that the most significant indicator to show whether the group representation can be benefited from an element is not the quality or an inexplicable score , but the discriminability . And a novel discriminability distillation learning procedure is proposed . Action recognition . With the advance of multimedia era , millions of hours of videos are uploaded to video platforms every day , so video understanding task like action recognition has become a popular research topic . Real-world videos contain variable frames , so it is not practical to put the whole video to a memory limited GPU . The most usual approach for video understanding is to sample frames or clips and design late fusion strategy to form video-level prediction . Frame-based methods ( Yue-Hei Ng et al. , 2015 ; Simonyan & Zisserman , 2014 ; Girdhar et al. , 2017 ) firstly extract frame features and aggregate them . Simonyan & Zisserman ( 2014 ) propose the twostream network to simultaneously capture the appearance and motion information . Wang et al . ( 2017b ) add attention module and learn to discard unrelated frames . Frames-based methods are computational efficient , but only aggregate high-level frame feature tends to limit the model ability to handle complex motion and temporal relation . Clip-based method ( Tran et al. , 2015 ; 2018 ; Feichtenhofer et al. , 2018 ) use 3D convolutional neural network to jointly capture spatio-temporal features , which perform better on action recognition . However , clip-based methods highly rely on the dense sample strategy , which introduces huge computational consumption and makes it unpractical to application . In this article , we show that by combing our DDL , the clip-based methods can achieve both excellent performance and computation efficiency . 3 DISCRIMINABILITY DISTILLATION LEARNING . In this section , we first formulate the problem of group representation learning in section 3.1 and then define the discriminability distillation regulation ( DDR ) in section 3.2 . Next , we introduce the whole discriminability distillation learning ( DDL ) procedure in section 3.3 . In sections 3.4 and 3.5 , we discuss the aggregation method and the advantage of our DDL , respectively . 3.1 FORMULATION OF GROUP REPRESENTATION LEARNING . Group representation learning focuses on formulating a uniform representation for a whole set of elements . The core of either verification task or classification task is how to aggregate features of a given element group . Define fi as the embedded feature of element Ii in a group IS , then the uniform feature representation of the whole group is FIS = G ( f1 , f2 , · · · , fi ) , ( 1 ) where G indicates the feature aggregation module . While previous research has revealed that conducting G with quality scores ( Liu et al. , 2017c ) has priority over simple aggregation , this kind of methods is not explainable . In this article , we propose a discriminability distillation learning ( DDL ) process to generate the discriminability of feature representation . 
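A minimal sketch of the aggregation module G in ( 1 ) is given below: plain average pooling versus a score-weighted sum, where the scores stand in for the DDR scores defined in the next subsection. The function name and the normalization of the weights are our own illustrative choices.

```python
import numpy as np

def aggregate(features, scores=None):
    """The aggregation module G of (1): average pooling when `scores` is None,
    otherwise a score-weighted sum (the weights would be the DDR scores of Sec. 3.2)."""
    features = np.asarray(features)      # (s, d) element features f_1 .. f_s
    if scores is None:
        return features.mean(axis=0)
    w = np.asarray(scores, dtype=float)
    w = w / w.sum()                      # normalize the weights over the group
    return (w[:, None] * features).sum(axis=0)
```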
3.2 DISCRIMINABILITY DISTILLATION REGULATION . Towards learning efficient and accurate G , we design the discriminability distillation regulation ( DDR ) to generate the discriminability to replace the traditional quality score . In DDR , we jointly consider the feature space distribution and explicitly distill the discriminability by encoding the intra-class distance and inter-class distance with class centroids . Let X denote the training set with C identities , and let Wj , j ∈ [ 1 , C ] , be the class centroids . For feature fi , i ∈ [ 1 , s ] with class c , where s denotes the size of X , the intra-class distance and inter-class distance are formulated as Cic = ( fi · Wc ) / ( ‖fi‖2 ‖Wc‖2 ) , Cij = ( fi · Wj ) / ( ‖fi‖2 ‖Wj‖2 ) , j ≠ c . ( 2 ) The intra-class distance Cic and inter-class distance Cij are shown in Figure 2 . After training the base model on the classification task , features of elements from the same class are projected tightly in the embedding hyperspace in order to form an explicit decision boundary . Furthermore , elements close to the centroid of their corresponding class are the representative ones , while elements far away from their corresponding class or closer to other classes are not discriminative enough . Based on this observation , we define the discriminability Qi of fi as : Qi = Cic / max { Cij | j ∈ [ 1 , C ] , j ≠ c } , ( 3 ) i.e. , the ratio between the feature ’ s distance to its own class centroid and its distance to the hardest-negative class centroid . Considering the varying number of elements in different groups , we further normalize the discriminability by : Di = τ ( ( Qi − µ ( { Qj | j ∈ [ 1 , s ] } ) ) / σ ( { Qj | j ∈ [ 1 , s ] } ) ) ( 4 ) where τ ( · ) , µ ( · ) and σ ( · ) denote the sigmoid function , the mean value and the standard deviation value of { Qj | j ∈ [ 1 , s ] } , respectively . We denote Di as the discriminability distillation regulation ( DDR ) score . Combined with the feature space distribution , the DDR score Di is more interpretable and reasonable than the quality score in traditional quality learning . It can discriminate features better by explicitly encoding the intra- and inter-class distances with class centroids .
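Putting ( 2 )–( 4 ) together, the DDR scores for a set of elements can be sketched as below. The function assumes element features of shape (s, d), class centroids taken from the classifier weights Wj of shape (C, d), and integer labels; these names, and the choice to standardize over the given set of features, are illustrative rather than the authors' exact implementation.

```python
import numpy as np

def ddr_scores(features, centroids, labels):
    """DDR scores of (2)-(4): cosine similarity to the own-class centroid (C_ic), the hardest
    negative max_{j != c} C_ij, their ratio Q_i, and the normalized score D_i."""
    features = np.asarray(features, dtype=float)   # (s, d)
    centroids = np.asarray(centroids, dtype=float) # (C, d), e.g. classifier weights W_j
    labels = np.asarray(labels)                    # (s,) class index c of each element

    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)
    sim = f @ w.T                                  # (s, C) all cosine similarities, eq. (2)

    idx = np.arange(len(labels))
    c_pos = sim[idx, labels]                       # C_ic
    sim_neg = sim.copy()
    sim_neg[idx, labels] = -np.inf                 # exclude the own class
    c_neg = sim_neg.max(axis=1)                    # hardest negative, max_{j != c} C_ij

    q = c_pos / c_neg                              # eq. (3)
    z = (q - q.mean()) / (q.std() + 1e-8)          # standardize within the given set
    return 1.0 / (1.0 + np.exp(-z))                # eq. (4): sigmoid of the standardized ratio
```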
This paper studies how to aggregate features from group inputs. The paper proposes Discriminability Distillation Learning (DDL) to compute the aggregation coefficients. The method assumes that each sample has a discriminability property that is directly related to the task. The authors define this property and propose to learn such property by an auxiliary network. Such a network can be used in many models without affecting their original training procedure and is able to improve the performances on many tasks, including set-to-set face recognition and action recognition. The experimental results are comprehensive and convincing.
SP:97138888ff5e94c0c460690fd21246ab1bf5a39b
In this paper, the authors proposed a discriminability distillation learning (DDL) method for group representation learning, such as action recognition and face recognition. The main insight of DDL is to explicitly design the discriminability using embedded class centroids on a proxy set, and to show that the discriminability distribution w.r.t. the element space can be distilled by a light-weight auxiliary distillation network. The experimental results on the action recognition task and face recognition task show that the proposed method appears to be effective compared with some related methods. The detailed comments are listed as follows,
SP:97138888ff5e94c0c460690fd21246ab1bf5a39b
Learning to Prove Theorems by Learning to Generate Theorems
1 INTRODUCTION . Automated theorem proving is a key task in Artificial Intelligence . The goal is to automatically generate a proof , given a conjecture ( the target theorem ) and a knowledge base of known facts , all expressed in a formal language . Automated theorem proving is useful in a wide range of applications , including the verification and synthesis of software and hardware systems ( Gu et al. , 2016 ; Darvas et al. , 2005 ; Kern & Greenstreet , 1999 ) . Automated theorem proving boils down to a search problem : finding the sequence of symbol manipulations that generate a valid proof . A prover typically works backward : starting from the theorem statement , it searches for a path that connects the theorem to known facts in the knowledge base . The fundamental challenge lies in the explosion of search space , in particular with long proofs and large knowledge bases . The success of theorem proving thus relies on effective heuristics that guide the search by deciding the next step the prover should take . Deep learning has emerged as a promising approach to learning search heuristics in a automated theorem prover ( Irving et al. , 2016 ; Yang & Deng , 2019 ; Whalen , 2016 ; Loos et al. , 2017 ; Bansal et al. , 2019a ) . The search process fundamentally reduces to a sequence of actions on manipulating a set of symbols . Thus a deep network can be trained to select the best action at each step . A key challenge is how to train such networks . Prior work has used human-written theorems and proofs to perform imitation learning and has shown promising results ( Loos et al. , 2017 ; Yang & Deng , 2019 ; Whalen , 2016 ; Paliwal et al. , 2019 ) . The training data consists of theorems and proofs manually written by human experts in a formal language , and the prover is trained to imitate the proof steps demonstrated by humans . However , relying on human-written data has a major drawback , that is , such data has limited availability and scalability . Writing theorems and proofs in a formal language requires highly specialized knowledge and skill , including mathematics , computer programming , and proficiency in the particular formal language . For a computer science graduate student , it can take months to master a new formal language such as Mizar , Metamath or HOLight ( Wiedijk , 2003 ) , after which it can take days to formalize a single page of a math textbook . This makes it impractical to crowdsource human-written proofs at large scale . An alternative to imitation learning is reinforcement learning , which requires only formalized theorem statements but not their proofs . During training , the prover estimates the value of each action through exploration . This reinforcement learning approach substantially reduces the amount of manual formalization needed , but at the expense of sample efficiency . The prover needs positive rewards to assess past attempts , but positive rewards are only available when the prover finds a com- plete proof , which is rare because it involves a combination of multiple correct steps . This leads to extremely sparse positive rewards , and in turn very low sample efficiency . In this paper , we propose to learn search heuristics using synthetic data . The basic idea is to construct a generator that automatically synthesizes new theorems and their proofs , which are then used to augment human-written data . 
To generate a new theorem and its proof , the generator applies an inference rule on a set of existing theorems and combines their proofs to form the proof of the new theorem . Similar to the prover , the generator performs a sequence of symbol manipulations , albeit in the inverse direction , going forward from existing theorems to a new theorem instead of from a target theorem to existing ones . A key question is how to construct a generator such that the generated data is useful . The space of new theorems and proofs is infinite , but a prover can only process a finite amount of data during training . Thus , to maximize the utility of the generated data , we make the generator learnable by parametrizing it with deep networks . We hypothesize that the generated data will be more useful if they are similar to human-written data . Thus we use human-written data to train a generator . We consider two scenarios . If the human-written data consists of both theorem statements and their proofs , we train the generator to follow the proof steps in the forward direction , so that a well-trained generator would derive theorems humans tend to derive . If the human-written data consists of only theorem statements but not their proofs , we use reinforcement learning to train the generator such that the generated theorems are similar to the human-written theorems . We measure similarity using a language model trained on the human-written theorems . We instantiate our approach in Metamath ( Megill , 2019 ) , a popular language for formal mathematics , and with Holophrasm ( Whalen , 2016 ) , a Metamath neural prover . We propose a neural theorem generator we call “ MetaGen ” , which synthesizes new theorems and their proofs expressed in the formalism of Metamath . To the best of our knowledge , MetaGen is the first neural generator of synthetic training data for theorem proving . Experiments on real-world Metamath tasks demonstrate that synthetic data from MetaGen helps the prover prove more human-written theorems , achieving state-of-the-art results . Experiments also show that our approach can synthesize useful data , even when there are only human-written theorems but zero proofs during training . 2 RELATED WORK . Automated theorem proving Our work is related to prior work on learning to prove theorems ( Whalen , 2016 ; Gauthier et al. , 2018 ; Bansal et al. , 2019a ; Yang & Deng , 2019 ; Loos et al. , 2017 ; Balunovic et al. , 2018 ; Kaliszyk et al. , 2018 ; Bansal et al. , 2019b ) . Our work directly builds off of Holophrasm ( Whalen , 2016 ) , a neural-augmented theorem prover for Metamath . It contains three deep networks to generate actions and initial values to guide proof search following the UCT algorithm ( Kocsis & Szepesvári , 2006 ) . TacticToe ( Gauthier et al. , 2018 ) , DeepHOL ( Bansal et al. , 2019a ) and ASTactic ( Yang & Deng , 2019 ) are learning-based theorem provers for higher-order logic based on various interactive theorem provers , including HOL4 ( Slind & Norrish , 2008 ) , HOL Light ( HOLLight ) and Coq ( Bertot & Castéran , 2004 ) . Paliwal et al . ( 2019 ) improves DeepHOL by representing formulas as graphs . Loos et al . ( 2017 ) propose to learn clause selection by deep learning inside the first-order logic prover E ( Schulz , 2002 ) . FastSMT ( Balunovic et al. , 2018 ) learns to compose search heuristics as programs with branches for the SMT solver ( De Moura & Bjørner , 2008 ) .
All of these methods are orthogonal to our approach because all of their provers are learned from human-written training data , whereas our contribution is on training a neural generator of synthetic training data for theorem proving . Kaliszyk et al . ( 2018 ) ; Bansal et al . ( 2019a ; b ) use reinforcement learning to train provers with only human-written theorems but not their proofs . During training , a prover collects rewards only upon finding full proofs . In contrast , we always train our prover using imitation learning . Under the same setting with only human-written theorems but not proofs , we use reinforcement learning to train our generator , whose reward is the similarity between a generated theorem and a human-written theorem , as measured by a language model of human-written theorems . Our reinforcement learning task is much easier because the reward is continuous and there are many ways to generate theorems similar to human-written ones . Automatic goal generation by self-play Our work is similar to the line of work in reinforcement learning ( Florensa et al. , 2018 ; Sukhbaatar et al. , 2017 ; 2018 ; Durugkar & Stone , 2018 ) that deploys one agent to generate tasks for another agent to accomplish . Sukhbaatar et al . ( 2017 ) ; Florensa et al . ( 2018 ) propose to train these two agents by adversarial self-play , where the generation agent learns to produce difficult goals for the other agent . With self-play , the generator learns to increase the difficulty of goals and build a learning curriculum automatically . We pursue similar ideas in the new context of theorem proving by learning to generate synthetic theorems to train the prover . Also of note is that we have no adversarial self-play . The goal of the generator is to discover novel theorems similar to human-written ones , not to beat the prover . Recently , Huang ( 2019 ) introduced a two-player game which encourages players to learn to predict the consistency of formulas in first-order logic by self-play . These two players behave symmetrically and compete with each other in the game . In contrast , our generator and prover execute different tasks , and are co-operative . In addition , their game remains a theoretical proposal without any empirical validation , whereas we have performed experiments on large-scale data . 3 BACKGROUND ON METAMATH . Metamath is a language for developing formal mathematics . It is one of the simplest formal systems . It has only one inference rule , called substitution , but it is universally applicable in formalizing a large portion of mathematics 1 and different types of logic ( Megill , 2019 ) . A knowledge base in Metamath consists of a set of theorems including axioms , which are admitted to be true , and others that are derived from proofs . Each theorem has one or more expressions , including one assertion and zero or more hypotheses . The hypotheses provide the preconditions , such as x = y² and y is an even number , to prove the assertion , such as x is divisible by 4 . Following Whalen ( 2016 ) , an expression is represented as a tree of tokens , whose nodes are either constants or variables . A constant node has a fixed number of children ( including zero ) and a variable has no children . Therefore , we represent each expression as a unique sequence of tokens by traversing its parse tree in pre-order . A proof is a sequence of steps using substitution .
A proof step has two parts , a theorem that is declared earlier than the current theorem in the knowledge base , and a substitution that maps a variable in this theorem to a new expression . For example , we have a theorem t , hypotheses : A = B ( 1 ) assertion : CFA = CFB ( 2 ) { A , B , C , F } is the set of variables in t. Let φ be a substitution to map each variable in t to a new expression , A→ 2 B → ( 1 + 1 ) C → 2 F → + ( 3 ) By replacing variables in the t with their corresponding expressions from φ , we have a new assertion and a set of new hypotheses , new hypotheses : 2 = ( 1 + 1 ) ( 4 ) new assertion : 2 + 2 = 2 + ( 1 + 1 ) ( 5 ) and this proof step ( t , φ ) demonstrates that the new assertion 2 + 2 = 2 + ( 1 + 1 ) is entailed by the new hypothesis 2 = ( 1 + 1 ) . Note that we need to substitute all occurrences of the same variable with the same expression in both the assertion and hypotheses 2 . Formally , let e ∈ E be an expression ( a tree of tokens ) with l unique variables fe = ( f1 , f2 , ... , fl ) ∈ F l , and let φ ∈ F → E be a substitution . Let e ( φ ) denote the new expression obtained by replacing 1Its largest knowledge base , set.mm ranks 3rd in the ” Formalizing 100 Theorems ” challenge ( Wiedijk ) . 2Variables in Metamath are called metavariables , which are different from variables bound by quantifiers in the first-order and higher-order logic . fi in e with φ ( fi ) , for i = 1 , 2 ... , l. And given k expressions e = ( e1 , e2 , ... , ek ) ∈ Ek , let e ( φ ) = ( e1 ( φ ) , e2 ( φ ) , ... , ek ( φ ) ) represent applying the substitution to all k expressions . Given a theorem t , let at be its assertion and ht = ( ht,1 , ht,2 , ... , ht , m ) ∈ Em be its hypotheses . Let φt be a substitution for variables in t. A proof step s = ( t , φt ) ∈ Em+1 × ( F → E ) demonstrates an entailment of assertion at ( φt ) by hypotheses ht ( φt ) . In this formal system , proving a theorem τ means finding a tree such that ( 1 ) the root node is the assertion of τ , ( 2 ) each leaf node is either empty or one of the hypotheses of τ , and ( 3 ) each internal node is an expression associated with a proof step that demonstrates an entailment of the node by its children . To prove a target theorem τ , it is the most straightforward to reason backward . We start by selecting a proof step that will demonstrate an entailment of the assertion of the target theorem , that is , at ( φ ) = aτ . It is worth noting that if we pick a particular theorem t , if a valid φ exists , φ ( f ) is uniquely determined for any variable f that occurs in the assertion ( recall that each expression is a tree of tokens ) . But φ ( f ) is not uniquely determined if f only occurs in the hypotheses ( f is called a hypothesis variable , or a assertion variable if it only occurs in the assertion ) , because it can be replaced with anything . Once this initial proof step ( t , φ ) is fully specified , the assertion aτ is entailed by a set of new hypotheses hτ ( φ ) , and the goal of proving theorem τ has now been decomposed to the subgoals of finding entailment of the new hypotheses hτ ( φ ) by the original hypotheses hτ .
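To make the notions of token trees, pre-order serialization, and substitution e ( φ ) concrete, here is a small self-contained sketch. It reproduces the hypothesis of the worked example (A = B with A → 2 and B → ( 1 + 1 ), giving the new hypothesis 2 = ( 1 + 1 )); the Expr class and its token conventions are simplifications of Metamath's actual grammar, written for illustration only.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Expr:
    """An expression as a tree of tokens: constants may have children, variables are leaves."""
    token: str
    children: List["Expr"] = field(default_factory=list)

    def substitute(self, phi: Dict[str, "Expr"]) -> "Expr":
        """Replace every occurrence of each variable in phi by its expression, i.e. e(phi)."""
        if self.token in phi:
            return phi[self.token]
        return Expr(self.token, [c.substitute(phi) for c in self.children])

    def preorder(self) -> List[str]:
        """The unique pre-order token sequence used to serialize the tree."""
        out = [self.token]
        for c in self.children:
            out.extend(c.preorder())
        return out

# Hypothesis of theorem t: A = B, with variables A and B as leaves.
hyp = Expr("=", [Expr("A"), Expr("B")])

# The substitution phi of (3), restricted to the variables appearing in the hypothesis.
phi = {"A": Expr("2"),
       "B": Expr("+", [Expr("1"), Expr("1")])}

print(hyp.substitute(phi).preorder())   # ['=', '2', '+', '1', '1'], i.e. 2 = (1 + 1), the new hypothesis (4)
```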
This paper proposes a generative model for proofs in Metamath, a language for formalizing mathematics. The model includes neural networks, which provide guidance about which fact to try to prove next and how to prove the fact from the facts derived so far. The parameters of these networks are learned from existing proofs or theorem statements. The main purpose of this model is to generate synthetic theorems and proofs that can be used to train the neural networks of a data-driven search-based theorem prover. The experiments with the Metamath set.mm knowledge base show the benefits of the synthetically generated proofs for building a data-driven theorem prover.
SP:ef00d1cc6981df5591b757e7a46ba9179d1fc50a
Learning to Prove Theorems by Learning to Generate Theorems
1 INTRODUCTION . Automated theorem proving is a key task in Artificial Intelligence . The goal is to automatically generate a proof , given a conjecture ( the target theorem ) and a knowledge base of known facts , all expressed in a formal language . Automated theorem proving is useful in a wide range of applications , including the verification and synthesis of software and hardware systems ( Gu et al. , 2016 ; Darvas et al. , 2005 ; Kern & Greenstreet , 1999 ) . Automated theorem proving boils down to a search problem : finding the sequence of symbol manipulations that generate a valid proof . A prover typically works backward : starting from the theorem statement , it searches for a path that connects the theorem to known facts in the knowledge base . The fundamental challenge lies in the explosion of search space , in particular with long proofs and large knowledge bases . The success of theorem proving thus relies on effective heuristics that guide the search by deciding the next step the prover should take . Deep learning has emerged as a promising approach to learning search heuristics in a automated theorem prover ( Irving et al. , 2016 ; Yang & Deng , 2019 ; Whalen , 2016 ; Loos et al. , 2017 ; Bansal et al. , 2019a ) . The search process fundamentally reduces to a sequence of actions on manipulating a set of symbols . Thus a deep network can be trained to select the best action at each step . A key challenge is how to train such networks . Prior work has used human-written theorems and proofs to perform imitation learning and has shown promising results ( Loos et al. , 2017 ; Yang & Deng , 2019 ; Whalen , 2016 ; Paliwal et al. , 2019 ) . The training data consists of theorems and proofs manually written by human experts in a formal language , and the prover is trained to imitate the proof steps demonstrated by humans . However , relying on human-written data has a major drawback , that is , such data has limited availability and scalability . Writing theorems and proofs in a formal language requires highly specialized knowledge and skill , including mathematics , computer programming , and proficiency in the particular formal language . For a computer science graduate student , it can take months to master a new formal language such as Mizar , Metamath or HOLight ( Wiedijk , 2003 ) , after which it can take days to formalize a single page of a math textbook . This makes it impractical to crowdsource human-written proofs at large scale . An alternative to imitation learning is reinforcement learning , which requires only formalized theorem statements but not their proofs . During training , the prover estimates the value of each action through exploration . This reinforcement learning approach substantially reduces the amount of manual formalization needed , but at the expense of sample efficiency . The prover needs positive rewards to assess past attempts , but positive rewards are only available when the prover finds a com- plete proof , which is rare because it involves a combination of multiple correct steps . This leads to extremely sparse positive rewards , and in turn very low sample efficiency . In this paper , we propose to learn search heuristics using synthetic data . The basic idea is to construct a generator that automatically synthesizes new theorems and their proofs , which are then used to augment human-written data . 
To generate a new theorem and its proof , the generator applies an inference rule on a set of existing theorems and combines their proofs to form the proof of the new theorem . Similar to the prover , the generator performs a sequence of symbol manipulations , albeit in the inverse direction , going forward from existing theorems to a new theorem instead of from a target theorem to existing ones . A key question is how to construct a generator such that the generated data is useful . The space of new theorems and proofs is infinite , but a prover can only process a finite amount of data during training . Thus , to maximize the utility of the generate data , we make the generator learnable by parametrizing it with deep networks . We hypothesize that the generated data will be more useful if they are similar to human-written data . Thus we use human-written data to train a generator . We consider two scenarios . If the humanwritten data consists of both theorem statements and their proofs , we train the generator to follow the proof steps in the forward direction , so that a well-trained generator would derive theorems humans tend to derive . If the human-written data consists of only theorem statements but not their proofs , we use reinforcement learning to train the generator such that the generated theorems are similar to the human-written theorems . We measure similarity using the language model trained on the human-written theorem . We instantiate our approach in Metamath ( Megill , 2019 ) , a popular language for formal mathematics , and with Holophrasm ( Whalen , 2016 ) , a Metamath neural prover . We propose a neural theorem generator we call “ MetaGen “ , which synthesizes new theorems and their proofs expressed in the formalism of Metamath . To the best of our knowledge , MetaGen is the first neural generator of synthetic training data for theorem proving . Experiments on real-world Metamath tasks demonstrate that synthetic data from MetaGen helps the prover prove more human-written theorems , achieving state of the art results . Experiments also show that our approach can synthesize useful data , even when there are only human-written theorems but zero proofs during training . 2 RELATED WORK . Automated theorem proving Our work is related to prior work on learning to prove theorems ( Whalen , 2016 ; Gauthier et al. , 2018 ; Bansal et al. , 2019a ; Yang & Deng , 2019 ; Loos et al. , 2017 ; Balunovic et al. , 2018 ; Kaliszyk et al. , 2018 ; Bansal et al. , 2019b ) . Our work directly builds off of Holophrasm ( Whalen , 2016 ) , a neural-augmented theorem prover for Metamath . It contains three deep networks to generate actions and initial values to guide proof search following the UCT algorithm ( Kocsis & Szepesvári , 2006 ) . TacticToe ( Gauthier et al. , 2018 ) , DeepHOL ( Bansal et al. , 2019a ) and ASTactic ( Yang & Deng , 2019 ) are learning-based theorem provers for higher-order logic based on various interactive theorem provers , including HOL4 ( Slind & Norrish , 2008 ) , HOL Light ( HOLLight ) and Coq ( Bertot & Castéran , 2004 ) . Paliwal et al . ( 2019 ) improves DeepHOL by representing formulas as graphs . Loos et al . ( 2017 ) propose to learn clause selection by deep learning inside the first-order logic prover E ( Schulz , 2002 ) . FastSMT ( Balunovic et al. , 2018 ) learns to compose search heuristics as programs with branches for the SMT solver ( De Moura & Bjørner , 2008 ) . 
All of these methods are othogonal to our approach because all of their provers are learned from human-written training data , whereas our contribution is on training a neural generator of synthetic training data for theorem proving . Kaliszyk et al . ( 2018 ) ; Bansal et al . ( 2019a ; b ) use reinforcement learning to train provers with only human-written theorems but not their proofs . During training , a prover only collects rewards only upon finding full proofs . In contrast , we always train our prover using imitation learning . Under the same setting with only human-written theorems but not proofs , we use reinforcement learning to train our generator , whose reward is the similarity between a generated theorem and a humanwritten theorem , as measured by a language model of human-written theorems . Our reinforcement learning task is much easier because the reward is continuous and there are many ways to generate theorems similar to human-written ones . Automatic goal generation by self-play Our work is similar to the line of work in reinforcement learning ( Florensa et al. , 2018 ; Sukhbaatar et al. , 2017 ; 2018 ; Durugkar & Stone , 2018 ) that deploys one agent to generate tasks for another agent to accomplish . Sukhbaatar et al . ( 2017 ) ; Florensa et al . ( 2018 ) propose to train these two agents by adversary self-play , where the generation agent learns to produce difficult goals for another agent . With self-play , the generator learns to increase the difficulty of goals and build a learning curriculum automatically . We pursue similar ideas in the new context of theorem proving by learning to generate synthetic theorems to train the prover . Also of note is that we have no adversarial self-play . The goal of the generator is to discover novel theorems similar to human-written ones , not to beat the prover . Recently , Huang ( 2019 ) introduced a two-player game which encourages players to learn to predict the consistency of formulas in first-order logic by self-play . These two players behave symmetrically and complete with each other in the game . In contrast , our generator and prover execute different tasks , and are co-operative . In addition , their game remains a theoretical proposal without any empirical validation , whereas we have performed experiments on large-scale data . 3 BACKGROUND ON METAMATH . Metamath is a language for developing formal mathematics . It is one of the simplest formal systems . It has only one inference rule , called substitution , but is universally applicable in formalizing a large portion of mathematics 1 and different types of logic ( Megill , 2019 ) . A knowledge base in Metamath consists of a set of theorems including axioms , which are admitted to be true , and others that are derived from proofs . Each theorem has one or more expressions , including one assertion and zero or more hypotheses . The hypotheses provide the preconditions , such as x = y2 and y is an even number , to prove the assertion , such as x is divisible by 4 . Following Whalen ( 2016 ) , an expression is represented as a tree of tokens , whose nodes are either constants or variables . A constant node has a fixed number of children ( including zero ) and a variable has no children . Therefore , we represent each expression as a unique sequence of tokens by traversing its parse tree in pre-order . A proof is a sequence of steps using substitution . 
A proof step has two parts , a theorem that is declared earlier than the current theorem in the knowledge base , and a substitution that maps a variable in this theorem to a new expression . For example , we have a theorem t , hypotheses : A = B ( 1 ) assertion : CFA = CFB ( 2 ) { A , B , C , F } is the set of variables in t. Let φ be a substitution to map each variable in t to a new expression , A→ 2 B → ( 1 + 1 ) C → 2 F → + ( 3 ) By replacing variables in the t with their corresponding expressions from φ , we have a new assertion and a set of new hypotheses , new hypotheses : 2 = ( 1 + 1 ) ( 4 ) new assertion : 2 + 2 = 2 + ( 1 + 1 ) ( 5 ) and this proof step ( t , φ ) demonstrates that the new assertion 2 + 2 = 2 + ( 1 + 1 ) is entailed by the new hypothesis 2 = ( 1 + 1 ) . Note that we need to substitute all occurrences of the same variable with the same expression in both the assertion and hypotheses 2 . Formally , let e ∈ E be an expression ( a tree of tokens ) with l unique variables fe = ( f1 , f2 , ... , fl ) ∈ F l , and let φ ∈ F → E be a substitution . Let e ( φ ) denote the new expression obtained by replacing 1Its largest knowledge base , set.mm ranks 3rd in the ” Formalizing 100 Theorems ” challenge ( Wiedijk ) . 2Variables in Metamath are called metavariables , which are different from variables bound by quantifiers in the first-order and higher-order logic . fi in e with φ ( fi ) , for i = 1 , 2 ... , l. And given k expressions e = ( e1 , e2 , ... , ek ) ∈ Ek , let e ( φ ) = ( e1 ( φ ) , e2 ( φ ) , ... , ek ( φ ) ) represent applying the substitution to all k expressions . Given a theorem t , let at be its assertion and ht = ( ht,1 , ht,2 , ... , ht , m ) ∈ Em be its hypotheses . Let φt be a substitution for variables in t. A proof step s = ( t , φt ) ∈ Em+1 × ( F → E ) demonstrates an entailment of assertion at ( φt ) by hypotheses ht ( φt ) . In this formal system , proving a theorem τ means finding a tree such that ( 1 ) the root node is the assertion of τ , ( 2 ) each leaf node is either empty or one of the hypotheses of τ , and ( 3 ) each internal node is an expression associated with a proof step that demonstrates an entailment of the node by its children . To prove a target theorem τ , it is the most straightforward to reason backward . We start by selecting a proof step that will demonstrate an entailment of the assertion of the target theorem , that is , at ( φ ) = aτ . It is worth noting that if we pick a particular theorem t , if a valid φ exists , φ ( f ) is uniquely determined for any variable f that occurs in the assertion ( recall that each expression is a tree of tokens ) . But φ ( f ) is not uniquely determined if f only occurs in the hypotheses ( f is called a hypothesis variable , or a assertion variable if it only occurs in the assertion ) , because it can be replaced with anything . Once this initial proof step ( t , φ ) is fully specified , the assertion aτ is entailed by a set of new hypotheses hτ ( φ ) , and the goal of proving theorem τ has now been decomposed to the subgoals of finding entailment of the new hypotheses hτ ( φ ) by the original hypotheses hτ .
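To make the substitution mechanics above concrete, here is a minimal Python sketch (our own illustration, not part of Metamath or Holophrasm; the function and variable names are ours). For brevity it treats expressions as flat token sequences rather than parse trees, and it reproduces the example of equations (1)-(5).

```python
def substitute(tokens, phi):
    """Replace every variable token with the token sequence phi maps it to;
    all other tokens are copied through unchanged."""
    out = []
    for tok in tokens:
        out.extend(phi.get(tok, [tok]))
    return out

# Theorem t: hypothesis "A = B", assertion "C F A = C F B", variables {A, B, C, F}.
hypothesis = "A = B".split()
assertion = "C F A = C F B".split()

# Substitution phi: A -> 2, B -> ( 1 + 1 ), C -> 2, F -> +
phi = {"A": ["2"], "B": ["(", "1", "+", "1", ")"], "C": ["2"], "F": ["+"]}

print(" ".join(substitute(hypothesis, phi)))  # new hypothesis: 2 = ( 1 + 1 )
print(" ".join(substitute(assertion, phi)))   # new assertion:  2 + 2 = 2 + ( 1 + 1 )
```

Note that every occurrence of the same variable receives the same expression in both the assertion and the hypotheses, matching the requirement stated above.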
This paper focuses on the problem of developing deep learning systems that can prove theorems in a mathematical formalism -- in this case, Metamath. This has been a rapidly growing topic in the past few years, as evidenced by the numerous cited works. What sets this work apart from others is its focus on the instrumental task of generating data to train a prover, rather than directly training the prover on human theorems (via reinforcement learning) or human proofs (via imitation learning).
SP:ef00d1cc6981df5591b757e7a46ba9179d1fc50a
Collapsed amortized variational inference for switching nonlinear dynamical systems
1 INTRODUCTION . Consider watching from above an airplane flying across country or a car driving through a field . The vehicle ’ s motion is composed of straight , linear dynamics and curving , nonlinear dynamics . This is illustrated in fig . 1 ( a ) . In this paper , we propose a new inference algorithm for fitting switching nonlinear dynamical systems ( SNLDS ) , which can be used to segment time series data as sequences of images , or lower dimensional signals , such as ( x , y ) locations into meaningful discrete temporal “ modes ” or “ regimes ” . The transitions between these modes may correspond to the changes in internal goals of the agent ( e.g. , a mouse switching from running to resting , as in Johnson et al . ( 2016 ) ) or may be caused by external factors ( e.g. , changes in the road curvature ) . Discovering such discrete modes is useful for scientific applications ( c.f. , Wiltschko et al . ( 2015 ) ; Linderman et al . ( 2019 ) ) as well as for planning in the context of hierarchical reinforcement learning ( c.f. , Kipf et al . ( 2019 ) ) . There has been extensive previous work , some of which we review in Section 2 , on modeling temporal data using various forms of state space models ( SSM ) . We are interested in the class of SSM which has both discrete and continuous latent variables , which we denote by st and zt , where t is the discrete time index . The discrete state , st ∈ { 1 , 2 , . . . , K } , represents the mode of the system at time t , and the continuous state , zi ∈ RH , represents other factors of variation , such as location and velocity . The observed data is denoted by xt ∈ RD , and can either be a low dimensional projection of zt , such as the current location , or a high dimensional signal that is informative about zt , such as an image . We may optionally have observed input or control signals ut ∈ RU , which drive the system in addition to unobserved stochastic noise . We are interested in learning a generative model of the form pθ ( s1 : T , z1 : T , x1 : T |u1 : T ) from partial observations , namely ( x1 : T , u1 : T ) . This requires inferring the posterior over the latent states , pθ ( s1 : T , z1 : T |v1 : T ) , where vt = ( xt , ut ) contains all the visible variables at time t. For training purposes , we usually assume that we have multiple such trajectories , possibly of different lengths , but we omit the sequence indices from our notations for simplicity . This problem is very challenging , because the model contains both discrete and continuous latent variables ( a so-called “ hybrid system ” ) , and has nonlinear transition and observation models . The main contribution of our paper is a new way to perform efficient approximate inference in this class of SNLDS models . The key observation is that , conditioned on knowing z1 : T as well as v1 : T , we can marginalize out s1 : T in linear time using the forward-backward algorithm . In particular , we can efficiently compute the gradient of the log marginal likelihood , ∇ ∑ s1 : T log p ( s1 : T |z̃1 : T , v1 : T ) , where z̃1 : T is a posterior sample that we need for model fitting . To efficiently compute posterior samples z̃1 : T , we learn an amortized inference network qφ ( z1 : T |v1 : T ) for the “ collapsed ” NLDS model p ( z1 : T , v1 : T ) . The collapsing trick removes the discrete variables , and allows us to use the reparameterization trick for the continuous z . These tricks let us use stochastic gradient descent ( SGD ) to learn p and q jointly , as explained in Section 3 . 
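As a concrete illustration of the collapsing trick, the sketch below (ours, not the authors' code; the inputs would in practice come from the learned networks and a posterior sample of z) marginalizes the discrete chain s over all time steps in O(TK^2) time using the forward pass of the forward-backward algorithm.

```python
import numpy as np
from scipy.special import logsumexp

def log_marginal_over_s(log_init, log_trans, log_emit):
    """Compute log sum over s_{1:T} of p(s_{1:T}, z_{1:T} | v_{1:T}) in O(T K^2).

    log_init  : (K,)      log p(s_1 = k)
    log_trans : (T, K, K) log p(s_t = k | s_{t-1} = j, x_{t-1}); entry t = 0 is unused
    log_emit  : (T, K)    log p(z_t | z_{t-1}, s_t = k), evaluated at a posterior sample of z
    """
    T, K = log_emit.shape
    alpha = log_init + log_emit[0]                              # log p(s_1, z_1)
    for t in range(1, T):
        alpha = logsumexp(alpha[:, None] + log_trans[t], axis=0) + log_emit[t]
    return logsumexp(alpha)

# Toy usage with random quantities standing in for the network outputs.
T, K = 5, 3
rng = np.random.default_rng(0)
log_init = np.log(np.full(K, 1.0 / K))
log_trans = np.log(rng.dirichlet(np.ones(K), size=(T, K)))     # each row normalized over k
log_emit = rng.normal(size=(T, K))
print(log_marginal_over_s(log_init, log_trans, log_emit))
```

In practice the same recursion would be written in an autodiff framework so that the gradient of this log marginal likelihood with respect to the model parameters comes for free.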
We can then use q as a proposal distribution inside a Rao-Blackwellised particle filter ( Doucet et al. , 2000 ) , although in this paper , we just use a single posterior sample , as is common with Variational AutoEncoders ( VAEs , Kingma & Welling ( 2014 ) ; Rezende et al . ( 2014 ) ) . Although the above “ trick ” allows us efficiently perform inference and learning , we find that in challenging problems ( e.g. , when the dynamical model p ( zt|zt−1 , vt ) is very flexible ) , the model ignores the discrete latent variables , and does not perform mode switching . This is a form of “ posterior collapse ” , similar to VAEs , where powerful decoders can cause the latent variables to be ignored , as explained in Alemi et al . ( 2018 ) . Our second contribution is a new form of posterior regularization , which prevents the aforementioned problem and results in a significantly improved segmentation . We apply our method , as well as various existing methods , to two previously proposed lowdimensional time series segmentation problems , namely a 1d bouncing ball , and a 2d moving arm . In the 1d case , the dynamics are piecewise linear , and all methods perform perfectly . In the 2d case , the dynamics are piecewise nonlinear , and we show that our method infers much better segmentation than previous approaches for comparable computational cost . We also apply our method to a simple new video dataset ( see fig . 1 for an example ) , and find that it performs well , provided we use our proposed regularization method . In summary , our main contributions are • Learning switching nonlinear dynamical systems parameterized with neural networks by marginalizing out discrete variables . • Using entropy regularization and annealing to encourage discrete state transitions . • Demonstrating that the discrete states of nonlinear models are more interpretable . 2 RELATED WORK . In this section , we briefly summarize some related work . 2.1 STATE SPACE MODELS . We consider the following state space model : pθ ( x , z , s ) = p ( x1|z1 ) p ( z1|s1 ) [ T∏ t=2 p ( xt|zt ) p ( zt|zt−1 , st ) p ( st|st−1 , xt−1 ) ] , ( 1 ) where st ∈ { 1 , . . . , K } is the discrete hidden state , zt ∈ RL is the continuous hidden state , and xt ∈ RD is the observed output , as in fig . 2 ( a ) . For notational simplicity , we ignore any observed inputs or control signals ut , but these can be trivially added to our model . Note that the discrete state influences the latent dynamics zt , but we could trivially make it influence the observations xt as well . More interesting are which edges we choose to add as parents of the discrete state st. We consider the case where st depends on the previous discrete state , st−1 , as in a hidden Markov model ( HMM ) , but also depends on the previous observation , xt−1 . We can trivially depend on multiple previous observations ; we assume first-order Markov for simplicity . This means that state changes do not have to happen “ open loop ” , but instead may be triggered by signals from the environment . We can also condition zt on xt−1 , and st on zt−1 . It is straightforward to handle such additional dependencies ( shown by dashed lines in fig . 2 ( a ) ) in our inference method , which is not true for some of the other methods we discuss below . We still need to specify the functional forms of the conditional probability distributions . 
In this paper , we make the following fairly weak assumptions : p ( xt|zt ) = N ( xt|fx ( zt ) , R ) , ( 2 ) p ( zt|zt−1 , st = k ) = N ( zt|fz ( zt−1 , k ) , Q ) , ( 3 ) p ( st|st−1 = j , xt−1 ) = Cat ( st|S ( fs ( xt−1 , j ) ) , ( 4 ) where fx , z , s are nonlinear functions ( MLPs or RNNs ) , N ( · , · ) is a multivariate Gaussian distribution , Cat ( · ) is a categorical distribution , and S ( · ) is a softmax function . R ∈ RD×D and Q ∈ RH×H are learned covariance matrices for the Gaussian emission and transition noise . If fx and fz are both linear , and p ( st|st−1 ) is first-order Markov without dependence on zt−1 , the model is called a switching linear dynamical system ( SLDS ) . If we allow st to depend on zt−1 , the model is called a recurrent SLDS ( Linderman et al. , 2017 ; Linderman & Johnson , 2017 ) . We will compare to rSLDS in our experiments . If fz is linear , but fx is nonlinear , the model is sometimes called a “ structured variational autoencoder ” ( SVAE ) ( Johnson et al. , 2016 ) , although that term is ambiguous , since there are many forms of structure . We will compare to SVAEs in our experiments . If fz is a linear function , the model may need to use lots of discrete states in order to approximate the nonlinear dynamics , as illustrated in fig . 1 ( d ) . We therefore allow fz ( and fx ) to be nonlinear . The resulting model is called a switching nonlinear dynamical system ( SNLDS ) , or Nonlinear RegimeSwitching State-Space Model ( RSSSM ) ( Chow & Zhang , 2013 ) . Prior work typically assumes fz is a simple nonlinear model , such as polynomial regression . If we let fz be a very flexible neural network , there is a risk that the model will not need to use the discrete states at all . We discuss a solution to this in Section 3.3 . The discrete dynamics can be modeled as a semi-Markov process , where states have explicit durations ( see e.g. , Duong et al . ( 2005 ) ; Chiappa ( 2014 ) ) . One recurrent , variational version is the recurrent hidden semi-Markov model ( rHSMM , Dai et al . ( 2017 ) ) . Rather than having a stochastic continuous variable at every timestep , rHSMM instead stochastically switches between states with deterministic dynamics . The semi-Markovian structures in this work have an explicit maximum duration , which makes them less flexible . A revised method , ( Kipf et al. , 2019 ) , is able to better handle unknown durations , but produces a potentially infinite number of distinct states , each with deterministic dynamics . The deterministic dynamics of these works may limit their ability to handle noise . 2.2 VARIATIONAL INFERENCE AND LEARNING . A common approach to learning latent variable models is to maximize the evidence lower bound ( ELBO ) on the log marginal likelihood ( see e.g. , Blei et al . ( 2016 ) ) . This is given by log p ( x ) ≤ L ( x ; θ , φ ) = Eqφ ( z , s|x ) [ log pθ ( x , z , s ) − log qφ ( z , s|x ) ] , where qφ ( z , s|x ) is an approximate posterior.1 Rather than computing q using optimization for each x , we can train an inference network , fφ ( x ) , which emits the parameters of q . This is known as `` amortized inference '' ( see e.g. , Kingma & Welling ( 2014 ) ) . If the posterior distribution qφ ( z , s|x ) is reparameterizable , then we can make the noise independent of φ , and hence apply the standard SGD to optimize θ , φ . Unfortunately , the discrete distribution p ( s|x ) is not reparameterizable . 
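Before turning to how the non-reparameterizable discrete state is handled, the following ancestral-sampling sketch makes the generative model of eqs. (2)-(4) concrete (our own illustration; the MLPs fx, fz, fs are replaced by fixed random linear maps, and all dimensions are toy values).

```python
import numpy as np

rng = np.random.default_rng(0)
K, H, D, T = 3, 2, 4, 50                     # modes, latent dim, observation dim, length

# Placeholder (random, fixed) parameters standing in for the MLPs f_z, f_x, f_s.
A = rng.normal(size=(K, H, H)) * 0.5         # one latent transition map per discrete mode
C = rng.normal(size=(D, H))                  # shared emission map
W = rng.normal(size=(K, K, D)) * 0.1         # logits of f_s(x_{t-1}, j)
Q, R = 0.01 * np.eye(H), 0.05 * np.eye(D)

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

s = rng.integers(K)                          # initial mode
z = rng.normal(size=H)                       # initial continuous state
xs, ss = [], []
for t in range(T):
    x = rng.multivariate_normal(C @ z, R)    # p(x_t | z_t), eq. (2)
    xs.append(x); ss.append(s)
    s = rng.choice(K, p=softmax(W[s] @ x))   # p(s_{t+1} | s_t, x_t), eq. (4)
    z = rng.multivariate_normal(A[s] @ z, Q) # p(z_{t+1} | z_t, s_{t+1}), eq. (3)

print(np.array(xs).shape, ss[:10])           # (50, 4) and the sampled mode sequence
```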
In such cases , we can either resort to higher variance methods for estimating the gradient , such as REINFORCE , or we can use continuous relaxations of the discrete variables , such as Gumbel Softmax ( Jang et al. , 2017 ) , Concrete ( Maddison et al. , 2017b ) , or combining both , such as REBAR ( Tucker et al. , 2017 ) . We will compare against a Gumbel-Softmax version of SNLDS in our experiments . The continuous relaxation approach was applied to SLDS models in ( Becker-Ehmck et al. , 2019 ) and HSSM models in ( Liu et al. , 2018a ; Kipf et al. , 2019 ) . However , the relaxation can lose many of the benefits of having discrete variables ( Le et al. , 2019 ) . Relaxing the distribution to a soft mixture of dynamics results in the Kalman VAE ( KVAE ) model of Fraccaro et al . ( 2017 ) . A concern is that soft models may use a mixture of dynamics for distinct ground truth states rather than assigning a distinct mode of dynamics at each step as a discrete model must do . We will compare to KVAE in our experiments . In Section 3 , we propose a new method to avoid these issues , in which we collapse out s so that the entire model is differentiable . The SVAE model of Johnson et al . ( 2016 ) also uses the forward-backward algorithm to compute q ( s|v ) ; however , they assume the dynamics of z are linear Gaussian , so they can apply the Kalman smoother to compute q ( z|v ) . Assuming linear dynamics can result in over-segmentation , as we have discussed . A forward-backward algorithm is applied once to the discrete states and once to the continuous states to compute a structured mean field posterior q ( z ) q ( s ) . In contrast , we perform approximate inference for z using one forward-backward pass and then exact inference for s using a second pass , as we explain in Section 3 .
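For reference, the Gumbel-Softmax (Concrete) relaxation mentioned above replaces a hard categorical draw with a differentiable soft one-hot sample; a minimal NumPy sketch follows (in practice this would be written in an autodiff framework so that gradients flow through the logits).

```python
import numpy as np

def gumbel_softmax_sample(logits, tau=1.0, rng=np.random.default_rng()):
    """Draw a relaxed one-hot sample from Cat(softmax(logits)).

    As tau -> 0 the sample approaches a discrete one-hot vector; larger tau
    gives a smoother (higher-entropy) relaxation."""
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))   # Gumbel(0, 1) noise
    y = (logits + g) / tau
    y = np.exp(y - y.max())
    return y / y.sum()

logits = np.array([2.0, 0.5, -1.0])
print(gumbel_softmax_sample(logits, tau=0.5))   # nearly one-hot, e.g. ~[0.98, 0.02, 0.00]
print(gumbel_softmax_sample(logits, tau=5.0))   # much closer to uniform
```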
In this paper, the authors consider the problem of learning model parameters of a switching nonlinear dynamical system from a dataset. They propose a new variational inference algorithm for this model-learning problem that marginalizes all discrete random variables in the model using the forward-backward algorithm and, in so doing, converts the model to one with a differentiable density, so that the gradient of the variational objective can be estimated with the low-variance reparameterization estimator. The authors also point out an issue in choosing a variational objective; the standard ELBO objective is not suitable for their learning problem, because it leads to a model that does not use discrete random variables meaningfully. To overcome this issue, they suggest a new improved objective and a learning procedure, which encourage the learned model to use discrete variables for capturing different modes of dynamics. The proposed variational inference algorithm was applied to three datasets, and in all these cases, it showed promising results.
SP:85cc769dc87f910a4aff638f833764ebbac63418
Collapsed amortized variational inference for switching nonlinear dynamical systems
1 INTRODUCTION . Consider watching from above an airplane flying across country or a car driving through a field . The vehicle ’ s motion is composed of straight , linear dynamics and curving , nonlinear dynamics . This is illustrated in fig . 1 ( a ) . In this paper , we propose a new inference algorithm for fitting switching nonlinear dynamical systems ( SNLDS ) , which can be used to segment time series data as sequences of images , or lower dimensional signals , such as ( x , y ) locations into meaningful discrete temporal “ modes ” or “ regimes ” . The transitions between these modes may correspond to the changes in internal goals of the agent ( e.g. , a mouse switching from running to resting , as in Johnson et al . ( 2016 ) ) or may be caused by external factors ( e.g. , changes in the road curvature ) . Discovering such discrete modes is useful for scientific applications ( c.f. , Wiltschko et al . ( 2015 ) ; Linderman et al . ( 2019 ) ) as well as for planning in the context of hierarchical reinforcement learning ( c.f. , Kipf et al . ( 2019 ) ) . There has been extensive previous work , some of which we review in Section 2 , on modeling temporal data using various forms of state space models ( SSM ) . We are interested in the class of SSM which has both discrete and continuous latent variables , which we denote by st and zt , where t is the discrete time index . The discrete state , st ∈ { 1 , 2 , . . . , K } , represents the mode of the system at time t , and the continuous state , zi ∈ RH , represents other factors of variation , such as location and velocity . The observed data is denoted by xt ∈ RD , and can either be a low dimensional projection of zt , such as the current location , or a high dimensional signal that is informative about zt , such as an image . We may optionally have observed input or control signals ut ∈ RU , which drive the system in addition to unobserved stochastic noise . We are interested in learning a generative model of the form pθ ( s1 : T , z1 : T , x1 : T |u1 : T ) from partial observations , namely ( x1 : T , u1 : T ) . This requires inferring the posterior over the latent states , pθ ( s1 : T , z1 : T |v1 : T ) , where vt = ( xt , ut ) contains all the visible variables at time t. For training purposes , we usually assume that we have multiple such trajectories , possibly of different lengths , but we omit the sequence indices from our notations for simplicity . This problem is very challenging , because the model contains both discrete and continuous latent variables ( a so-called “ hybrid system ” ) , and has nonlinear transition and observation models . The main contribution of our paper is a new way to perform efficient approximate inference in this class of SNLDS models . The key observation is that , conditioned on knowing z1 : T as well as v1 : T , we can marginalize out s1 : T in linear time using the forward-backward algorithm . In particular , we can efficiently compute the gradient of the log marginal likelihood , ∇ ∑ s1 : T log p ( s1 : T |z̃1 : T , v1 : T ) , where z̃1 : T is a posterior sample that we need for model fitting . To efficiently compute posterior samples z̃1 : T , we learn an amortized inference network qφ ( z1 : T |v1 : T ) for the “ collapsed ” NLDS model p ( z1 : T , v1 : T ) . The collapsing trick removes the discrete variables , and allows us to use the reparameterization trick for the continuous z . These tricks let us use stochastic gradient descent ( SGD ) to learn p and q jointly , as explained in Section 3 . 
We can then use q as a proposal distribution inside a Rao-Blackwellised particle filter ( Doucet et al. , 2000 ) , although in this paper , we just use a single posterior sample , as is common with Variational AutoEncoders ( VAEs , Kingma & Welling ( 2014 ) ; Rezende et al . ( 2014 ) ) . Although the above “ trick ” allows us efficiently perform inference and learning , we find that in challenging problems ( e.g. , when the dynamical model p ( zt|zt−1 , vt ) is very flexible ) , the model ignores the discrete latent variables , and does not perform mode switching . This is a form of “ posterior collapse ” , similar to VAEs , where powerful decoders can cause the latent variables to be ignored , as explained in Alemi et al . ( 2018 ) . Our second contribution is a new form of posterior regularization , which prevents the aforementioned problem and results in a significantly improved segmentation . We apply our method , as well as various existing methods , to two previously proposed lowdimensional time series segmentation problems , namely a 1d bouncing ball , and a 2d moving arm . In the 1d case , the dynamics are piecewise linear , and all methods perform perfectly . In the 2d case , the dynamics are piecewise nonlinear , and we show that our method infers much better segmentation than previous approaches for comparable computational cost . We also apply our method to a simple new video dataset ( see fig . 1 for an example ) , and find that it performs well , provided we use our proposed regularization method . In summary , our main contributions are • Learning switching nonlinear dynamical systems parameterized with neural networks by marginalizing out discrete variables . • Using entropy regularization and annealing to encourage discrete state transitions . • Demonstrating that the discrete states of nonlinear models are more interpretable . 2 RELATED WORK . In this section , we briefly summarize some related work . 2.1 STATE SPACE MODELS . We consider the following state space model : pθ ( x , z , s ) = p ( x1|z1 ) p ( z1|s1 ) [ T∏ t=2 p ( xt|zt ) p ( zt|zt−1 , st ) p ( st|st−1 , xt−1 ) ] , ( 1 ) where st ∈ { 1 , . . . , K } is the discrete hidden state , zt ∈ RL is the continuous hidden state , and xt ∈ RD is the observed output , as in fig . 2 ( a ) . For notational simplicity , we ignore any observed inputs or control signals ut , but these can be trivially added to our model . Note that the discrete state influences the latent dynamics zt , but we could trivially make it influence the observations xt as well . More interesting are which edges we choose to add as parents of the discrete state st. We consider the case where st depends on the previous discrete state , st−1 , as in a hidden Markov model ( HMM ) , but also depends on the previous observation , xt−1 . We can trivially depend on multiple previous observations ; we assume first-order Markov for simplicity . This means that state changes do not have to happen “ open loop ” , but instead may be triggered by signals from the environment . We can also condition zt on xt−1 , and st on zt−1 . It is straightforward to handle such additional dependencies ( shown by dashed lines in fig . 2 ( a ) ) in our inference method , which is not true for some of the other methods we discuss below . We still need to specify the functional forms of the conditional probability distributions . 
In this paper , we make the following fairly weak assumptions : p ( xt|zt ) = N ( xt|fx ( zt ) , R ) , ( 2 ) p ( zt|zt−1 , st = k ) = N ( zt|fz ( zt−1 , k ) , Q ) , ( 3 ) p ( st|st−1 = j , xt−1 ) = Cat ( st|S ( fs ( xt−1 , j ) ) , ( 4 ) where fx , z , s are nonlinear functions ( MLPs or RNNs ) , N ( · , · ) is a multivariate Gaussian distribution , Cat ( · ) is a categorical distribution , and S ( · ) is a softmax function . R ∈ RD×D and Q ∈ RH×H are learned covariance matrices for the Gaussian emission and transition noise . If fx and fz are both linear , and p ( st|st−1 ) is first-order Markov without dependence on zt−1 , the model is called a switching linear dynamical system ( SLDS ) . If we allow st to depend on zt−1 , the model is called a recurrent SLDS ( Linderman et al. , 2017 ; Linderman & Johnson , 2017 ) . We will compare to rSLDS in our experiments . If fz is linear , but fx is nonlinear , the model is sometimes called a “ structured variational autoencoder ” ( SVAE ) ( Johnson et al. , 2016 ) , although that term is ambiguous , since there are many forms of structure . We will compare to SVAEs in our experiments . If fz is a linear function , the model may need to use lots of discrete states in order to approximate the nonlinear dynamics , as illustrated in fig . 1 ( d ) . We therefore allow fz ( and fx ) to be nonlinear . The resulting model is called a switching nonlinear dynamical system ( SNLDS ) , or Nonlinear RegimeSwitching State-Space Model ( RSSSM ) ( Chow & Zhang , 2013 ) . Prior work typically assumes fz is a simple nonlinear model , such as polynomial regression . If we let fz be a very flexible neural network , there is a risk that the model will not need to use the discrete states at all . We discuss a solution to this in Section 3.3 . The discrete dynamics can be modeled as a semi-Markov process , where states have explicit durations ( see e.g. , Duong et al . ( 2005 ) ; Chiappa ( 2014 ) ) . One recurrent , variational version is the recurrent hidden semi-Markov model ( rHSMM , Dai et al . ( 2017 ) ) . Rather than having a stochastic continuous variable at every timestep , rHSMM instead stochastically switches between states with deterministic dynamics . The semi-Markovian structures in this work have an explicit maximum duration , which makes them less flexible . A revised method , ( Kipf et al. , 2019 ) , is able to better handle unknown durations , but produces a potentially infinite number of distinct states , each with deterministic dynamics . The deterministic dynamics of these works may limit their ability to handle noise . 2.2 VARIATIONAL INFERENCE AND LEARNING . A common approach to learning latent variable models is to maximize the evidence lower bound ( ELBO ) on the log marginal likelihood ( see e.g. , Blei et al . ( 2016 ) ) . This is given by log p ( x ) ≤ L ( x ; θ , φ ) = Eqφ ( z , s|x ) [ log pθ ( x , z , s ) − log qφ ( z , s|x ) ] , where qφ ( z , s|x ) is an approximate posterior.1 Rather than computing q using optimization for each x , we can train an inference network , fφ ( x ) , which emits the parameters of q . This is known as `` amortized inference '' ( see e.g. , Kingma & Welling ( 2014 ) ) . If the posterior distribution qφ ( z , s|x ) is reparameterizable , then we can make the noise independent of φ , and hence apply the standard SGD to optimize θ , φ . Unfortunately , the discrete distribution p ( s|x ) is not reparameterizable . 
In such cases , we can either resort to higher variance methods for estimating the gradient , such as REINFORCE , or we can use continuous relaxations of the discrete variables , such as Gumbel Softmax ( Jang et al. , 2017 ) , Concrete ( Maddison et al. , 2017b ) , or combining both , such as REBAR ( Tucker et al. , 2017 ) . We will compare against a Gumbel-Softmax version of SNLDS in our experiments . The continuous relaxation approach was applied to SLDS models in ( Becker-Ehmck et al. , 2019 ) and HSSM models in ( Liu et al. , 2018a ; Kipf et al. , 2019 ) . However , the relaxation can lose many of the benefits of having discrete variables ( Le et al. , 2019 ) . Relaxing the distribution to a soft mixture of dynamics results in the Kalman VAE ( KVAE ) model of Fraccaro et al . ( 2017 ) . A concern is that soft models may use a mixture of dynamics for distinct ground truth states rather than assigning a distinct mode of dynamics at each step as a discrete model must do . We will compare to KVAE in our experiments . In Section 3 , we propose a new method to avoid these issues , in which we collapse out s so that the entire model is differentiable . The SVAE model of Johnson et al . ( 2016 ) also uses the forward-backward algorithm to compute q ( s|v ) ; however , they assume the dynamics of z are linear Gaussian , so they can apply the Kalman smoother to compute q ( z|v ) . Assuming linear dynamics can result in over-segmentation , as we have discussed . A forward-backward algorithm is applied once to the discrete states and once to the continuous states to compute a structured mean field posterior q ( z ) q ( s ) . In contrast , we perform approximate inference for z using one forward-backward pass and then exact inference for s using a second pass , as we explain in Section 3 .
This paper proposes a method to segment time series into discrete intervals in an unsupervised way. The data is modeled using a state space model where each state consists of a discrete and a continuous part. The discrete state denotes the segment the system is currently in, and the continuous state, which is conditioned on the discrete one, denotes an uninterpretable feature vector. The transition distributions are non-linear. The observation at each time step is high-dimensional and produced by an emission distribution whose parameters are given by a neural network that takes in the continuous state. Learning and inference are done by maximizing the evidence lower bound (ELBO). Problems with the discreteness of the latent variables are circumvented by marginalizing (collapsing) them out using the forward-backward algorithm. Problems with making the discrete states meaningful when there are non-linear transitions/emissions are addressed by annealing. This annealing scheme forces the conditional distributions on the discrete state to have high entropy (be close to a uniform distribution) at the start by adding a term to the ELBO objective, and the multiplier of this term is decreased as training progresses. There are actually two terms to do this, since one alone didn't work.
SP:85cc769dc87f910a4aff638f833764ebbac63418
Pruned Graph Scattering Transforms
1 INTRODUCTION . The abundance of graph-structured data calls for advanced learning techniques , and complements nicely standard machine learning tools that can not be directly applied to irregular data domains . Permeating the benefits of deep learning to the graph domain , graph convolutional networks ( GCNs ) provide a versatile and powerful framework to learn from complex graph data ( Bronstein et al. , 2017 ) . GCNs and variants thereof have attained remarkable success in social network analysis , 3D point cloud processing , recommender systems and action recognition . However , researchers have recently reported inconsistent perspectives on the appropriate designs for GCN architectures . For example , experiments in social network analysis have argued that deeper GCNs marginally increase the learning performance ( Wu et al. , 2019 ) , whereas a method for 3D point cloud segmentation achieves state-ofthe-art performance with a 56-layer GCN network ( Li et al. , 2019 ) . These ‘ controversial ’ empirical findings motivate theoretical analysis to understand the fundamental performance factors and the architecture design choices for GCNs . Aiming to bestow GCNs with theoretical guarantees , one promising research direction is to study graph scattering transforms ( GSTs ) . GSTs are non-trainable GCNs comprising a cascade of graph filter banks followed by nonlinear activation functions . The graph filter banks are mathematically designed and are adopted to scatter an input graph signal into multiple channels . GSTs extract scattering features that can be utilized towards graph learning tasks ( Gao et al. , 2019 ) , with competitive performance especially when the number of training examples is small . Under certain conditions on the graph filter banks , GSTs are endowed with energy conservation properties ( Zou & Lerman , ∗This work was mainly done while V. N. Ioanndis was working at Mitsubishi Electric Research Laboratories . 2019 ) , as well as stability meaning robustness to graph topology deformations ( Gama et al. , 2019a ) . However , GSTs are associated with exponential complexity in space and time that increases with the number of layers . This discourages deployment of GSTs when a deep architecture is needed . Furthermore , stability should not come at odds with sensitivity . A filter ’ s output should be sensitive to and “ detect ” perturbations of large magnitude . Lastly , graph data in different domains ( social networks , 3D point clouds ) have distinct properties , which encourages GSTs with domain-adaptive architectures . The present paper develops a data-adaptive pruning framework for the GST to systematically retain important features . Specifically , the contribution of this work is threefold . C1 . We put forth a pruning approach to select informative GST features that we naturally term pruned graph scattering transform ( pGST ) . The pruning decisions are guided by a criterion promoting alignment ( matching ) of the input graph spectrum with that of the graph filters . The optimal pruning decisions are provided on-the-fly , and alleviate the exponential complexity of GSTs . C2 . We prove that the pGST is stable to perturbations of the input graph signals . Under certain conditions on the energy of the perturbations , the resulting pruning patterns before and after the perturbations are identical and the overall pGST is stable . C3 . 
We showcase with extensive experiments that : i ) the proposed pGSTs perform similarly and in certain cases better than the baseline GSTs that use all scattering features , while achieving significant computational savings ; ii ) The extracted features from pGSTs can be utilized towards graph classification and 3D point cloud recognition . Even without any training on the feature extraction step , the performance is comparable to state-of-the-art deep supervised learning approaches , particularly when training data are scarce ; and iii ) By analyzing the pruning patterns of the pGST , we deduce that graph signals in different domains call for different network architectures ; see Fig . 1 . 2 RELATED WORK . GCNs rely on a layered processing architecture comprising trainable graph convolution operations to linearly combine features per graph neighborhood , followed by pointwise nonlinear functions applied to the linearly transformed features ( Bronstein et al. , 2017 ) . Complex GCNs and their variants have shown remarkable success in graph semi-supervised learning ( Kipf & Welling , 2017 ; Veličković et al. , 2018 ) and graph classification ( Ying et al. , 2018 ) . To simplify GCNs , ( Wu et al. , 2019 ) has shown that by employing a single-layer linear GCN the performance in certain social network learning tasks degrades only slightly . On the other hand , ( Li et al. , 2019 ) has developed a 56-layer GCN that achieves state-of-the-art performance in 3D point cloud segmentation . Hence , designing GCN architectures guided by properties of the graph data is a highly motivated research question . Towards theoretically explaining the success of GCNs , recent works study the stability properties of GSTs with respect to metric deformations of the domain ( Gama et al. , 2019b ; a ; Zou & Lerman , 2019 ) . GSTs generalize scattering transforms ( Bruna & Mallat , 2013 ; Mallat , 2012 ) to non-Euclidean domains . GSTs are a cascade of graph filter banks and nonlinear operations that is organized in a tree-structured architecture . The number of extracted scattering features of a GST grows exponentially with the number of layers . Theoretical guarantees for GSTs are obtained after fixing the graph filter banks to implement a set of graph wavelets . The work in ( Zou & Lerman , 2019 ) establishes energy conservation properties for GSTs given that certain energy-preserving graph wavelets are employed , and also prove that GSTs are stable to graph structure perturbations ; see also ( Gama et al. , 2019b ) that focuses on diffusion wavelets . On the other hand , ( Gama et al. , 2019a ) proves stability to relative metric deformations for a wide class of graph wavelet families . These contemporary works shed light into the stability and generalization capabilities of GCNs . However , stable transforms are not necessarily informative , and albeit highly desirable , a principled approach to selecting informative GST features remains still an uncharted venue . 3 BACKGROUND . Consider a graph G : = { V , E } with node set V : = { vi } Ni=1 , and edge set E : = { ei } Ei=1 . Its connectivity is described by the graph shift matrix S ∈ RN×N , whose ( n , n′ ) th entry Snn′ is nonzero if ( n , n′ ) ∈ E or if n = n′ . A typical choice for S is the adjacency or the Laplacian matrix . Further , each node can be also associated with a few attributes . Collect attributes across all nodes in the matrix X : = [ x1 , . . . , xF ] ∈ RN×F , where each column xf ∈ RN is a ‘ graph signal. ’ Graph Fourier transform . 
A Fourier transform corresponds to the expansion of a signal over bases that are invariant to filtering ; here , this graph frequency basis is the eigenbasis of the graph shift matrix S. Henceforth , S is assumed normal with S = VΛV > , where V ∈ RN×N forms the graph Fourier basis , and Λ ∈ RN×N is the diagonal matrix of corresponding eigenvalues λ0 , . . . , λN−1 . These eigenvalues represent graph frequencies . The graph Fourier transform ( GFT ) of x ∈ RN is x̂ = V > x ∈ RN , while the inverse transform is x = Vx̂ . The vector x̂ represents the signal ’ s expansion in the eigenvector basis and describes the graph spectrum of x . The inverse GFT reconstructs the graph signal from its graph spectrum by combining graph frequency components weighted by the coefficients of the signal ’ s graph Fourier transform . GFT is a tool that has been popular for analyzing graph signals in the graph spectral domain . Graph convolution neural networks . GCNs permeate the benefits of CNNs from processing Euclidean data to modeling graph structured data . GCNs model graph data through a succession of layers , each of which consists of a graph convolution operation ( graph filter ) , a pointwise nonlinear function σ ( · ) , and oftentimes also a pooling operation . Given a graph signal x ∈ RN , the graph convolution operation diffuses each node ’ s information to its neighbors according to the graph shift matrix S , as Sx . The nth entry [ Sx ] n = ∑ n′∈Nn Snn′xn′ is a weighted average of the one-hop neighboring features . Successive application of S will increase the reception field , spreading the information across the network . Hence , a Kth order graph convolution operation ( graph filtering ) is h ( S ) x : = K∑ k=0 wkS kx = Vĥ ( Λ ) x̂ ( 1 ) where the graph filter h ( · ) is parameterized by the learnable weights { wk } Kk=0 , and the graph filter in the graph spectral domain is ĥ ( Λ ) = ∑K k=0 wkΛ k. In the graph vertex domain , the learnable weights reflect the influences from various orders of neighbors ; and in the graph spectral domain , those weights adaptively adjust the focus and emphasize certain graph frequency bands . GCNs employ various graph filter banks per layer , and learn the parameters that minimize a predefined learning objective , such as classification , or regression . Graph scattering transforms . GSTs are the nontrainable counterparts of GCNs , where the parameters of the graph convolutions are selected based on mathematical designs . GSTs process the input at each layer by a sequential application of graph filter banks { hj ( S ) } Jj=1 , an elementwise nonlinear function σ ( · ) , and a pooling operator U . At the first layer , the input graph signal x ∈ RN constitutes the first scattering feature vector z ( 0 ) : = x . Next , z ( 0 ) is processed by the graph filter banks and σ ( · ) to generate { z ( j ) } Jj=1 with z ( j ) : = σ ( hj ( S ) z ( 0 ) ) . At the second layer , the same operation is repeated per j . The resulting computation structure is a tree with J branches at each non-leaf node ; see also Fig . 2 . The ` th layer of the tree includes J ` nodes . Each tree node at layer ` in the scattering transform is indexed by the path p ( ` ) of the sequence of ` graph convolutions applied to the input graph signal x , i.e . p ( ` ) : = ( j ( 1 ) , j ( 2 ) , . . . 
, j ( ` ) ) .1 The scattering feature vector at the tree node indexed by ( p ( ` ) , j ) at layer ` + 1 is z ( p ( ` ) , j ) = σ ( hj ( S ) z ( p ( ` ) ) ) ( 2 ) where the variable p ( ` ) holds the list of indices of the parent nodes ordered by ancestry , and all path p ( ` ) in the tree with length ` are included in the path set P ( ` ) with |P ( ` ) | = 2 ` . The nonlinear transformation function σ ( · ) disperses the graph frequency representation through the spectrum , and endows the GST with increased discriminating power ( Gama et al. , 2019a ) . By exploiting the sparsity of the graph , the computational complexity of ( 2 ) is O ( KE ) , where E = |E| is the number of edges in G.2 Each scattering feature vector z ( p ( ` ) ) is summarized by an aggregation operator U ( · ) to obtain a scalar scattering coefficient as φ ( p ( ` ) ) : = U ( z ( p ( ` ) ) ) , where U ( · ) is typically an average or sum operator that effects dimensionality reduction of the extracted features . The scattering coefficient at each tree node reflects the activation level at a certain graph frequency band . These scattering coefficients are collected across all tree nodes to form a scattering feature map Φ ( x ) : = { { φ ( p ( ` ) ) } p ( ` ) ∈P ( ` ) } L ` =0 ( 3 ) where |Φ ( x ) | = ∑L ` =0 J ` . The GST operation resembles a forward pass of a trained GCN . This is why several works study GST stability under perturbations of S in order to understand the working mechanism of GCNs ( Zou & Lerman , 2019 ; Gama et al. , 2019a ; b ) .
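The sketch below makes the construction concrete (our own illustration; random filter taps stand in for the mathematically designed graph wavelets). It builds polynomial filters hj(S) as in eq. (1) and expands the scattering tree of eq. (2), collecting one coefficient per tree node as in eq. (3).

```python
import numpy as np

def poly_filter(S, w):
    """h(S) = sum_k w_k S^k, a K-th order graph filter as in eq. (1)."""
    return sum(wk * np.linalg.matrix_power(S, k) for k, wk in enumerate(w))

def scattering_map(x, filters, L, sigma=np.abs, U=np.mean):
    """Unpruned GST: returns {path p -> coefficient U(z_p)} for all tree nodes
    up to depth L; the tree has sum_{l=0}^{L} J^l nodes with J = len(filters)."""
    coeffs, frontier = {(): U(x)}, [((), x)]
    for _ in range(L):
        new_frontier = []
        for path, z in frontier:
            for j, h in enumerate(filters):
                z_child = sigma(h @ z)                  # eq. (2)
                coeffs[path + (j,)] = U(z_child)        # scattering coefficient of the child node
                new_frontier.append((path + (j,), z_child))
        frontier = new_frontier
    return coeffs

# Toy usage: a random 6-node graph, J = 3 second-order filters, L = 3 layers.
rng = np.random.default_rng(0)
A = np.triu(rng.integers(0, 2, size=(6, 6)), 1); A = A + A.T
S = np.diag(A.sum(1)) - A                               # Laplacian as the graph shift matrix
filters = [poly_filter(S, rng.normal(size=3)) for _ in range(3)]
phi = scattering_map(rng.normal(size=6), filters, L=3)
print(len(phi))                                         # 1 + 3 + 9 + 27 = 40 coefficients
```

The exponential growth of the number of coefficients with the number of layers is exactly the cost that the pruning criterion proposed in this paper cuts back.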
A scattering transform on graphs consists of a cascade of wavelets, a modulus non-linearity, and a low-pass filter. The wavelets and the low-pass filter are designed in the spectral domain, which is computationally expensive. Instead of computing all cascades of wavelets, this paper proposes to prune scattering paths that have the lowest energy. This is simple and numerically efficient.
SP:1ccaa054dc814a12e6cea27fdc8cdd0d53b25794
Pruned Graph Scattering Transforms
1 INTRODUCTION . The abundance of graph-structured data calls for advanced learning techniques , and complements nicely standard machine learning tools that can not be directly applied to irregular data domains . Permeating the benefits of deep learning to the graph domain , graph convolutional networks ( GCNs ) provide a versatile and powerful framework to learn from complex graph data ( Bronstein et al. , 2017 ) . GCNs and variants thereof have attained remarkable success in social network analysis , 3D point cloud processing , recommender systems and action recognition . However , researchers have recently reported inconsistent perspectives on the appropriate designs for GCN architectures . For example , experiments in social network analysis have argued that deeper GCNs marginally increase the learning performance ( Wu et al. , 2019 ) , whereas a method for 3D point cloud segmentation achieves state-ofthe-art performance with a 56-layer GCN network ( Li et al. , 2019 ) . These ‘ controversial ’ empirical findings motivate theoretical analysis to understand the fundamental performance factors and the architecture design choices for GCNs . Aiming to bestow GCNs with theoretical guarantees , one promising research direction is to study graph scattering transforms ( GSTs ) . GSTs are non-trainable GCNs comprising a cascade of graph filter banks followed by nonlinear activation functions . The graph filter banks are mathematically designed and are adopted to scatter an input graph signal into multiple channels . GSTs extract scattering features that can be utilized towards graph learning tasks ( Gao et al. , 2019 ) , with competitive performance especially when the number of training examples is small . Under certain conditions on the graph filter banks , GSTs are endowed with energy conservation properties ( Zou & Lerman , ∗This work was mainly done while V. N. Ioanndis was working at Mitsubishi Electric Research Laboratories . 2019 ) , as well as stability meaning robustness to graph topology deformations ( Gama et al. , 2019a ) . However , GSTs are associated with exponential complexity in space and time that increases with the number of layers . This discourages deployment of GSTs when a deep architecture is needed . Furthermore , stability should not come at odds with sensitivity . A filter ’ s output should be sensitive to and “ detect ” perturbations of large magnitude . Lastly , graph data in different domains ( social networks , 3D point clouds ) have distinct properties , which encourages GSTs with domain-adaptive architectures . The present paper develops a data-adaptive pruning framework for the GST to systematically retain important features . Specifically , the contribution of this work is threefold . C1 . We put forth a pruning approach to select informative GST features that we naturally term pruned graph scattering transform ( pGST ) . The pruning decisions are guided by a criterion promoting alignment ( matching ) of the input graph spectrum with that of the graph filters . The optimal pruning decisions are provided on-the-fly , and alleviate the exponential complexity of GSTs . C2 . We prove that the pGST is stable to perturbations of the input graph signals . Under certain conditions on the energy of the perturbations , the resulting pruning patterns before and after the perturbations are identical and the overall pGST is stable . C3 . 
We showcase with extensive experiments that : i ) the proposed pGSTs perform similarly and in certain cases better than the baseline GSTs that use all scattering features , while achieving significant computational savings ; ii ) The extracted features from pGSTs can be utilized towards graph classification and 3D point cloud recognition . Even without any training on the feature extraction step , the performance is comparable to state-of-the-art deep supervised learning approaches , particularly when training data are scarce ; and iii ) By analyzing the pruning patterns of the pGST , we deduce that graph signals in different domains call for different network architectures ; see Fig . 1 . 2 RELATED WORK . GCNs rely on a layered processing architecture comprising trainable graph convolution operations to linearly combine features per graph neighborhood , followed by pointwise nonlinear functions applied to the linearly transformed features ( Bronstein et al. , 2017 ) . Complex GCNs and their variants have shown remarkable success in graph semi-supervised learning ( Kipf & Welling , 2017 ; Veličković et al. , 2018 ) and graph classification ( Ying et al. , 2018 ) . To simplify GCNs , ( Wu et al. , 2019 ) has shown that by employing a single-layer linear GCN the performance in certain social network learning tasks degrades only slightly . On the other hand , ( Li et al. , 2019 ) has developed a 56-layer GCN that achieves state-of-the-art performance in 3D point cloud segmentation . Hence , designing GCN architectures guided by properties of the graph data is a highly motivated research question . Towards theoretically explaining the success of GCNs , recent works study the stability properties of GSTs with respect to metric deformations of the domain ( Gama et al. , 2019b ; a ; Zou & Lerman , 2019 ) . GSTs generalize scattering transforms ( Bruna & Mallat , 2013 ; Mallat , 2012 ) to non-Euclidean domains . GSTs are a cascade of graph filter banks and nonlinear operations that is organized in a tree-structured architecture . The number of extracted scattering features of a GST grows exponentially with the number of layers . Theoretical guarantees for GSTs are obtained after fixing the graph filter banks to implement a set of graph wavelets . The work in ( Zou & Lerman , 2019 ) establishes energy conservation properties for GSTs given that certain energy-preserving graph wavelets are employed , and also prove that GSTs are stable to graph structure perturbations ; see also ( Gama et al. , 2019b ) that focuses on diffusion wavelets . On the other hand , ( Gama et al. , 2019a ) proves stability to relative metric deformations for a wide class of graph wavelet families . These contemporary works shed light into the stability and generalization capabilities of GCNs . However , stable transforms are not necessarily informative , and albeit highly desirable , a principled approach to selecting informative GST features remains still an uncharted venue . 3 BACKGROUND . Consider a graph G : = { V , E } with node set V : = { vi } Ni=1 , and edge set E : = { ei } Ei=1 . Its connectivity is described by the graph shift matrix S ∈ RN×N , whose ( n , n′ ) th entry Snn′ is nonzero if ( n , n′ ) ∈ E or if n = n′ . A typical choice for S is the adjacency or the Laplacian matrix . Further , each node can be also associated with a few attributes . Collect attributes across all nodes in the matrix X : = [ x1 , . . . , xF ] ∈ RN×F , where each column xf ∈ RN is a ‘ graph signal. ’ Graph Fourier transform . 
A Fourier transform corresponds to the expansion of a signal over bases that are invariant to filtering ; here , this graph frequency basis is the eigenbasis of the graph shift matrix S. Henceforth , S is assumed normal with S = VΛV > , where V ∈ RN×N forms the graph Fourier basis , and Λ ∈ RN×N is the diagonal matrix of corresponding eigenvalues λ0 , . . . , λN−1 . These eigenvalues represent graph frequencies . The graph Fourier transform ( GFT ) of x ∈ RN is x̂ = V > x ∈ RN , while the inverse transform is x = Vx̂ . The vector x̂ represents the signal ’ s expansion in the eigenvector basis and describes the graph spectrum of x . The inverse GFT reconstructs the graph signal from its graph spectrum by combining graph frequency components weighted by the coefficients of the signal ’ s graph Fourier transform . GFT is a tool that has been popular for analyzing graph signals in the graph spectral domain . Graph convolution neural networks . GCNs permeate the benefits of CNNs from processing Euclidean data to modeling graph structured data . GCNs model graph data through a succession of layers , each of which consists of a graph convolution operation ( graph filter ) , a pointwise nonlinear function σ ( · ) , and oftentimes also a pooling operation . Given a graph signal x ∈ RN , the graph convolution operation diffuses each node ’ s information to its neighbors according to the graph shift matrix S , as Sx . The nth entry [ Sx ] n = ∑ n′∈Nn Snn′xn′ is a weighted average of the one-hop neighboring features . Successive application of S will increase the reception field , spreading the information across the network . Hence , a Kth order graph convolution operation ( graph filtering ) is h ( S ) x : = K∑ k=0 wkS kx = Vĥ ( Λ ) x̂ ( 1 ) where the graph filter h ( · ) is parameterized by the learnable weights { wk } Kk=0 , and the graph filter in the graph spectral domain is ĥ ( Λ ) = ∑K k=0 wkΛ k. In the graph vertex domain , the learnable weights reflect the influences from various orders of neighbors ; and in the graph spectral domain , those weights adaptively adjust the focus and emphasize certain graph frequency bands . GCNs employ various graph filter banks per layer , and learn the parameters that minimize a predefined learning objective , such as classification , or regression . Graph scattering transforms . GSTs are the nontrainable counterparts of GCNs , where the parameters of the graph convolutions are selected based on mathematical designs . GSTs process the input at each layer by a sequential application of graph filter banks { hj ( S ) } Jj=1 , an elementwise nonlinear function σ ( · ) , and a pooling operator U . At the first layer , the input graph signal x ∈ RN constitutes the first scattering feature vector z ( 0 ) : = x . Next , z ( 0 ) is processed by the graph filter banks and σ ( · ) to generate { z ( j ) } Jj=1 with z ( j ) : = σ ( hj ( S ) z ( 0 ) ) . At the second layer , the same operation is repeated per j . The resulting computation structure is a tree with J branches at each non-leaf node ; see also Fig . 2 . The ` th layer of the tree includes J ` nodes . Each tree node at layer ` in the scattering transform is indexed by the path p ( ` ) of the sequence of ` graph convolutions applied to the input graph signal x , i.e . p ( ` ) : = ( j ( 1 ) , j ( 2 ) , . . . 
, j ( ` ) ) .1 The scattering feature vector at the tree node indexed by ( p ( ` ) , j ) at layer ` + 1 is z ( p ( ` ) , j ) = σ ( hj ( S ) z ( p ( ` ) ) ) ( 2 ) where the variable p ( ` ) holds the list of indices of the parent nodes ordered by ancestry , and all path p ( ` ) in the tree with length ` are included in the path set P ( ` ) with |P ( ` ) | = 2 ` . The nonlinear transformation function σ ( · ) disperses the graph frequency representation through the spectrum , and endows the GST with increased discriminating power ( Gama et al. , 2019a ) . By exploiting the sparsity of the graph , the computational complexity of ( 2 ) is O ( KE ) , where E = |E| is the number of edges in G.2 Each scattering feature vector z ( p ( ` ) ) is summarized by an aggregation operator U ( · ) to obtain a scalar scattering coefficient as φ ( p ( ` ) ) : = U ( z ( p ( ` ) ) ) , where U ( · ) is typically an average or sum operator that effects dimensionality reduction of the extracted features . The scattering coefficient at each tree node reflects the activation level at a certain graph frequency band . These scattering coefficients are collected across all tree nodes to form a scattering feature map Φ ( x ) : = { { φ ( p ( ` ) ) } p ( ` ) ∈P ( ` ) } L ` =0 ( 3 ) where |Φ ( x ) | = ∑L ` =0 J ` . The GST operation resembles a forward pass of a trained GCN . This is why several works study GST stability under perturbations of S in order to understand the working mechanism of GCNs ( Zou & Lerman , 2019 ; Gama et al. , 2019a ; b ) .
In this paper, the authors develop graph scattering transforms (GSTs) with a pruning algorithm, with the aim of reducing the running time and space cost, improving robustness to perturbations of the input graph signal, and allowing flexibility for domain adaptation. To this end, the pruned graph scattering transform (pGST) is proposed, based on the alignment between the graph spectrum of the graph filters and that of the scattering features. The intuition is to treat tree nodes as subbands of the graph spectrum and to prune tree nodes that do not have sufficient overlap with the graph spectrum of a graph signal. The pruning problem is formulated as an optimization problem, and a solution is developed with theoretical analysis. Moreover, an analysis of the stability and sensitivity to perturbations is provided. Overall, the algorithm development is solid. The experimental results demonstrate that the proposed pGST can outperform GST on graph classification tasks with less running time. Compared with some supervised GNN methods, the proposed pGST still achieves comparable results on several datasets.
SP:1ccaa054dc814a12e6cea27fdc8cdd0d53b25794
Combining MixMatch and Active Learning for Better Accuracy with Fewer Labels
We propose using active learning based techniques to further improve the state-of-the-art semi-supervised learning MixMatch algorithm . We provide a thorough empirical evaluation of several active-learning and baseline methods , which successfully demonstrates a significant improvement on the benchmark CIFAR-10 , CIFAR-100 , and SVHN datasets ( as much as 1.5 % in absolute accuracy ) . We also provide an empirical analysis of the cost trade-off between incrementally gathering more labeled versus unlabeled data . This analysis can be used to measure the relative value of labeled/unlabeled data at different points of the learning curve , where we find that although the incremental value of labeled data can be as much as 20x that of unlabeled , it quickly diminishes to less than 3x once more than 2,000 labeled examples are observed . 1 INTRODUCTION . Sophisticated machine learning models have demonstrated state-of-the-art performance across many different domains , such as vision , audio , and text . However , to train these models one often needs access to very large amounts of labeled data , which can be costly to produce . Consider , for example , laborious tasks such as image annotation , audio transcription , or natural language part-of-speech tagging . Several lines of work in machine learning take this cost into account and attempt to reduce the dependence on large quantities of labeled data . In semi-supervised learning ( SSL ) , both labeled and unlabeled data ( which is often much cheaper to obtain ) are leveraged to train a model . Unlabeled data can be used to learn properties of the distribution of features , which then allow for more sophisticated and effective regularization schemes , for example , enforcing that examples close in feature space are labeled similarly . Another , different , approach for addressing costly labeled data is that of active learning ( AL ) . Here , a model is still trained using only labeled data , but extra care is taken when deciding which unlabeled data examples are to be labeled . Often , the data will be labeled iteratively in batches , where at each iteration an update is made to a current view of the distribution over labels and the next batch of points is selected from regions where the distribution is least certain . As discussed in depth in the following section , the approaches of semi-supervised and active learning can be complementary and used in conjunction to help solve the problem of costly labels . Our Contributions : In this paper , we take MixMatch , the leading semi-supervised learning technique , and thoroughly evaluate its performance when combined with several active learning methods . We find very encouraging results , which show that state-of-the-art performance is achieved in the limited label setting on the CIFAR-10 , CIFAR-100 , and SVHN datasets , demonstrating that combining active learning techniques with MixMatch provides a significant improvement . Furthermore , we perform an analysis , exploring the incremental benefits of labeled versus unlabeled data at different points in the learning curve . Given the relative costs of labeled and unlabeled data , such an analysis aids us in deciding how to best spend a given budget in acquiring labeled versus unlabeled data . The remainder of the paper is organized as follows : We first give a high-level review of active learning related work and the MixMatch algorithm in Sections 2 and 3 , respectively .
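To make the iterative batch labeling loop described above concrete, here is a generic active-learning sketch (our own illustration; train_fn, predict_fn and oracle are placeholders for the user's model training, prediction and labeling routines, and the margin score is only one of the confidence notions that the paper considers later in Section 4.2).

```python
import numpy as np

def least_certain(probs, batch_size):
    """Indices of the unlabeled examples whose predicted label distribution is
    least certain, scored by the gap between the top two class probabilities."""
    top2 = np.sort(probs, axis=1)[:, -2:]
    margin = top2[:, 1] - top2[:, 0]
    return np.argsort(margin)[:batch_size]

def active_learning(train_fn, predict_fn, oracle, labeled, unlabeled, rounds, batch_size):
    """Iterative batch active learning: retrain on the labeled pool, score the
    unlabeled pool, query labels for the most uncertain points, and repeat
    until the labeling budget is exhausted."""
    for _ in range(rounds):
        model = train_fn(labeled)
        probs = predict_fn(model, unlabeled)             # shape (num_unlabeled, num_classes)
        picked = set(least_certain(probs, batch_size))
        labeled += [(unlabeled[i], oracle(unlabeled[i])) for i in picked]
        unlabeled = [x for i, x in enumerate(unlabeled) if i not in picked]
    return train_fn(labeled)
```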
The evaluated methods and experimental setup are described in Section 4 , while the experimental results are presented and analyzed in Section 5 . Finally , we conclude in Section 6 . 2 ACTIVE LEARNING AND RELATED WORK . Active learning ( or active sampling ) methods are designed to answer the question : Given a limited labeling budget , which examples should I expend my budget on ? Like semi-supervised learning , active learning is particularly useful when labeled data is costly or scarce for any of a host of reasons . Generally , an active learning algorithm iteratively selects samples to label , based on the data labeled thus far as well as the model family being trained . Once the labels are received , they are added to the labeled training set and the process continues to the next iteration , until the budget is finally exhausted . Thus , active learning can be considered to be part of the model training process , where classical model training is interleaved with data labeling . Here we give a very brief introduction to a few classes of active learning algorithms , and their use with semi-supervised learning . There are several active learning algorithms with strong theoretical guarantees , which either explicitly or implicitly define a version space that maintains the set of “ good ” candidate classifiers based on the data labeled thus far . The algorithms then suggest labeling points that would most quickly shrink this version space ( Cohn et al. , 1994 ; Dasgupta et al. , 2008 ; Beygelzimer et al. , 2009 ) . In practice , tracking this version space can be computationally inefficient for all but the simplest ( e.g . linear ) model families . Another related approach is that of query-by-committee , where points are selected for labeling based on the level of disagreement between a committee of classifiers , which have been selected using the currently labeled pool . Query-by-committee methods benefit from theoretical guarantees ( Seung et al. , 1992 ) , as well as practical implementations based on bagging ( Abe & Mamitsuka , 1998 ) . These methods , along with uncertainty sampling that we introduce next , are only a few prototypical examples of active learning algorithms . For a broader introductory survey of algorithms and techniques , please see Settles ( 2009 ) and the references therein . Arguably , the most popular and practically effective active learning technique is that of uncertainty or margin sampling ( Lewis & Gale , 1994 ; Lewis & Catlett , 1994 ) . At each iteration , this approach first trains a model using the currently labeled subset , then it makes predictions on all unlabeled points under consideration and , finally , it queries labels for those points where the confidence in the model ’ s prediction is the smallest . The notion of confidence can be defined in several different ways , as will be detailed in Section 4.2 . In this light , we can see that active learning methods and semi-supervised learning methods can work in a complementary fashion . At a high level , active learning methods seek to find training examples for which we have the least confidence in the underlying labels in order to query for those labels , while semi-supervised learning algorithms can focus on training examples where there is strong confidence in the label distribution and reasonable conclusions can be made regarding unlabeled examples . It is no surprise that combining active learning and semi-supervised learning has been investigated previously ( Hoi et al. , 2009 ; Muslea et al. , 2002 ) . 
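As a concrete illustration of the uncertainty/margin sampling strategy described above, the following minimal sketch selects the next batch of points to label by the gap between the two most confident classes. The model and oracle interfaces in the usage comment are hypothetical placeholders, not part of any particular implementation.

```python
import numpy as np

def margin_uncertainty_select(probs: np.ndarray, batch_size: int) -> np.ndarray:
    """Pick the unlabeled examples whose two most likely classes are closest in probability.

    probs: (num_unlabeled, num_classes) array of model predictions on the unlabeled pool.
    Returns the indices of the `batch_size` most uncertain examples.
    """
    top2 = np.partition(probs, -2, axis=1)[:, -2:]   # two largest probabilities per row
    margin = top2[:, 1] - top2[:, 0]                 # small margin = high uncertainty
    return np.argsort(margin)[:batch_size]

# One iteration of the active-learning loop (model/oracle API is hypothetical):
# probs = model.predict_proba(unlabeled_pool)
# query_idx = margin_uncertainty_select(probs, batch_size=50)
# labeled_set += oracle.label(unlabeled_pool[query_idx])
```

Other confidence measures (e.g. the raw maximum class probability) can be swapped in for the margin without changing the surrounding loop.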
We highlight a few of these examples below ( also see Section 7.1 of Settles ( 2009 ) ) . One of the earliest lines of work combining the two techniques is that of McCallum & Nigam ( 1998 ) , where the query-by-committee framework is combined with an expectation maximization ( EM ) approach that is used to provide pseudo-labels to those examples that have not been queried for true underlying labels by the active learning method . They show that , on a text classification task , the improvement in label complexity of the combined method is better than what either semi-supervised or active learning alone can provide . In the work of Zhu et al . ( 2003 ) , the two approaches are combined using a Gaussian random field model . Given a model trained on the union of the currently labeled and unlabeled datasets , the expected reduction in the estimated risk due to receiving the label of a particular example can be computed and greedily optimized . Tur et al . ( 2005 ) and Tomanek & Hahn ( 2009 ) both combine uncertainty sampling to label uncertain points with machine labeling of confident examples and show their effectiveness in applications to spoken language understanding and sequence labeling tasks , respectively . In this work , we combine uncertainty-sampling based active learning methods with MixMatch and demonstrate that its already state-of-the-art semi-supervised performance can be significantly further improved . 3 MIXMATCH . We only focus on deep learning based semi-supervised learning ( SSL ) in this work . We use the framework from Oliver et al . ( 2018 ) to perform a realistic evaluation of the results ( same splits of the labeled/unlabeled initial data , same network architecture ) . Recent popular techniques are built around two concepts : Consistency Regularization ( Mean Teacher ( Tarvainen & Valpola , 2017 ) , Π-model ( Laine & Aila , 2017 ) , Virtual Adversarial Training ( VAT ) ( Miyato et al. , 2018 ) ) , which enforces that two augmentations of the same image must have the same label , and Entropy Minimization ( Pseudo-Label ( Lee , 2013 ) , Entropy Minimization ( EntMin ) ( Grandvalet & Bengio , 2005 ) ) , which states that a prediction should be confident . These two concepts are sometimes combined ; for example , the technique VAT EntMin is a combination of VAT and Entropy Minimization , and MixMatch also uses both of these concepts . MixMatch , a recent state-of-the-art semi-supervised learning ( SSL ) technique , is designed around the idea of guessing labels for the unlabeled data followed by standard fully supervised training ( Berthelot et al. , 2019 ) . Consider a classification task with classes $C$ . The inputs of MixMatch are a batch of $B$ ( image , label ) pairs $X = \{ ( x_b , p_b ) \}_{1 \le b \le B}$ , where each label $p_b$ is a one-hot vector over the classes $C$ , and a batch of unlabeled examples ( images ) $U = \{ u_b \}_{1 \le b \le B}$ . Note , in this section , a “ batch ” is in reference to the subset of points used in the iterative optimization procedure that is used to train the model , while elsewhere in the paper “ batch ” refers to a set of newly labeled points that are added to the training set during the iterative active learning process . Label Guessing . Label guessing is done by averaging the training model ’ s own predictions on several augmentations $\hat{u}_{b , k}$ of the same image $u_b$ . This average prediction is then sharpened to produce a low-entropy soft label $q_b$ for each image $u_b$ . 
Formally , the average is defined as $\bar{q}_b = \frac{1}{K} \sum_{k=1}^{K} p_{\mathrm{model}} ( y \mid \hat{u}_{b , k} ; \theta )$ , where $p_{\mathrm{model}} ( y \mid x ; \theta )$ is the model ’ s output distribution over class labels $y$ on input $x$ with parameters $\theta$ . A sharpening is applied to the average prediction : $q_b = \mathrm{Sharpen} ( \bar{q}_b )$ . In practice , MixMatch uses a standard softmax temperature reduction computed as $\mathrm{Sharpen} ( p )_i := p_i^{1/T} / \sum_{j=1}^{|C|} p_j^{1/T}$ , where $T = 1/2$ is a fixed hyper-parameter . Data Augmentation . MixMatch only uses standard ( weak ) augmentations . For the SVHN dataset , we use only random pixel shifts , while for CIFAR-10 and CIFAR-100 we also use random mirroring . Fully supervised . Finally , MixMatch uses fully supervised techniques . In practice , it uses weight decay and MixUp across the labeled and unlabeled data ( Zhang et al. , 2017 ) . In essence , MixUp generates new training examples by computing the convex combination of two existing ones . Specifically , it does a pixel-level interpolation between images and a pairwise interpolation between probability distributions . The resulting interpolated label is a soft label . Such examples encourage the model to make smooth transitions between classes . Let $\hat{X} = \{ ( \hat{x}_b , p_b ) \}_{1 \le b \le B}$ and $\hat{U} = \{ ( u_b , q_b ) \}_{1 \le b \le B}$ be the results of data augmentation and label guessing . MixMatch shuffles the union of the two batches $\hat{X} \cup \hat{U}$ into a batch $W$ of size $2B$ and performs MixUp to produce the output : • $X' = \mathrm{MixUp} ( \hat{X} , W_{[ 1 , \dots , B ]} )$ ( mixing up $\hat{X}$ with the first half of $W$ ) and • $U' = \mathrm{MixUp} ( \hat{U} , W_{[ B+1 , \dots , 2B ]} )$ ( mixing up $\hat{U}$ with the second half of $W$ ) . Given two examples $( x_1 , p_1 )$ and $( x_2 , p_2 )$ , where $x_1 , x_2$ are feature vectors and $p_1 , p_2$ are one-hot encodings or soft labels , depending on whether the corresponding feature vector is labeled or not , MixMatch performs MixUp as follows : ( 1 ) sample $\lambda \sim \mathrm{Beta} ( \alpha , \alpha )$ from a Beta distribution parameterized by the hyper-parameter $\alpha$ ; ( 2 ) for $\lambda' = \max ( 1 - \lambda , \lambda )$ , compute $x' = \lambda' x_1 + ( 1 - \lambda' ) x_2$ and $p' = \lambda' p_1 + ( 1 - \lambda' ) p_2$ . Loss function . Similar to other SSL paradigms , the loss function of MixMatch consists of a sum of two terms : ( 1 ) a cross-entropy loss between the predicted label distribution and the ground-truth label , and ( 2 ) a Brier score ( squared $L_2$ loss ) for the unlabeled data , which is less sensitive to incorrectly predicted labels . On a MixMatch batch $( X' , U' )$ , the loss function is $L = L_X + \lambda_U L_U$ , where $L_X = \frac{1}{|X'|} \sum_{( x , p ) \in X'} \mathrm{CrossEntropy} ( p , p_{\mathrm{model}} ( y \mid x ; \theta ) )$ ( 1 ) and $L_U = \frac{1}{|C| |U'|} \sum_{( u , q ) \in U'} \| q - p_{\mathrm{model}} ( y \mid u ; \theta ) \|_2^2$ ( 2 ) . Here $\lambda_U$ is the hyper-parameter controlling the importance of the unlabeled data to the training process . We set $\lambda_U$ to be 75 for CIFAR-10 , 150 for CIFAR-100 , and 250 for SVHN . We also fix the MixUp hyper-parameter $\alpha = 0.75$ .
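To make the label-guessing and MixUp steps concrete, here is a minimal NumPy sketch of the sharpening operator and the MixMatch-style MixUp combination defined above. It only mirrors the stated formulas; the temperature T, the Beta parameter α, and the augmentation pipeline that produces the per-augmentation predictions are taken as given.

```python
import numpy as np

def sharpen(p: np.ndarray, T: float = 0.5) -> np.ndarray:
    """Temperature sharpening: Sharpen(p)_i = p_i^(1/T) / sum_j p_j^(1/T)."""
    p_pow = p ** (1.0 / T)
    return p_pow / p_pow.sum(axis=-1, keepdims=True)

def guess_label(aug_probs: np.ndarray, T: float = 0.5) -> np.ndarray:
    """Average the model's predictions over K augmentations of one image, then sharpen.

    aug_probs: (K, num_classes) predicted distributions on K augmentations of u_b.
    """
    q_bar = aug_probs.mean(axis=0)
    return sharpen(q_bar, T)

def mixmatch_mixup(x1, p1, x2, p2, alpha: float = 0.75, rng=None):
    """MixMatch-style MixUp of two (feature, label-distribution) pairs."""
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)
    lam = max(lam, 1.0 - lam)          # lambda' = max(lambda, 1 - lambda)
    x = lam * x1 + (1.0 - lam) * x2
    p = lam * p1 + (1.0 - lam) * p2
    return x, p
```

Taking λ' = max(λ, 1 − λ) keeps each mixed example closer to its first argument, so X' remains dominated by the labeled batch and U' by the unlabeled one.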
This paper proposes a method that can deal with an active-learning scenario for the recently proposed semi-supervised learning method: MixMatch. More specifically, the proposed method considers uncertainty measures to choose samples and a diversification step to ensure diversity within the sampled batch. For uncertainty measures, the paper considers the simple maximum confidence and the gap between the two most likely classes. Additional augmentation techniques inspired by MixMatch are used. For diversification, a clustering method and an information density method are considered. Furthermore, the paper proposes a cost analysis model to compare labeled and unlabeled samples. Experiments demonstrate the behavior of the proposed method.
SP:19e84ea0dc79d30cfbdb25c0b768536f38820885
Combining MixMatch and Active Learning for Better Accuracy with Fewer Labels
The paper proposes to combine active learning techniques with MixMatch for semi-supervised learning. First, they review active learning and semi-supervised learning, especially MixMatch. Instead of traditional semi-supervised learning with a fixed set of labeled examples, they incrementally grow the labeled set as the training process goes on. They consider several different choices in active learning strategies: uncertainty measure and diversification. Diversification methods are used to balance the samples in different classes and ensure diversity. The cost analysis of adding labeled vs unlabeled data looks interesting. They perform an empirical evaluation on image benchmarks and improve over MixMatch.
SP:19e84ea0dc79d30cfbdb25c0b768536f38820885
MMA Training: Direct Input Space Margin Maximization through Adversarial Training
1 INTRODUCTION Despite their impressive performance on various learning tasks , neural networks have been shown to be vulnerable to adversarial perturbations ( Szegedy et al. , 2013 ; Biggio et al. , 2013 ) . An artificially constructed imperceptible perturbation can cause a significant drop in the prediction accuracy of an otherwise accurate network . The level of distortion is measured by the magnitude of the perturbations ( e.g . in $\ell_\infty$ or $\ell_2$ norms ) , i.e . the distance from the original input to the perturbed input . Figure 1 shows an example , where the classifier changes its prediction from panda to bucket when the input is perturbed from the blue sample point to the red one . Figure 1 also shows the natural connection between adversarial robustness and the margins of the data points , where the margin is defined as the distance from a data point to the classifier ’ s decision boundary . Intuitively , the margin of a data point $x$ is the minimum distance that $x$ has to be perturbed to change the classifier ’ s prediction . Thus , the larger the margin is , the farther the distance from the input to the decision boundary is , and the more robust the classifier is w.r.t . this input . Although naturally connected to adversarial robustness , “ directly ” maximizing margins has not yet been thoroughly studied in the adversarial robustness literature . Instead , the method of minimax adversarial training ( Huang et al. , 2015 ; Madry et al. , 2017 ) is arguably the most common defense against adversarial perturbations due to its effectiveness and simplicity . Adversarial training attempts to minimize the maximum loss within a fixed-size neighborhood about the training data using projected gradient descent ( PGD ) . Despite advancements made in recent years ( Hendrycks et al. , 2019 ; Zhang et al. , 2019a ; Shafahi et al. , 2019 ; Zhang et al. , 2019b ; Stanforth et al. , 2019 ; Carmon et al. , 2019 ) , adversarial training still suffers from a fundamental problem : the perturbation length $\epsilon$ has to be set and is fixed throughout the training process . In general , the setting of $\epsilon$ is arbitrary , based on assumptions on whether perturbations within the defined ball are “ imperceptible ” or not . Recent work ( Guo et al. , 2018 ; Sharma et al. , 2019 ) has demonstrated that these assumptions do not consistently hold true ; commonly used settings assumed to only allow imperceptible perturbations in fact do not . If $\epsilon$ is set too small , the resulting models lack robustness ; if too large , the resulting models lack accuracy . Moreover , individual data points may have different intrinsic robustness , the variation in ambiguity in collected data is highly diverse , and fixing one $\epsilon$ for all data points across the whole training procedure is likely suboptimal . Instead of improving adversarial training with a fixed perturbation magnitude , we revisit adversarial robustness from the margin perspective , and propose Max-Margin Adversarial ( MMA ) training for “ direct ” input margin maximization . By directly maximizing margins calculated for each data point , MMA training allows for optimizing the “ current robustness ” of the data , the “ correct ” $\epsilon$ at this point in training for each sample individually , instead of robustness w.r.t . a predefined magnitude . While it is intuitive that one can achieve the greatest possible robustness by maximizing the margin of a classifier , this maximization has technical difficulties . 
In Section 2 , we overcome these difficulties and show that margin maximization can be achieved by minimizing a classification loss w.r.t . model parameters , at the “ shortest successful perturbation ” . This makes gradient descent viable for margin maximization , despite the fact that model parameters are entangled in the constraints . We further analyze adversarial training ( Huang et al. , 2015 ; Madry et al. , 2017 ) from the perspective of margin maximization in Section 3 . We show that , for each training example , adversarial training with fixed perturbation length $\epsilon$ is maximizing a lower ( or upper ) bound of the margin , if $\epsilon$ is smaller ( or larger ) than the margin of that training point . As such , MMA training improves adversarial training , in the sense that it selects the “ correct ” $\epsilon$ , the margin value , for each example . Finally , in Section 4 , we test and compare MMA training with adversarial training on MNIST and CIFAR10 w.r.t . $\ell_\infty$ and $\ell_2$ robustness . Our method achieves higher robustness accuracies on average under a variety of perturbation magnitudes , which echoes its goal of maximizing the average margin . Moreover , MMA training automatically balances accuracy vs robustness while being insensitive to its hyperparameter setting , which contrasts sharply with the sensitivity of standard adversarial training to its fixed perturbation magnitude . MMA trained models not only match the performance of the best adversarially trained models with carefully chosen training $\epsilon$ under different scenarios , they also match the performance of ensembles of adversarially trained models . In this paper , we focus our theoretical efforts on the formulation for directly maximizing the input space margin , and on understanding the standard adversarial training method from a margin maximization perspective . We focus our empirical efforts on thoroughly examining our MMA training algorithm , comparing it with adversarial training with a fixed perturbation magnitude . 1.1 RELATED WORKS . Although not often explicitly stated , many defense methods are related to increasing the margin . One class uses regularization to constrain the model ’ s Lipschitz constant ( Cisse et al. , 2017 ; Ross & Doshi-Velez , 2017 ; Hein & Andriushchenko , 2017 ; Sokolic et al. , 2017 ; Tsuzuku et al. , 2018 ) ; thus samples with small loss would have large margin since the loss cannot increase too fast . If the Lipschitz constant is merely regularized at the data points , it is often too local and not accurate in a neighborhood . When globally enforced , the Lipschitz constraint on the model is often so strong that it harms accuracy . So far , such methods have not achieved strong robustness . There are also efforts using first-order approximation to estimate and maximize the input space margin ( Matyasko & Chau , 2017 ; Elsayed et al. , 2018 ; Yan et al. , 2019 ) . Similar to local Lipschitz regularization , the reliance on local information often does not provide accurate margin estimation and efficient maximization . Such approaches have also not achieved strong robustness so far . Croce et al . ( 2018 ) aim to enlarge the linear region around an input example , such that the nearest point to the input on the decision boundary is inside the linear region . Here , the margin can be calculated analytically and hence maximized . However , the analysis only works on ReLU networks , and the implementation so far only works on improving robustness under small perturbations . 
We defer some detailed discussions on related works to Appendix B , including a comparison between MMA training and SVM . 1.2 NOTATIONS AND DEFINITIONS . We focus on K-class classification problems . Denote $S = \{ ( x_i , y_i ) \}$ as the training set of input-label data pairs sampled from the data distribution $D$ . We consider the classifier as a score function $f_\theta ( x ) = ( f_\theta^1 ( x ) , \dots , f_\theta^K ( x ) )$ , parametrized by $\theta$ , which assigns score $f_\theta^i ( x )$ to the $i$-th class . The predicted label of $x$ is then decided by $\hat{y} = \arg\max_i f_\theta^i ( x )$ . Let $L_\theta^{01} ( x , y ) = \mathbb{I} ( \hat{y} \neq y )$ be the 0-1 loss indicating classification error , where $\mathbb{I} ( \cdot )$ is the indicator function . For an input $( x , y )$ , we define its margin w.r.t . the classifier $f_\theta ( \cdot )$ as $d_\theta ( x , y ) = \| \delta^* \| = \min \| \delta \|$ s.t . $L_\theta^{01} ( x + \delta , y ) = 1$ ( 1 ) , where $\delta^* = \arg\min_{\delta : L_\theta^{01} ( x + \delta , y ) = 1} \| \delta \|$ is the “ shortest successful perturbation ” . We give an equivalent definition of the margin with the “ logit margin loss ” $L_\theta^{LM} ( x , y ) = \max_{j \neq y} f_\theta^j ( x ) - f_\theta^y ( x )$ . The level set $\{ x : L_\theta^{LM} ( x , y ) = 0 \}$ corresponds to the decision boundary of class $y$ . Also , when $L_\theta^{LM} ( x , y ) < 0$ , the classification is correct , and when $L_\theta^{LM} ( x , y ) \geq 0$ , the classification is wrong . Therefore , we can define the margin in Eq . ( 1 ) in an equivalent way by $L_\theta^{LM} ( \cdot )$ as $d_\theta ( x , y ) = \| \delta^* \| = \min \| \delta \|$ s.t . $L_\theta^{LM} ( x + \delta , y ) \geq 0$ ( 2 ) , where $\delta^* = \arg\min_{\delta : L_\theta^{LM} ( x + \delta , y ) \geq 0} \| \delta \|$ is again the “ shortest successful perturbation ” . For the rest of the paper , we use the term “ margin ” to denote $d_\theta ( x , y )$ in Eq . ( 2 ) . For other notions of margin , we will use specific phrases , e.g . “ SLM-margin ” or “ logit margin . ” 2 MAX-MARGIN ADVERSARIAL TRAINING . We propose to improve adversarial robustness by maximizing the average margin of the data distribution $D$ , called Max-Margin Adversarial ( MMA ) training , by optimizing the following objective : $\min_\theta \big\{ \sum_{i \in S_\theta^+} \max \{ 0 , d_{\max} - d_\theta ( x_i , y_i ) \} + \beta \sum_{j \in S_\theta^-} J_\theta ( x_j , y_j ) \big\}$ ( 3 ) , where $S_\theta^+ = \{ i : L_\theta^{LM} ( x_i , y_i ) < 0 \}$ is the set of correctly classified examples , $S_\theta^- = \{ i : L_\theta^{LM} ( x_i , y_i ) \geq 0 \}$ is the set of wrongly classified examples , $J_\theta ( \cdot )$ is a regular classification loss function , e.g . the cross-entropy loss , $d_\theta ( x_i , y_i )$ is the margin for correctly classified samples , and $\beta$ is the coefficient for balancing correct classification and margin maximization . Note that the margin $d_\theta ( x_i , y_i )$ is inside a hinge loss with threshold $d_{\max}$ ( a hyperparameter ) , which forces the learning to focus on the margins that are smaller than $d_{\max}$ . Intuitively , MMA training simultaneously minimizes the classification loss on wrongly classified points in $S_\theta^-$ and maximizes the margins $d_\theta ( x_i , y_i )$ of correctly classified points until they reach $d_{\max}$ . Note that we do not maximize margins on wrongly classified examples . Minimizing the objective in Eq . ( 3 ) turns out to be a technical challenge . While $\nabla_\theta J_\theta ( x_j , y_j )$ can be easily computed by standard back-propagation , computing the gradient of $d_\theta ( x_i , y_i )$ needs some technical developments . In the next section , we show that margin maximization can still be achieved by minimizing a classification loss w.r.t . model parameters , at the “ shortest successful perturbation ” . For smooth functions , a stronger result exists : the gradient of the margin w.r.t . model parameters can be analytically calculated , as a scaled gradient of the loss . 
Such results make gradient descent viable for margin maximization , despite the fact that model parameters are entangled in the constraints .
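As an illustrative sketch of the quantities defined above, the following PyTorch-style code computes the logit margin loss, a crude per-example margin estimate obtained by scanning along a given perturbation direction, and the per-batch MMA objective of Eq. (3). This is only a reading of the formulas under simplifying assumptions: the paper searches for the shortest successful perturbation with a PGD-style attack, and the margin-gradient treatment of Section 2 is not reproduced here.

```python
import torch
import torch.nn.functional as F

def logit_margin_loss(logits: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """L^LM(x, y) = max_{j != y} f^j(x) - f^y(x); >= 0 means the example is misclassified."""
    true_score = logits.gather(1, y.unsqueeze(1)).squeeze(1)
    others = logits.clone()
    others.scatter_(1, y.unsqueeze(1), float("-inf"))  # exclude the true class
    return others.max(dim=1).values - true_score

def estimate_margin(model, x, y, direction, n_steps=50, max_norm=1.0):
    """Per-example margin estimate: smallest step along `direction` that flips the label.

    A simplified stand-in for the shortest-successful-perturbation search;
    `direction` has the same shape as x and unit norm per example.
    """
    margin = torch.full((x.size(0),), max_norm, device=x.device)
    found = torch.zeros(x.size(0), dtype=torch.bool, device=x.device)
    with torch.no_grad():
        for k in range(1, n_steps + 1):
            eps = max_norm * k / n_steps
            flipped = logit_margin_loss(model(x + eps * direction), y) >= 0
            margin[flipped & ~found] = eps
            found |= flipped
            if found.all():
                break
    return margin

def mma_objective(model, x, y, margins, d_max=2.0, beta=1.0):
    """Eq. (3): hinge on the margin for correct points, classification loss for wrong ones."""
    lm = logit_margin_loss(model(x), y)
    correct, wrong = lm < 0, lm >= 0
    margin_term = F.relu(d_max - margins[correct]).sum()
    ce_term = (F.cross_entropy(model(x[wrong]), y[wrong], reduction="sum")
               if wrong.any() else torch.zeros((), device=x.device))
    return margin_term + beta * ce_term
```

In the actual method the margin term is differentiated through the loss evaluated at the shortest successful perturbation, rather than through a detached estimate as in this sketch.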
This paper proposes an adaptive margin-based adversarial training approach (MMA) to train robust DNNs by maximizing the shortest margin of inputs to the decision boundary. Theoretical analyses have been provided to understand the connection between robust optimization and margin maximization. The main difference between the proposed approach and standard adversarial training is the adaptive selection of the perturbation bound \epsilon. This makes adversarial training with large perturbations possible, which was previously unachievable by standard adversarial training (Madry et al.). Empirical results match the theoretical analysis.
SP:75d17035de7c88ebb45e60795d3acd8f0e93b84b
MMA Training: Direct Input Space Margin Maximization through Adversarial Training
This paper proposes a method, Max-Margin Adversarial (MMA) training, for robust learning against adversarial attacks. In the MMA, the margin in the input space is directly maximized. In order to alleviate an instability of the learning, a softmax variant of the max-margin is introduced. Moreover, the margin-maximization and the minimization of the worst-case loss are studied. Some numerical experiments show that the proposed MMA training is efficient against several adversarial attacks.
SP:75d17035de7c88ebb45e60795d3acd8f0e93b84b
Dynamic Sparse Training: Find Efficient Sparse Network From Scratch With Trainable Masked Layers
1 INTRODUCTION . Despite the impressive success that deep neural networks have achieved in a wide range of challenging tasks , inference in deep neural networks is highly memory-intensive and computation-intensive due to their over-parameterization . Network pruning ( LeCun et al . ( 1990 ) ; Han et al . ( 2015 ) ; Molchanov et al . ( 2017 ) ) has been recognized as an effective approach to improving the inference efficiency in resource-limited scenarios . Traditional pruning methods consist of dense network training followed by pruning and fine-tuning iterations . To avoid the expensive pruning and fine-tuning iterations , many sparse training methods ( Mocanu et al. , 2018 ; Bellec et al. , 2017 ; Mostafa & Wang , 2019 ; Dettmers & Zettlemoyer , 2019 ) have been proposed , where the network pruning is conducted during the training process . However , all these methods suffer from the following three problems : Coarse-grained predefined pruning schedule . Most of the existing pruning methods use a predefined pruning schedule with many additional hyperparameters , like pruning a % of the parameters each time and then fine-tuning for b epochs , with c pruning steps in total . It is non-trivial to determine these hyperparameters for network architectures with various degrees of complexity . Therefore , usually a fixed pruning schedule is adopted for all the network architectures , which means that a very simple network architecture like LeNet-300-100 will have the same pruning schedule as a far more complex network like ResNet-152 . Besides , almost all the existing pruning methods conduct epoch-wise pruning , which means that the pruning is conducted between two epochs and no pruning operation happens inside each epoch . Failure to properly recover the pruned weights . Almost all the existing pruning methods conduct “ hard ” pruning that prunes weights by directly setting their values to 0 . Many works ( Guo et al. , 2016 ; Mocanu et al. , 2018 ; He et al. , 2018 ) have argued that the importance of network weights is not fixed and will change dynamically during the pruning and training process . Previously unimportant weights may become important . So the ability to recover the pruned weights is of high significance . However , directly setting the pruned weights to 0 results in the loss of historical parameter importance , which makes it difficult to determine : 1 ) whether and when each pruned weight should be recovered , and 2 ) what values should be assigned to the recovered weights . Therefore , existing methods that claim to be able to recover the pruned weights simply choose a predefined portion of pruned weights to recover , and these recovered weights are randomly initialized or initialized to the same value . Failure to properly determine layer-wise pruning rates . Modern neural network architectures usually contain dozens of layers with varying numbers of parameters . Therefore , the degree of parameter redundancy is very different among the layers . For simplicity , some methods prune the same percentage of parameters at each layer , which is not optimal . To obtain dynamic layer-wise pruning rates , a single global pruning threshold or layer-wise greedy algorithms are applied . With a single global pruning threshold , it is exceedingly difficult to assess the local parameter importance of an individual layer , since each layer has a significantly different number of parameters and contribution to the model performance . 
This makes pruning algorithms based on a single global threshold inconsistent and non-robust . The problem with layer-by-layer greedy pruning methods is that the unimportant neurons in an early layer may have a significant influence on the responses in later layers , which may result in propagation and amplification of the reconstruction error ( Yu et al. , 2018 ) . We propose a novel end-to-end sparse training algorithm that properly solves the above problems . With only one additional hyperparameter used to set the final model sparsity , our method can achieve dynamic fine-grained pruning and recovery during the whole training process . Meanwhile , the layer-wise pruning rates will be adjusted automatically with respect to the change of parameter importance during the training and pruning process . Our method achieves state-of-the-art performance compared with other sparse training algorithms . The proposed algorithm has the following promising properties : • Step-wise pruning and recovery . A training epoch usually has tens of thousands of training steps , where a step is the feed-forward and back-propagation pass for a single mini-batch . Instead of pruning between two training epochs with a predefined pruning schedule , our method prunes and recovers the network parameters at each training step , which is far more fine-grained than existing methods . • Neuron-wise or filter-wise trainable thresholds . All the existing methods adopt a single pruning threshold for each layer or the whole architecture . Our method defines a threshold vector for each layer . Therefore , our method adopts neuron-wise pruning thresholds for fully connected and recurrent layers and filter-wise pruning thresholds for convolutional layers . Additionally , all these pruning thresholds are trainable and will be updated automatically via back-propagation . • Dynamic pruning schedule . The training process of a deep neural network involves many hyperparameters . The learning rate is perhaps the most important hyperparameter . Usually , the learning rate will decay during the training process . Our method can automatically adjust the layer-wise pruning rates under different learning rates to get the optimal sparse network structure . • Consistent sparse pattern . Our algorithm can get a consistent layer-wise sparse pattern under different model sparsities , which indicates that our method can automatically determine the optimal layer-wise pruning rates given the target model sparsity . 2 RELATED WORK . Traditional Pruning Methods : LeCun et al . ( 1990 ) presented the early work about network pruning using second-order derivatives as the pruning criterion . The effective and popular training , pruning and fine-tuning pipeline was proposed by Han et al . ( 2015 ) , which used the parameter magnitude as the pruning criterion . Narang et al . ( 2017 ) extended this pipeline to prune recurrent neural networks with a complicated pruning strategy . Molchanov et al . ( 2016 ) introduced a first-order Taylor term as the pruning criterion and conducted global pruning . Li et al . ( 2016 ) used $\ell_1$ regularization to force the unimportant parameters to zero . Sparse Neural Network Training : Recently , some works attempt to find the sparse network directly during the training process without the pruning and fine-tuning stage . Inspired by the growth and extinction of neural cells in biological neural networks , Mocanu et al . 
( 2018 ) proposed a prune-regrowth procedure called Sparse Evolutionary Training ( SET ) that allows the pruned neurons and connections to revive randomly . However , the sparsity level needs to be set manually , and the random recovery of network connections may provoke unexpected effects on the network . DEEP-R , proposed by Bellec et al . ( 2017 ) , used Bayesian sampling to decide the pruning and regrowth configuration , which is computationally expensive . Dynamic Sparse Reparameterization ( Mostafa & Wang , 2019 ) used dynamic parameter reallocation to find the sparse structure . However , the pruning threshold can only get halved if the percentage of parameters pruned is too high , or get doubled if that percentage is too low for a certain layer . This coarse-grained adjustment of the pruning threshold significantly limits the ability of Dynamic Sparse Reparameterization . Additionally , a predefined pruning ratio and fractional tolerance are required . Dynamic Network Surgery ( Guo et al. , 2016 ) proposed a pruning and splicing procedure that can prune or recover network connections according to the parameter magnitude , but it requires manually determined thresholds that are fixed during the sparse learning process . These layer-wise thresholds are extremely hard to set manually . Meanwhile , fixing the thresholds makes it hard to adapt to the rapid change of parameter importance . Dettmers & Zettlemoyer ( 2019 ) proposed sparse momentum , which used the exponentially smoothed gradients as the criterion for pruning and regrowth . A fixed percentage of parameters are pruned at each pruning step . The pruning ratio and momentum scaling rate need to be searched over a relatively large parameter space . 3 DYNAMIC SPARSE TRAINING . 3.1 NOTATION . A deep neural network consists of a set of parameters $\{ W_i : 1 \le i \le C \}$ , where $W_i$ denotes the parameter matrix at layer $i$ and $C$ denotes the number of layers in the network . For each fully connected layer and recurrent layer , the corresponding parameter is $W_i \in \mathbb{R}^{c_o \times c_i}$ , where $c_o$ is the output dimension and $c_i$ is the input dimension . For each convolutional layer , there exists a convolution kernel $K_i \in \mathbb{R}^{c_o \times c_i \times w \times h}$ , where $c_o$ is the number of output channels , $c_i$ is the number of input channels , and $w$ and $h$ are the kernel sizes . Each filter in a convolution kernel $K_i$ can be flattened to a vector . Therefore , a corresponding parameter matrix $W_i \in \mathbb{R}^{c_o \times z}$ can be derived from each convolution kernel $K_i \in \mathbb{R}^{c_o \times c_i \times w \times h}$ , where $z = c_i \times w \times h$ . Actually , the pruning process is equivalent to finding a binary parameter mask $M_i$ for each parameter matrix $W_i$ . Thus , a set of binary parameter masks $\{ M_i : 1 \le i \le C \}$ will be found by network pruning . Each element of these parameter masks $M_i$ is either 1 or 0 . 3.2 THRESHOLD VECTOR AND DYNAMIC PARAMETER MASK . Pruning can be regarded as applying a binary mask $M$ to each parameter $W$ . This binary parameter mask $M$ preserves the information about the sparse structure . Given the parameter set $\{ W_1 , W_2 , \cdots , W_C \}$ , our method will dynamically find the corresponding parameter masks $\{ M_1 , M_2 , \cdots , M_C \}$ . To achieve this , for each parameter matrix $W \in \mathbb{R}^{c_o \times c_i}$ , a trainable pruning threshold vector $t \in \mathbb{R}^{c_o}$ is defined . Then we utilize a unit step function $S ( x )$ , as shown in Figure 2 ( a ) , to get the masks according to the magnitude of the parameters and the corresponding thresholds , as presented below . 
Q_ij = F(W_ij, t_i) = |W_ij| − t_i, for 1 ≤ i ≤ c_o, 1 ≤ j ≤ c_i (1), and M_ij = S(Q_ij), for 1 ≤ i ≤ c_o, 1 ≤ j ≤ c_i (2). With the dynamic parameter mask M, the corresponding element M_ij will be set to 0 if W_ij needs to be pruned. This means that the weight W_ij is masked out by the 0 at M_ij to obtain the sparse parameter W ⊙ M. The value of the underlying weight W_ij does not change, which preserves the historical information about the parameter importance. For a fully connected layer or recurrent layer with parameter W ∈ R^{c_o×c_i} and threshold vector t ∈ R^{c_o}, each weight W_ij has a neuron-wise threshold t_i, where W_ij is the j-th weight associated with the i-th output neuron. Similarly, the thresholds are filter-wise for convolutional layers. Besides, a threshold matrix or a single scalar threshold can also be chosen. More details are presented in Appendix A.2. 3.3 TRAINABLE MASKED LAYERS. With the threshold vector and dynamic parameter mask, the trainable masked fully connected, convolutional and recurrent layers are introduced as shown in Figure 1, where X is the input of the current layer and Y is the output. For fully connected and recurrent layers, instead of the dense parameter W, the sparse product W ⊙ M is used in the batched matrix multiplication, where ⊙ denotes the Hadamard product operator. For convolutional layers, each convolution kernel K ∈ R^{c_o×c_i×w×h} can be flattened to obtain W ∈ R^{c_o×z}. Therefore, the sparse kernel can be obtained by a process similar to that of fully connected layers. This sparse kernel is used for the subsequent convolution operation. In order to make the elements in the threshold vector t trainable via back-propagation, the derivative of the binary step function S(x) is required. However, its original derivative is an impulse function whose value is zero almost everywhere and infinite at zero, as shown in Figure 2(b). Thus the original derivative of the binary step function S(x) cannot be applied in back-propagation and parameter updating directly. Some previous works (Hubara et al. (2016); Rastegari et al. (2016); Zhou et al. (2016)) demonstrated that, by providing a derivative estimation, it is possible to train networks containing such binarization functions. A clip function called the straight-through estimator (STE) (Bengio et al., 2013) was employed in these works and is illustrated in Figure 2(c). Furthermore, Xu & Cheung (2019) discussed derivative estimation that balances tight approximation against smooth back-propagation. We adopt this long-tailed higher-order estimator H(x) in our method. As shown in Figure 2(d), it has a wide active range between [−1, 1] with a non-zero gradient to avoid gradient vanishing during training. On the other hand, the gradient value near zero is a piecewise polynomial function, giving a tighter approximation than STE. The estimator is d/dx S(x) ≈ H(x) = 2 − 4|x| for −0.4 ≤ x ≤ 0.4; H(x) = 0.4 for 0.4 < |x| ≤ 1; and H(x) = 0 otherwise (3). With this derivative estimator, the elements in the threshold vector t can be trained via back-propagation. Meanwhile, in trainable masked layers, the network parameter W receives two branches of gradient, namely the performance gradient for better model performance and the structure gradient for better sparse structure, which helps to properly update the network parameter under sparse network connectivity. The structure gradient enables the pruned (masked) weights to be updated via back-propagation.
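As a concrete illustration of Equations (1)-(3), a minimal PyTorch-style sketch of a trainable masked fully connected layer is given below. The class and variable names are ours and this is a simplified sketch under our own assumptions, not the authors' reference implementation; the forward pass applies the unit step S(x) to |W_ij| − t_i, while the backward pass substitutes the long-tailed estimator H(x).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinaryStep(torch.autograd.Function):
    """Unit step S(x) in the forward pass; long-tailed estimator H(x) in the backward pass."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return (x > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        x, = ctx.saved_tensors
        abs_x = x.abs()
        # H(x) = 2 - 4|x| on [-0.4, 0.4], 0.4 on (0.4, 1], 0 elsewhere (Eq. 3)
        h = torch.where(abs_x <= 0.4, 2.0 - 4.0 * abs_x,
                        torch.where(abs_x <= 1.0, torch.full_like(x, 0.4),
                                    torch.zeros_like(x)))
        return grad_output * h

class MaskedLinear(nn.Module):
    """Fully connected layer with a neuron-wise trainable pruning threshold vector t."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.threshold = nn.Parameter(torch.zeros(out_features))  # t in R^{c_o}

    def forward(self, x):
        # Q_ij = |W_ij| - t_i (Eq. 1),  M_ij = S(Q_ij) (Eq. 2)
        q = self.weight.abs() - self.threshold.unsqueeze(1)
        mask = BinaryStep.apply(q)
        # Sparse product W ⊙ M used in place of the dense weight.
        return F.linear(x, self.weight * mask, self.bias)
```

In this sketch the weight tensor receives gradient both through the masked product (the performance gradient) and through the mask itself (the structure gradient), while the threshold vector is trained purely through the mask path.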
The details about the feed-forward and back-propagation of the trainable masked layer are presented in Appendix A.3. Therefore, the pruned (masked) weights, the unpruned (unmasked) weights and the elements in the threshold vector can all be updated via back-propagation at each training step. The proposed method thus conducts fine-grained step-wise pruning and recovery automatically.
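Building on the sketch above, a training loop using such masked layers needs no separate pruning schedule: the masks are recomputed from the current weights and thresholds at every forward pass, so pruning and recovery happen step-wise as a side effect of ordinary optimization. The sparsity penalty below is a hypothetical placeholder that merely encourages larger thresholds; the paper's actual regularization is not reproduced here, and the data loader and input size are assumptions.

```python
# Reuses MaskedLinear and the imports from the previous sketch.
model = nn.Sequential(MaskedLinear(784, 300), nn.ReLU(), MaskedLinear(300, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for x, y in train_loader:  # train_loader is assumed to yield (image, label) mini-batches
    logits = model(x.view(x.size(0), -1))  # assuming flattened 28x28 inputs
    loss = F.cross_entropy(logits, y)
    # Hypothetical sparsity pressure: push thresholds up so more weights fall below them.
    reg = sum(torch.exp(-m.threshold).sum() for m in model if isinstance(m, MaskedLinear))
    optimizer.zero_grad()
    (loss + 1e-5 * reg).backward()  # masks, weights and thresholds all receive gradients
    optimizer.step()
```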
This paper presents a novel network pruning algorithm -- Dynamic Sparse Training. It aims at jointly finding the optimal network parameters and sparse network structure in a unified optimization process with trainable pruning thresholds. The experiments on MNIST and CIFAR-10 show that the proposed model can find sparse neural network models with little performance loss.
SP:24243429012ab70e9638a009b78e5a9a5b8d73be
Dynamic Sparse Training: Find Efficient Sparse Network From Scratch With Trainable Masked Layers
1 INTRODUCTION. Despite the impressive success that deep neural networks have achieved in a wide range of challenging tasks, inference in deep neural networks is highly memory-intensive and computation-intensive due to the over-parameterization of deep neural networks. Network pruning (LeCun et al. (1990); Han et al. (2015); Molchanov et al. (2017)) has been recognized as an effective approach to improving the inference efficiency in resource-limited scenarios. Traditional pruning methods consist of dense network training followed by pruning and fine-tuning iterations. To avoid the expensive pruning and fine-tuning iterations, many sparse training methods (Mocanu et al., 2018; Bellec et al., 2017; Mostafa & Wang, 2019; Dettmers & Zettlemoyer, 2019) have been proposed, where the network pruning is conducted during the training process. However, all these methods suffer from the following three problems: Coarse-grained predefined pruning schedule. Most of the existing pruning methods use a predefined pruning schedule with many additional hyperparameters, such as pruning a% of the parameters each time and then fine-tuning for b epochs, with c pruning steps in total. It is non-trivial to determine these hyperparameters for network architectures with various degrees of complexity. Therefore, usually a fixed pruning schedule is adopted for all network architectures, which means that a very simple network architecture like LeNet-300-100 will have the same pruning schedule as a far more complex network like ResNet-152. Besides, almost all the existing pruning methods conduct epoch-wise pruning, which means that pruning is conducted between two epochs and no pruning operation happens inside each epoch. Failure to properly recover the pruned weights. Almost all the existing pruning methods conduct "hard" pruning that prunes weights by directly setting their values to 0. Many works (Guo et al., 2016; Mocanu et al., 2018; He et al., 2018) have argued that the importance of network weights is not fixed and changes dynamically during the pruning and training process. Previously unimportant weights may later become important. So the ability to recover the pruned weights is of high significance. However, directly setting the pruned weights to 0 results in the loss of historical parameter importance, which makes it difficult to determine: 1) whether and when each pruned weight should be recovered, and 2) what values should be assigned to the recovered weights. Therefore, existing methods that claim to be able to recover the pruned weights simply choose a predefined portion of pruned weights to recover, and these recovered weights are randomly initialized or initialized to the same value. Failure to properly determine layer-wise pruning rates. Modern neural network architectures usually contain dozens of layers with varying numbers of parameters. Therefore, the degree of parameter redundancy is very different among the layers. For simplicity, some methods prune the same percentage of parameters at each layer, which is not optimal. To obtain dynamic layer-wise pruning rates, a single global pruning threshold or layer-wise greedy algorithms are applied. With a single global pruning threshold, it is exceedingly difficult to assess the local parameter importance of each individual layer, since each layer has a significantly different number of parameters and contribution to the model performance.
This paper proposes an algorithm for training networks with sparse parameter tensors. This involves achieving sparsity by application of a binary mask, where the mask is determined by current parameter values and a learned threshold. It also involves the addition of a specific regularizer which encourages the thresholds used for the mask to be large. Gradients with respect to both masked-out parameters, and with respect to mask thresholds, are computed using a "long tailed" variant of the straight-through-estimator.
SP:24243429012ab70e9638a009b78e5a9a5b8d73be
NORML: Nodal Optimization for Recurrent Meta-Learning
1 INTRODUCTION. Humans have a remarkable capability to learn useful concepts from a small number of examples or a limited amount of experience. In contrast, most machine learning methods require large, labelled datasets to learn effectively. Little is understood about the actual learning algorithm(s) used by the human brain, and how it relates to machine learning algorithms like backpropagation (Lillicrap & Körding (2019)). Botvinick et al. (2019) argue that inductive bias and structured priors are some of the main factors that enable fast learning in animals. In order to build general-purpose systems we must be able to design and build learning algorithms that can quickly and effectively learn from a limited amount of data by utilizing prior knowledge and experience. Supervised few-shot learning aims to challenge machine learning models to learn new tasks by leveraging only a handful of labelled examples. Vinyals et al. (2016) introduce the few-shot learning problem for image classification, where a model is tasked to classify a number of images while being provided either 1 or 5 examples of each class (hereafter referred to as 1-shot and 5-shot learning). One way to approach this problem is by way of meta-learning, a broad family of techniques that aim to learn how to learn (Thrun & Pratt (1998)). One particularly powerful group of approaches, known as memory-based methods, uses memory architectures that can leverage prior information to assist in learning (Santoro et al. (2016); Ravi & Larochelle (2017)). Optimization-based methods (Finn et al. (2017)) are another exciting area that aims to learn an initial set of parameters that can quickly adapt to new, unseen tasks with relatively little training data. This work introduces a novel technique where a recurrent neural network based meta-learner learns how to make parameter updates for a task learner. The meta-learner and the learner are jointly trained so as to learn how to learn new tasks with little data. This approach allows one to utilize aspects from both optimization-based and memory-based meta-learning methods. The recurrent architecture of the meta-learner can use important prior information when updating the learner, while the learner can learn a set of initial task parameters that are easily optimized for a new task. The vanishing gradient challenge faced by gradient-based optimization is addressed by using a Long Short-Term Memory (LSTM) based meta-learner (Hochreiter & Schmidhuber (1997)). Memory-based methods (Ravi & Larochelle (2017)) that use all of the learner's parameters as input to a meta-learner tend to break down when using a learner with a large number of parameters (Andrychowicz et al., 2016). The technique proposed in this work, Nodal Optimization for Recurrent Meta-Learning (NORML), solves this scaling problem by learning layer-wise update signals used to update the learner's parameters. NORML is evaluated on the Mini-ImageNet dataset and is shown to improve on existing optimization-based and memory-based methods. An ablation study is done showing the effects of the different components of NORML. Furthermore, a comparison is done between NORML and Model Agnostic Meta-Learning (MAML) using the Omniglot dataset. The comparison demonstrates that NORML makes better parameter updates than those made by gradient descent. 2 PRELIMINARIES. 2.1 BACKPROPAGATION. The backpropagation algorithm (Rumelhart et al.
, 1986) was developed in the 1980's and has since been the status quo for training neural networks. By recursively applying the chain rule, backpropagation sends gradient signals back through the stacked layers of a deep network. Each gradient signal at a particular layer is used to calculate the gradient of the weights connected to that layer. Consider a neural network with N hidden layers. Let the rows of matrix W_l denote the weights connecting the nodes in the layer below to a hidden node in layer l, and let the vector b_l denote the biases connected to each node in layer l. Then a_l = W_l h_{l−1} + b_l and h_l = f_l(a_l) (1), where f_l is the activation function used at layer l, a_l is the layer's pre-activation output, and h_l is referred to as the activations of layer l. Note that h_0 = x is the input to the network. The output of the network is given by ŷ = f_out(a_out). When choosing a softmax as the output activation and a cross-entropy loss function, the loss and the output layer's gradient are calculated as follows: L = −(1/n) Σ_n [ y_n log ŷ_n + (1 − y_n) log(1 − ŷ_n) ] (2) and δa_out = ∂L/∂a_out = ŷ − y (3), where L is the cross-entropy loss, n is the network's output size, and δa_out is the gradient at the pre-activation of the output layer. Note that δa_out is an n-dimensional vector in which each element describes the corresponding node's contribution to the loss. By propagating δa_out back through the network one obtains the gradients at each of the hidden layers' nodes: δa_{l−1} = ∂L/∂a_{l−1} = (W_l^T δa_l) ⊙ f′_{l−1}(a_{l−1}) (4), where f′_l(·) is the derivative of the activation function, ⊙ is an element-wise multiplication, and δa_l is the pre-activation gradient at layer l. In order to calculate the gradient with respect to the weights, δa_l is matrix multiplied with the activations of the previous layer: ∂L/∂W_l = δa_l h_{l−1}^T (5). 2.2 META-LEARNING PROBLEM DEFINITION. The N-way K-shot problem is defined using an episodic formulation as proposed in Vinyals et al. (2016). A task T_i is sampled from a task distribution p(T) and consists of an N-way classification problem. A meta-dataset is divided into a training meta-set S_tr, a validation meta-set S_val and a testing meta-set S_test, where the classes contained in each meta-set are disjoint (i.e. none of the classes in S_test is present in S_tr and vice versa). The task T_i consists of a training set D_tr and a validation set D_val. The training set D_tr contains K examples from each of T_i's N classes. The validation set D_val usually contains a larger set of examples from each class in order to give an estimate of the model's generalization performance on task T_i. Note that a task's validation set D_val (used to optimize the meta-training objective) should not be confused with the meta-validation set S_val, which is used for model selection. 2.3 MODEL AGNOSTIC META-LEARNING. Optimization-based meta-learning is an approach to meta-learning where an inner loop is used for fast adaptation to a new task, and an outer loop is used to optimize the inner loop's training steps. Model Agnostic Meta-Learning (MAML) is one such approach that contains all the key ingredients used in optimization-based meta-learning. The inner loop consists of training a model f_θ via gradient descent on a few-shot learning task T_i: θ′ = G(θ, D_tr) (6), where G is often implemented as a single step of gradient descent on the training set D_tr; θ′_i = θ − η∇_θ L^tr_{T_i}(f_θ).
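For concreteness, a minimal sketch of the MAML-style inner adaptation in Equation (6), together with the summed validation loss that drives the outer update in Equation (7) below, might look as follows. Here model_fn is an assumed functional model that maps a parameter list and an input batch to logits, and tasks are assumed to be pre-batched tensors; this is an illustrative sketch under our own naming, not a reference implementation.

```python
import torch
import torch.nn.functional as F

def inner_adapt(theta, x_tr, y_tr, model_fn, eta=0.01):
    """One MAML-style inner step: theta' = theta - eta * grad_theta L_tr(f_theta)  (Eq. 6)."""
    loss = F.cross_entropy(model_fn(theta, x_tr), y_tr)
    grads = torch.autograd.grad(loss, theta, create_graph=True)  # keep graph for the outer loop
    return [p - eta * g for p, g in zip(theta, grads)]

def meta_loss(theta, tasks, model_fn):
    """Sum of post-adaptation validation losses, used for the outer update (Eq. 7)."""
    total = 0.0
    for (x_tr, y_tr, x_val, y_val) in tasks:  # one tuple per sampled task T_i
        theta_i = inner_adapt(theta, x_tr, y_tr, model_fn)
        total = total + F.cross_entropy(model_fn(theta_i, x_val), y_val)
    return total
```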
It is often advantageous for G to consist of multiple sequential update steps. The validation set of T_i is then used to evaluate f_{θ′_i}, and the task-specific validation loss L^val_{T_i} is calculated. The outer loop consists of optimizing the base parameters θ using the sum of M different tasks' validation losses: θ ← θ − η∇_θ Σ_{T_i∼p(T)} L^val_{T_i}(f_{θ′_i}) (7), where M is referred to as the meta-batch size. This approach allows a model to learn a set of base parameters θ that can quickly adapt to a new, unseen task. After meta-training a model, the inner loop procedure can update θ and return task-specific parameters θ′_i using only a few examples of the new task. By differentiating through the inner loop, MAML learns to update the base parameters θ in such a way that the task-specific parameters θ′_i generalize to unseen examples of task T_i. In many cases it is preferred to have an inner loop that consists of multiple sequential updates. However, the inner loop's computational graph can become quite large if too many steps are taken. This often results in exploding and vanishing gradients, since the outer loop still needs to differentiate through the entire inner loop (Aravind Rajeswaran (2019); Antoniou et al. (2018)). This limits MAML to domains where a small number of update steps is sufficient for learning. The LSTM-based meta-learner proposed in this work allows gradients to flow effectively through a large number of update steps. NORML can therefore be applied to a wide array of domains. 3 MODEL. 3.1 TRAINING. NORML incorporates a neural network based learner with N layers and an LSTM-based meta-learner that is used to optimize the learner's inner loop parameter updates. Figure 1 depicts the process by which the meta-learner calculates the parameter updates for a single-layer learner. When multiple layers are optimized, the additional gradient signals and layer inputs are added to the meta-learner's input. The meta-learner outputs two update signals for every layer, so when a 4-layer network is used as the learner, the meta-learner outputs 8 update signals. The high-level operation is as follows (Algorithm 1). First the learner's task-specific loss L^tr_{T_i}(f_θ) and layer gradients δa_l are calculated via cross-entropy and backpropagation. The meta-learner takes δa_l, h_{l−1} and the loss L as input, and outputs the current cell state and the update signals ĥ_{l−1} and δ̂a_l: (δ̂a_l, ĥ_{l−1}, c_{l,t+1}) = m_Φ(δa_l, h_{l−1}, L, c_{l,t}) (8). The update signals are matrix multiplied to determine the parameter updates for layer l: ∆W_l = η δ̂a_l ĥ_{l−1}^T (9) and ∆b_l = η δ̂a_l (10). The learner's updated parameters θ′_{T_i} are then used on the next training example, and the whole process is repeated for U update steps. The inner loop terminates after calculating the task validation loss using f_{θ′}: L^val_{T_i} = CrossEntropy(y_val, f_{θ′}(x_val)) (11). The outer loop consists of optimizing both θ_0 and the meta-learner parameters Φ via gradient descent: θ ← θ − β∇_θ Σ_{T_i∼p(T)} L^val_{T_i}(f_{θ′_i}) (12) and Φ ← Φ − β∇_Φ Σ_{T_i∼p(T)} L^val_{T_i}(f_{θ′_i}) (13). 3.2 META-LEARNER. The meta-learner m is implemented using a modified LSTM cell as shown in Figure 2. The input to the meta-learner is normalized, flattened and concatenated; x_l is used to denote the concatenation of the loss, the layer gradients, and the activations of the previous layer. The meta-learner consists of a forget gate f_l, an input gate i_l and an update gate c̃_l.
The cell state at update step t is denoted c_{l,t} and is calculated as follows: f_{l,t} = σ(W_{l,f} x_{l,t} + b_{l,f}) (14), i_{l,t} = σ(W_{l,i} x_{l,t} + b_{l,i}) (15), c̃_{l,t} = tanh(W_{l,c̃} x_{l,t} + b_{l,c̃}) (16), and c_{l,t} = f_{l,t} ⊙ c_{l,t−1} + i_{l,t} ⊙ c̃_{l,t} (17). In order to determine the update signals, the cell state c_{l,t} is used as input to two separate fully connected layers, each followed by a sigmoid activation function and a pointwise multiplication with the original layer gradient and the previous layer's activations: δ̂a_l = δa_l ⊙ σ(W_{l,g} c_{l,t} + b_{l,g}) (18) and ĥ_l = h_l ⊙ σ(W_{l,h} c_{l,t} + b_{l,h}) (19). Algorithm 1 (Nodal Optimization for Rapid Meta-Learning). Require: learner f with parameters θ; meta-learner m with parameters Φ; p(T): distribution over tasks; η, β: step size hyperparameters. 1: randomly initialize θ, Φ; 2: while not converged do; 3: sample a batch of tasks T_i ∼ p(T); 4: for all T_i do; 5: let (D^tr_i, D^val_i) = T_i; 6: initialize θ′_0 = θ; 7: for t = 1, ..., len(D^tr_i) do; 8: compute the learner's training loss L^tr_{T_i}(f_{θ′_{t−1}}) and layer gradients ∂L^tr_{T_i}/∂a_l for all layers l; 9: compute the update signals: (ĥ_{l−1}, ĝ_l) = m_Φ(L^tr_{T_i}(f_{θ′_{t−1}}), ∂L^tr_{T_i}/∂a_l, h_{l−1}); 10: update task parameters: θ′_t ← θ′_{t−1} − η ĝ_l ĥ_{l−1}^T; 11: end for; 12: compute validation loss L^val_{T_i}(f_{θ′_t}); 13: end for; 14: update θ ← θ − β∇_θ Σ_{T_i∼p(T)} L^val_{T_i}(f_{θ′_t}); 15: update Φ ← Φ − β∇_Φ Σ_{T_i∼p(T)} L^val_{T_i}(f_{θ′_t}); 16: end while. The sigmoid activation and pointwise multiplication operations allow the meta-learner to scale the update signal received by each node in the network. This node-wise scaling lets the meta-learner control how the weights connected to a particular node will be changed at any given update step. By scaling the layer gradients, the meta-learner can dynamically control by how much each node's input should be changed at each update step. Likewise, by scaling the hidden layer activations, the meta-learner can control the change in weighting of each of the previous nodes' outputs. By controlling the magnitude of both the activations and the layer gradients, the meta-learner can learn a dynamically adaptive learning rate for each individual weight and bias of the learner. This can be achieved with a meta-learner that scales linearly in size relative to the size of the learner, i.e. increasing the number of learner parameters by a factor of p will result in the same factor increase in the number of meta-learner parameters.
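A simplified sketch of one such per-layer meta-learner cell, following Equations (14)-(19) and the nodal update of Equations (9)-(10), is given below. All names are ours, a separate cell per learner layer is assumed, and the concatenated input x_{l,t} is formed from the scalar loss, the layer gradient and the previous layer's activations; normalization and batching of these inputs are omitted, so treat this as an illustrative sketch only.

```python
import torch
import torch.nn as nn

class NodalMetaLearner(nn.Module):
    """Simplified per-layer meta-learner cell in the spirit of Eqs. (14)-(19).
    in_dim must equal 1 + grad_dim + act_dim (loss, layer gradient, previous activations)."""
    def __init__(self, in_dim, cell_dim, grad_dim, act_dim):
        super().__init__()
        self.forget = nn.Linear(in_dim, cell_dim)
        self.input_gate = nn.Linear(in_dim, cell_dim)
        self.update = nn.Linear(in_dim, cell_dim)
        self.gate_grad = nn.Linear(cell_dim, grad_dim)  # scaling for delta_a (Eq. 18)
        self.gate_act = nn.Linear(cell_dim, act_dim)    # scaling for h_{l-1} (Eq. 19)

    def forward(self, loss, delta_a, h_prev, cell):
        x = torch.cat([loss.view(1), delta_a, h_prev])        # x_{l,t}
        f = torch.sigmoid(self.forget(x))                      # Eq. 14
        i = torch.sigmoid(self.input_gate(x))                  # Eq. 15
        c_tilde = torch.tanh(self.update(x))                   # Eq. 16
        cell = f * cell + i * c_tilde                          # Eq. 17
        delta_a_hat = delta_a * torch.sigmoid(self.gate_grad(cell))  # Eq. 18
        h_hat = h_prev * torch.sigmoid(self.gate_act(cell))          # Eq. 19
        return delta_a_hat, h_hat, cell

# One inner-loop step for a single layer (Eqs. 9-10), with eta as the inner step size:
# delta_a_hat, h_hat, cell = meta(loss, delta_a, h_prev, cell)
# W_l = W_l - eta * torch.outer(delta_a_hat, h_hat)
# b_l = b_l - eta * delta_a_hat
```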
This submission proposes NORML, a meta-learning method that 1) learns initial parameters for a base model that lead to good few-shot learning performance and 2) uses a recurrent neural network (LSTM) to control the learning updates on a small support set for a given task. The method is derived specifically for fully connected neural networks, where the meta-learner produces gating factors on the normal gradients (one for each neuron that the parameter connects). The method is compared with various published few-shot learning methods on miniImageNet, and an ablation study and detailed comparison with MAML are presented on Omniglot.
SP:f66b8d030e2dfef6e9c4fc7b35abd996d957a3fc
NORML: Nodal Optimization for Recurrent Meta-Learning
This paper proposes a meta-learner that learns how to make parameter updates for a model on a new few-shot learning task. The proposed meta-learner is an LSTM that proposes, at each time-step, a point-wise multiplier for the gradient of the hidden units and for the hidden units of the learner neural network, which are then used to compute a gradient update for the hidden-layer weights of the learner network. By not directly producing a learning rate for the gradient, the meta-learner's parameters are only proportional to the square of the number of hidden units in the network rather than the square of the number of weights of the network. Experiments are performed on few-shot learning benchmarks. The first experiment is on Mini-ImageNet. The authors build upon the method of Sun et al., where they pre-train the network on the meta-training data and then do meta-training where the convolutional network weights are frozen and only the fully connected layer is updated on few-shot learning tasks using their meta-learner LSTM. The other experiment is on Omniglot 20-way classification, where they consider a network with only fully connected layers and show that their meta-learner LSTM performs better than MAML.
SP:f66b8d030e2dfef6e9c4fc7b35abd996d957a3fc
Learning Latent Representations for Inverse Dynamics using Generalized Experiences
1 INTRODUCTION. In reinforcement learning (RL), an agent optimizes its behaviour to maximize a specific reward function that encodes tasks such as moving forward or reaching a target. After training, the agent simply executes the learned policy from its initial state until termination. In practical settings in robotics, however, control policies are invoked at the lowest level of a larger system by higher-level components such as perception and planning units. In such systems, agents have to follow a dynamic sequence of intermediate waypoints, instead of following a single policy until the goal is achieved. A typical approach to achieving goal-directed motion using RL involves learning goal-conditioned policies or value functions (Schaul et al. (2015)). The key idea is to learn a function conditioned on a combination of the state and goal by sampling goals during the training process. However, this approach requires a large number of training samples, and does not leverage waypoints provided by efficient planning algorithms. Thus, it is desirable to learn models that can compute actions to transition effectively between waypoints. A popular class of such models is the Inverse Dynamics Model (IDM) (Christiano et al. (2016); Pathak et al. (2017)). IDMs typically map the current state (or a history of states and actions) and the goal state to the action. In this paper, we address the need for an efficient control module by learning a generalized IDM that can achieve goal-directed motion by leveraging data collected while training a state-of-the-art RL algorithm. We do not require full information of the goal state, or a history of previous states, to learn the IDM. We learn on a reduced goal space, such as the 3-D positions to which the agent must learn to navigate. Thus, given just the intermediate 3-D positions, or waypoints, our agent can navigate to the goal, without requiring any additional information about the intermediate states. The basic framework of the IDM is shown in Fig. 1. The unique aspect of our algorithm is that we eliminate the need to randomly sample goals during training. Instead, we exploit the known symmetries/equivalences of the system (as is common in many robotics settings) to guide the collection of generalized experiences during training. We propose a class of algorithms that utilize the property of equivalence between transitions modulo the difference in a fixed set of attributes. In the locomotion setting, the agent's transitions are symmetric under translations and rotations. We capture this symmetry by defining equivalence modulo orientation among experiences. We use this notion of equivalence to guide the training of latent representations shared by these experiences and provide them as input to the IDM to produce the desired actions, as shown in Fig. 4. A common challenge faced by agents trained using RL techniques is a lack of generalization capability. The standard way of training produces policies that work very well on the states encountered by the agent during training, but often fail on unseen states. Achieving good performance using IDMs requires both of these components: collecting generalized experiences, and learning these latent representations, as we demonstrate in Section 6. Our model exhibits high sample efficiency and superior performance, in comparison to other methods involving sampling goals during training.
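To make the notion of equivalence modulo orientation more concrete, the sketch below shows one simple way the positional part of two experiences can be related by a rotation about the vertical axis. The exact transformation, and the treatment of any orientation-dependent components of the state, are not specified in this excerpt, so the functions below are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def rotate_about_z(p, angle):
    """Rotate a 3-D position about the vertical (z) axis by `angle` radians."""
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return R @ p

def displacements_equivalent_mod_orientation(exp_a, exp_b, tol=1e-6):
    """Check whether the goal displacements (o' - o) of two experiences differ only by a
    rotation about the z axis, i.e. they have the same length and vertical component.
    exp = (s, o, o_next, a); only the positional part is examined here."""
    d_a = np.asarray(exp_a[2]) - np.asarray(exp_a[1])
    d_b = np.asarray(exp_b[2]) - np.asarray(exp_b[1])
    return (abs(np.linalg.norm(d_a) - np.linalg.norm(d_b)) < tol
            and abs(d_a[2] - d_b[2]) < tol)
```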
We demonstrate the effectiveness of our approach in the Mujoco Ant environment (Todorov et al. (2012)) in OpenAI Gym (Brockman et al. (2016)), and the Minitaur and Humanoid environments in PyBullet (Coumans & Bai (2016)). From a limited number of experiences collected during training under a single reward function of going in one direction, our generalized IDM succeeds at navigating to arbitrary goal positions in 3-D space. We measure performance by calculating the closest distance to the goal that an agent achieves. We perform ablation experiments to show that (1) collecting generalized experience in the form of equivalent input pairs boosts performance over all baselines, (2) these equivalent input pairs can be condensed into a latent representation that encodes relevant information, and (3) learning this latent representation is in fact critical to the success of our algorithm. Details of the experiments and analysis of the results can be found in Sections 5 and 6. 2 RELATED WORK. Several recent works learn policies and value functions that are conditioned over not just the state space, but also the goal space (Andrychowicz et al. (2017); Schaul et al. (2015); Kulkarni et al. (2016)), and then generalize those functions to unseen goals. Goal-conditioned value functions are also widely used in hierarchical reinforcement learning algorithms (Kulkarni et al. (2016)), where the higher-level module learns over intrinsic goals and the lower-level control modules learn sub-policies to reach those goals, or the lower-level control modules can efficiently execute goals proposed by the higher-level policy (Nachum et al. (2018)). Ghosh et al. (2018) use trained goal-conditioned policies to learn actionable latent representations that extract relevant information from the state, and use these pretrained representations to train the agent to excel at other tasks. Pong et al. (2018) learn goal-conditioned value functions, and use them in a model-based control setting. IDMs are functions that typically map the current state of the agent and the goal state that the agent aims to achieve to the desired action. They have been used in a wide variety of contexts in the existing literature. Christiano et al. (2016) train IDMs using a history of states and actions and full goal state information for transferring models trained in simulation to real robots. Pathak et al. (2017) and Agrawal et al. (2016) use IDMs in combination with Forward Dynamics Models (FDMs) to predict actions from compressed representations of high-dimensional inputs like RGB images generated by the FDM. Specifically, Pathak et al. (2017) use IDMs to provide a curiosity-based reward signal in the general RL framework to encourage exploration; Agrawal et al. (2016) use IDMs to provide supervision for the learning of visual features relevant to the task assigned to the robot. We circumvent the need to learn goal-conditioned policies or value functions by combining IDMs with known symmetric properties of the robot. We train an IDM conditioned on the state space and a reduced goal space, using data collected while training any state-of-the-art RL algorithm. Our data collection is unique in that we exploit equivalences in experiences observed during training and learn a latent representation space shared between such equivalent experiences.
Our IDM produces the action given this latent representation as an input, leading to generalization over parts of the state and goal spaces unobserved during training. 3 PRELIMINARIES. In the general RL framework (Sutton et al. (1998)), a learning agent interacts with an environment modeled as a Markov Decision Process consisting of: 1) a state space S, 2) an action space A, 3) a probability distribution P : S × S × A → [0, 1], where P(s′|s, a) is the probability of transitioning into state s′ by taking action a in state s, 4) a reward function R : S × A × S → R that gives the reward for this transition, and 5) a discount factor Γ. The agent learns a policy π_θ : S → A, parameterized by θ, while trying to maximize the discounted expected return J(θ) = E_{s_0, a_0, ...}[ Σ_{t=0}^{∞} Γ^t R(s_t, a_t, s_{t+1}) ]. Goal-conditioned RL optimizes for learning a policy that maximizes the return under a goal-specific reward function R_g. On-policy RL algorithms, such as Policy Gradient methods (Williams (1992); Mnih et al. (2016)) and Trust Region methods (Schulman et al. (2015); Wu et al. (2017)), use deep neural networks to estimate policy gradients, or maximize a surrogate objective function subject to certain constraints. Off-policy RL algorithms (Lillicrap et al. (2016); Haarnoja et al. (2018)) incorporate elements of deep Q-learning (Mnih et al. (2013)) into the actor-critic formulation. Hindsight Experience Replay (HER) (Andrychowicz et al. (2017)) is a popular technique used in conjunction with an off-policy RL algorithm to learn policies in a sample-efficient way from sparse rewards in goal-based environments. 4 LEARNING GENERALIZED INVERSE DYNAMICS. Our method leverages samples collected while training a state-of-the-art RL algorithm to train an IDM that maps the current state and desired goal position to the action required to reach the goal. There are four major components involved in this process: 1) collecting data while training the RL algorithm, 2) learning a basic IDM that maps the current state and the desired goal to the required action, 3) collecting experiences that are equivalent to those observed, and using them to train the IDM, and 4) learning a latent representation that generalizes this model to unseen parts of the state space by utilizing the equivalent experiences collected in step 3. We elaborate on each of these in the following sections. 4.1 INITIAL TRAINING AND COLLECTING EXPERIENCE. Our goal in this step is to collect data for our IDM in the process of learning a policy under a single reward function. Recall the motivation for learning IDMs: we want a model that can take in the current state of the agent and the next command, in the form of a location in space that the agent should travel to. Thus, we collect state, action, and position data from the transitions observed during the training process. We emphasize the difference between the state space S and the goal space O. The state space S is high-dimensional, consisting of information related to joint angles, velocities, torques, etc. The goal space O is low-dimensional, consisting, in this case, of the 3-D coordinates of the goal position that the agent is supposed to navigate to. Definition 1 (Experiences).
We define experiences as tuples ( s , o , o′ , a ) , where s is the current state of the agent , o is its current 3-D position , and a is the action that the agent performed to move from the state s and position o to the next position , or intermediate goal , o′ . We write $E_{\tau} = \{ (s, o, o', a)_i \}_{i=1, \ldots, T}$ to denote all the experience tuples collected from a trajectory τ of length T . 4.2 LEARNING THE INVERSE DYNAMICS MODEL . Given a set of experiences E , we can train the IDM using supervised learning techniques . Definition 2 ( Inverse Dynamics Model ) . We define the Inverse Dynamics Model ( IDM ) as $\phi : S \times O \times O \rightarrow A , \quad \phi(s, o, o') \mapsto a$ , ( 1 ) where s , o , o′ and a represent the current state , current position , desired goal , and action respectively . The IDM can reproduce seen actions on state and observation tuples that have appeared in the training data . However , it cannot generalize good behaviour to states and observations that have not appeared in the initial training ( see Fig . 6 for qualitative evidence ) . Our aim in the next steps is to generalize the observed experiences so that they can be used over previously unseen inputs from the S × O × O space .
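To make Definition 2 concrete, the sketch below shows one plausible way to fit an IDM by supervised regression on experience tuples ( s , o , o′ , a ). It is only an illustration, not the authors' code: the network sizes, the input/action dimensions, and the use of a mean-squared-error loss are assumptions.

```python
# Minimal sketch (not the authors' code): an IDM phi : S x O x O -> A trained
# by supervised regression on experience tuples (s, o, o', a).
# STATE_DIM, GOAL_DIM, ACTION_DIM and the hidden sizes are illustrative assumptions.
import torch
import torch.nn as nn

STATE_DIM, GOAL_DIM, ACTION_DIM = 111, 3, 8  # assumed Ant-like observation/action sizes

class InverseDynamicsModel(nn.Module):
    def __init__(self):
        super().__init__()
        # phi : S x O x O -> A, implemented as a small MLP on the concatenated input
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + 2 * GOAL_DIM, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, ACTION_DIM), nn.Tanh(),  # actions assumed to lie in [-1, 1]
        )

    def forward(self, s, o, o_next):
        return self.net(torch.cat([s, o, o_next], dim=-1))

def train_idm(idm, experiences, epochs=10, lr=1e-3):
    """experiences: tensors (s, o, o_next, a) collected while training the RL policy."""
    s, o, o_next, a = experiences
    opt = torch.optim.Adam(idm.parameters(), lr=lr)
    for _ in range(epochs):
        pred = idm(s, o, o_next)
        loss = nn.functional.mse_loss(pred, a)  # behaviour-cloning-style regression target
        opt.zero_grad()
        loss.backward()
        opt.step()
    return idm
```

As the text notes, a model trained this way only reproduces behaviour on inputs close to the training distribution; the generalized experiences and latent representation introduced next are what extend it beyond that.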
This paper proposes a method to learn locomotion and navigation to a goal location or through a set of waypoints for simulated legged robots. The contributions of this paper include 1) generalized experience, which is a data-augmentation technique to add more orientation-invariant experience, and 2) a latent representation to encode the state, the current location and the goal location. The paper compares the proposed method with a few baselines and demonstrates better performance.
Learning Latent Representations for Inverse Dynamics using Generalized Experiences
1 INTRODUCTION . In reinforcement learning ( RL ) , an agent optimizes its behaviour to maximize a specific reward function that encodes tasks such as moving forward or reaching a target . After training , the agent simply executes the learned policy from its initial state until termination . In practical settings in robotics , however , control policies are invoked at the lowest level of a larger system by higher-level components such as perception and planning units . In such systems , agents have to follow a dynamic sequence of intermediate waypoints , instead of following a single policy until the goal is achieved . A typical approach to achieving goal-directed motion using RL involves learning goal-conditioned policies or value functions ( Schaul et al . ( 2015 ) ) . The key idea is to learn a function conditioned on a combination of the state and goal by sampling goals during the training process . However , this approach requires a large number of training samples , and does not leverage waypoints provided by efficient planning algorithms . Thus , it is desirable to learn models that can compute actions to transition effectively between waypoints . A popular class of such models is the Inverse Dynamics Model ( IDM ) ( Christiano et al . ( 2016 ) ; Pathak et al . ( 2017 ) ) . IDMs typically map the current state ( or a history of states and actions ) and the goal state to the action . In this paper , we address the need for an efficient control module by learning a generalized IDM that can achieve goal-directed motion by leveraging data collected while training a state-of-the-art RL algorithm . We do not require full information about the goal state , or a history of previous states , to learn the IDM . We learn on a reduced goal space , such as the 3-D positions to which the agent must learn to navigate . Thus , given just the intermediate 3-D positions , or waypoints , our agent can navigate to the goal , without requiring any additional information about the intermediate states . The basic framework of the IDM is shown in Fig . 1 . The unique aspect of our algorithm is that we eliminate the need to randomly sample goals during training . Instead , we exploit the known symmetries/equivalences of the system ( as is common in many robotics settings ) to guide the collection of generalized experiences during training . We propose a class of algorithms that utilize the property of equivalence between transitions modulo the difference in a fixed set of attributes . In the locomotion setting , the agent's transitions are symmetric under translations and rotations . We capture this symmetry by defining equivalence modulo orientation among experiences . We use this notion of equivalence to guide the training of latent representations shared by these experiences and provide them as input to the IDM to produce the desired actions , as shown in Fig . 4 . A common challenge faced by agents trained using RL techniques is the lack of generalization capability . The standard way of training produces policies that work very well on the states encountered by the agent during training , but often fail on unseen states . Achieving good performance using IDMs requires both of these components : collecting generalized experiences , and learning these latent representations , as we demonstrate in Section 6 . Our model exhibits high sample efficiency and superior performance , in comparison to other methods involving sampling goals during training .
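The idea of equivalence modulo orientation can be illustrated with a short sketch. This is only one plausible way to generate orientation-equivalent experiences (rotating the waypoint displacement about the vertical axis); the exact transformation used by the authors is not specified here, and the assumption that the state and action are expressed in the agent's body frame is ours.

```python
# Illustrative sketch (an assumption, not the authors' exact procedure) of
# "equivalence modulo orientation": a locomotion transition is treated as
# equivalent under translation of the start position and rotation about the
# vertical (z) axis, so one recorded experience yields many equivalent ones.
import numpy as np

def rotate_z(p, theta):
    """Rotate a 3-D position p about the vertical axis by angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return R @ p

def equivalent_experiences(s, o, o_next, a, n_rotations=8):
    """Generate experiences equivalent to (s, o, o', a) modulo orientation.

    Only the displacement o' - o is rotated; s and a are kept as-is, which is
    valid only if they are expressed in the agent's body frame (an assumption
    made for this sketch).
    """
    delta = np.asarray(o_next) - np.asarray(o)
    out = []
    for theta in np.linspace(0.0, 2.0 * np.pi, n_rotations, endpoint=False):
        out.append((s, np.zeros(3), rotate_z(delta, theta), a))  # start translated to the origin
    return out
```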
The paper proposes a method for exploiting structure in locomotion tasks to efficiently learn low-level control policies that pass through waypoints while achieving some goal (typically a 3-D Cartesian position). This is in contrast to goal-conditioned RL policies, which sample random goals during training (and are thus sample-inefficient) and are trained to execute one policy at a time. In particular, the paper proposes the notion of generalized experiences, where new trajectories are generated from existing trajectories in such a way that they are equivalent to each other (in this case, translation- and orientation-invariant) with respect to actions.
Learning to Learn Kernels with Variational Random Features
1 INTRODUCTION . Humans have the instinct to effortlessly learn new concepts from a few examples and show great generalization ability to new samples . However , existing machine learning models , e.g. , deep neural networks ( DNNs ) ( Krizhevsky et al. , 2012 ; He et al. , 2016a ) , rely highly on large-scale annotated training data ( Deng et al. , 2009 ) to achieve satisfactory performance . The huge gap between human intelligence and DNNs motivates us to try and progress the task of learning from a few samples , a.k.a . few-shot learning ( Fei-Fei et al. , 2006 ; Lake et al. , 2015 ; Ravi & Larochelle , 2017 ) . Learning to learn , or meta-learning ( Schmidhuber , 1992 ) , has recently received great interests in the machine learning community and offers a promising tool for few-shot learning ( Andrychowicz et al. , 2016 ; Ravi & Larochelle , 2017 ; Finn et al. , 2017 ) . Generally speaking , a meta-learner ( Ravi & Larochelle , 2017 ; Bertinetto et al. , 2019 ) is trained to improve the performance of a base-learner on individual tasks , which is also fast adapted to solve new tasks . The crux of meta-learning for few-shot learning is to explore the common knowledge , such as a good parameter initialization ( Finn et al. , 2017 ) or efficient optimization update rule ( Andrychowicz et al. , 2016 ; Ravi & Larochelle , 2017 ) , shared across different tasks . The knowledge is accumulated and distilled throughout the learning stage , making the model adaptable to new but related tasks ( Finn et al. , 2017 ) . Kernel approximation by random Fourier features ( RFFs ) ( Rahimi & Recht , 2007 ) is an effective technique for efficient kernel learning ( Gärtner et al. , 2002 ) , which has recently become increasingly popular ( Sinha & Duchi , 2016 ; Carratino et al. , 2018 ) . It resorts to the Fourier transform of shift-invariant kernels and constructs explicit feature maps using the Monte Carlo approximation of the Fourier representation . The desired kernel function is approximated by the inner products between these random features . Though demonstrating great potential as a strong base learner , kernel approximation with random features has not yet been fully explored in the meta-learning scenario for few-shot learning . It has already been shown that the classification performance of the kernel with random features does not correlate well with the accurate approximation of kernels . Learning adaptive kernels with random features , for instance , by data-driven sampling strategies ( Sinha & Duchi , 2016 ) , can improve the performance with a low sampling rate compared to using universal random features ( Avron et al. , 2016 ; Chang et al. , 2017 ) . However , since only a few samples are available in each task , it is challenging to learn adaptive kernels with data-driven random features while maintaining high representational capacity for few-shot learning tasks . To obtain powerful kernels for few-shot learning tasks , we need to fully explore the relationship among diverse tasks and capture their shared knowledge to generate informative random features . In this work , we propose meta variational random features ( MetaVRF ) to approximate kernels in a data-driven manner for few-shot learning , which integrates variational inference and kernels in the meta-learning framework . 
Learning kernels with random Fourier features for few-shot learning allows us to leverage the universal approximation property of kernels to capture shared knowledge in related tasks , and meanwhile it enables us to learn adaptive basis functions to quickly and efficiently adapt to new tasks . Learning adaptive kernels with data-driven random features can be naturally cast into variational inference that approximates probability density through optimization , where the posterior over the random basis function is the spectral distribution of a translation-invariant kernel . The inference of the posterior is conducted in the context of tasks to explore their dependency for capturing shared knowledge . We adopt a long short-term memory ( LSTM ) based inference network ( Hochreiter & Schmidhuber , 1997 ) , which establishes task context inference to capture the task dependency . Specifically , during the inference , the cell state in the LSTM carries and accumulates the shared knowledge , which is updated for each task throughout the course of learning . The remember and forget operations in the LSTM use new information to episodically refine the cell state by gaining experience from a batch of tasks , which can eventually produce random features with high representational capability for all tasks . For an individual task , the task-specific information is first extracted from the support set , and then combined with the shared knowledge in the shared cell state as the joint condition to infer the adaptive spectral distribution of the kernels . As a result , the task context inference can not only learn to extract and maintain the shared knowledge across tasks , but also leverage the task-specific knowledge to obtain a kernel adapted to the current task . The inference framework of our MetaVRF is illustrated in Figure 1 . Extensive experiments on a variety of few-shot learning problems such as regression and classification demonstrate that our MetaVRF method achieves competitive or even better performance when compared to state-of-the-art algorithms . Due to the advantages of kernels , our MetaVRF can be applied to test settings with different numbers of ways and shots from those of the training setting , in which the promising results again validate the effectiveness of our MetaVRF for few-shot learning . 2 PROBLEM STATEMENT . In this section , we describe the setup of meta-learning for few-shot learning and introduce kernel ridge regression as the base-learner , where kernels are approximated by random Fourier features . 2.1 META-LEARNING WITH KERNELS . We adopt the episodic training strategy ( Ravi & Larochelle , 2017 ) commonly used for few-shot classification in meta-learning , which usually involves the meta-training and meta-test stages . In the meta-training stage , a meta-learner is trained to enhance the performance of a base-learner on a meta-training set with a batch of few-shot learning tasks , where a task is usually referred to as an episode ( Ravi & Larochelle , 2017 ) . In the meta-test stage , the base-learner is evaluated on a meta-test set whose classes are different from those in the meta-training set . For the few-shot classification problem , we sample C-way k-shot classification tasks from the meta-training set , where k is the number of labelled examples for each of the C classes .
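For readers unfamiliar with episodic training, the following minimal sketch shows how one C-way k-shot episode could be drawn from a meta-training set. The dictionary layout of the dataset, the query size m, and the function name are illustrative assumptions, not part of the paper.

```python
# Minimal sketch (illustrative only) of sampling one C-way k-shot episode from
# a meta-training set organised as {class_label: [examples]}.
import random

def sample_episode(dataset, C=5, k=1, m=15):
    classes = random.sample(list(dataset.keys()), C)
    support, query = [], []
    for label, cls in enumerate(classes):
        examples = random.sample(dataset[cls], k + m // C)
        support += [(x, label) for x in examples[:k]]   # k labelled shots per class
        query += [(x, label) for x in examples[k:]]     # held-out examples used in Eq. (1)
    return support, query
```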
Given the t-th task with a support set $S_t = \{ (x_i, y_i) \}_{i=1}^{C \times k}$ and query set $Q_t = \{ (\tilde{x}_i, \tilde{y}_i) \}_{i=1}^{m}$ ( $S_t, Q_t \subseteq \mathcal{X}$ ) , we learn the parameters αt of the predictor fαt using a standard learning algorithm with the kernel trick , $\alpha_t = \Lambda(\Phi(X), Y)$ , where $S_t = \{ X, Y \}$ . Here , Λ is the base-learner and Φ is a mapping function from X to a dot product space H . The similarity measure $k(x, x') = \langle \Phi(x), \Phi(x') \rangle$ is usually called a kernel ( Hofmann et al. , 2008 ) . In traditional supervised learning , the base-learner for the t-th single task usually uses a universal kernel to map the input onto a dot product space for efficient learning . Once the base-learner is trained on the support set , its performance is evaluated on the query set by the following loss function $\sum_{(\tilde{x}, \tilde{y}) \in Q_t} \mathcal{L}( f_{\alpha_t}(\Phi(\tilde{x})), \tilde{y} )$ , ( 1 ) where L ( · ) can be any differentiable function , e.g. , the cross-entropy loss . In the meta-learning setting for few-shot learning , we usually consider a batch of tasks . Thus , the meta-learner is trained by optimizing the following objective function w.r.t . the empirical loss on T tasks : $\sum_{t} \sum_{(\tilde{x}, \tilde{y}) \in Q_t} \mathcal{L}( f_{\alpha_t}(\Phi_t(\tilde{x})), \tilde{y} )$ , with $\alpha_t = \Lambda(\Phi_t(X), Y)$ , ( 2 ) where Φt is the feature mapping function , which can be obtained by learning a task-specific kernel kt for each task t with data-driven random Fourier features . In this work , we employ kernel ridge regression ( KRR ) , which has an efficient closed-form solution , as the base-learner Λ for few-shot learning . The kernel value in the Gram matrix $K \in \mathbb{R}^{Ck \times Ck}$ can be computed as $k(x, x') = \Phi(x) \Phi(x')^{\top}$ , where $\top$ denotes the transpose operation . The base-learner Λ for a single task can be obtained by solving the following objective w.r.t . the support set of this task : $\Lambda = \arg\min_{\alpha} \operatorname{Tr}\left[ (Y - \alpha K)(Y - \alpha K)^{\top} \right] + \lambda \alpha K \alpha^{\top}$ . ( 3 ) This produces the closed-form solution $\alpha = (\lambda I + K)^{-1} Y$ . The learned predictor is then applied to the query set $\tilde{X}$ for prediction : $\hat{Y} = f_{\alpha}(\tilde{X}) = \alpha \tilde{K}$ , ( 4 ) where $\tilde{K} = \Phi(X) \Phi(\tilde{X})^{\top} \in \mathbb{R}^{Ck \times m}$ , with each element being the kernel value $k(x, \tilde{x})$ between samples from the support and query sets . Note that we also treat λ in Eq . ( 3 ) as a trainable parameter by leveraging the meta-learning setting , and all these parameters are learned by the meta-learner . In order to obtain task-specific kernels , we propose to learn kernels with random Fourier features , which not only allows us to obtain task-adaptive kernels but also enables us to capture shared knowledge of different tasks by exploring their dependency . 2.2 RANDOM FOURIER FEATURES . Random Fourier features ( RFFs ) were proposed to construct explicit random feature maps using the Monte Carlo approximation of the Fourier representation ( Rahimi & Recht , 2007 ) , which is derived from Bochner's theorem ( Rudin , 1962 ) . Theorem 1 ( Bochner's theorem ) ( Rudin , 1962 ) A continuous , real-valued , symmetric and shift-invariant function $k(x, x') = k(x - x')$ on $\mathbb{R}^d$ is a positive definite kernel if and only if it is the Fourier transform of a positive finite measure $p(\omega)$ such that $k(x, x') = \int_{\mathbb{R}^d} e^{i \omega^{\top}(x - x')} \, dp(\omega) = \mathbb{E}_{\omega}\left[ \zeta_{\omega}(x) \zeta_{\omega}(x')^{*} \right]$ , where $\zeta_{\omega}(x) = e^{i \omega^{\top} x}$ . ( 5 ) It is guaranteed that $\zeta_{\omega}(x) \zeta_{\omega}(x')^{*}$ is an unbiased estimate of $k(x, x')$ with sufficient RFF bases { ω } drawn from p ( ω ) ( Rahimi & Recht , 2007 ) . For a predefined kernel , e.g.
, the radial basis function ( RBF ) , we sample from its spectral distribution using the Monte Carlo method , and obtain the explicit feature map : $z(x) = \frac{1}{\sqrt{D}} \left[ \cos(\omega_1^{\top} x + b_1), \cdots, \cos(\omega_D^{\top} x + b_D) \right]$ , ( 6 ) where $\{ \omega_1, \cdots, \omega_D \}$ are the random bases sampled from p ( ω ) , and $[ b_1, \cdots, b_D ]$ are D biases sampled from a uniform distribution with a range of [ 0 , 2π ] . Finally , the kernel value $k(x, x') = z(x) z(x')^{\top}$ in K is computed as the dot product of the random feature maps with the same bases . Learning adaptive kernels with data-driven random Fourier features essentially amounts to finding the posterior distribution over the bases , i.e. , the task-specific spectral distribution of the kernel . In the following section , we introduce our meta variational random features ( MetaVRF ) , in which random Fourier bases are treated as latent variables inferred from the support set in the meta-learning setting .
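To make Eqs. (3)-(6) concrete, the sketch below shows random Fourier features and the closed-form KRR base-learner on a toy episode. It is not the authors' code: the bases are drawn from a fixed Gaussian p(ω) (in MetaVRF they would instead come from the learned, task-conditioned posterior), and the dimensions, one-hot labels, and the transpose used for shape bookkeeping in the prediction step are assumptions.

```python
# Minimal sketch (not the authors' code) of Eqs. (3)-(6): random Fourier
# features for an RBF-like kernel and the closed-form KRR base-learner.
import numpy as np

def rff_features(X, omegas, biases):
    """Eq. (6): z(x) = (1/sqrt(D)) * [cos(omega_d^T x + b_d)]_{d=1..D}."""
    D = omegas.shape[1]
    return np.sqrt(1.0 / D) * np.cos(X @ omegas + biases)

def krr_fit(Z_support, Y_support, lam=0.1):
    """Eq. (3): alpha = (lam*I + K)^{-1} Y with K = Z Z^T on the support set."""
    K = Z_support @ Z_support.T
    return np.linalg.solve(lam * np.eye(K.shape[0]) + K, Y_support)

def krr_predict(alpha, Z_support, Z_query):
    """Eq. (4): predictions from alpha and K_tilde = Z_support Z_query^T.
    The transpose on alpha is only shape bookkeeping for one-hot labels."""
    K_tilde = Z_support @ Z_query.T
    return alpha.T @ K_tilde

# Toy 5-way 1-shot episode with d-dimensional embeddings (sizes are assumptions).
d, D, C, k, m = 64, 256, 5, 1, 15
rng = np.random.default_rng(0)
omegas = rng.normal(size=(d, D))                 # bases ~ p(omega) for a Gaussian kernel
biases = rng.uniform(0.0, 2 * np.pi, size=D)
X_s, X_q = rng.normal(size=(C * k, d)), rng.normal(size=(m, d))
Y_s = np.eye(C)[np.repeat(np.arange(C), k)]      # one-hot support labels
alpha = krr_fit(rff_features(X_s, omegas, biases), Y_s)
Y_hat = krr_predict(alpha, rff_features(X_s, omegas, biases), rff_features(X_q, omegas, biases))
pred_classes = Y_hat.argmax(axis=0)              # per-query predicted class
```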
This paper proposes a meta-learning framework for learning adaptive kernels using a meta-learner. To represent the kernels, the paper learns a variational posterior over the kernel features by maximizing an evidence lower bound (ELBO). Furthermore, to plug the kernel learning into the meta-learning framework, they condition the variational feature posterior on the current support set for adaptation and use a modified LSTM network to accumulate information. Empirically, they compare the proposed MetaVRF with multiple baselines on standard few-shot classification benchmarks and demonstrate superior performance. They also illustrate that their adaptively learned Fourier features outperform the standard variational Fourier features.
This paper studies the meta-learning problem in few-shot learning settings. The authors propose to learn each task's predictive function in the form of random Fourier features, where the kernel is jointly learned from all tasks. The novel part is the parametrization of the inference network using an LSTM, such that the random feature samples of the t-th task are conditioned on all previous tasks 1, ..., t−1, which is an interesting way of modeling the kernel spectral distribution. The experimental results show improvements of the proposed method compared to state-of-the-art meta-learning algorithms.
Simple is Better: Training an End-to-end Contract Bridge Bidding Agent without Human Knowledge
1 INTRODUCTION . Games have long been recognized as a testbed for reinforcement learning . Recent technological advances have produced agents that outperform top-level experts in perfect information games like Chess ( Campbell et al. , 2002 ) and Go ( Silver et al. , 2016 ; 2017 ) , through human supervision and selfplay . During recent years researchers have also steered towards imperfect information games , such as Poker ( Brown & Sandholm , 2018 ; Moravčík et al. , 2017 ) , Dota 2 1 , and real-time strategy games ( Arulkumaran et al. , 2019 ; Tian et al. , 2017 ) . There are multiple programs which focus specifically on card games . Libratus ( Brown & Sandholm , 2018 ) and DeepStack ( Moravčík et al. , 2017 ) outperform human experts in two-player Texas Holdem . Bayesian Action Decoder ( Foerster et al. , 2018b ) is able to achieve near optimal performance in multi-player collaborative games like Hanabi . Contract Bridge , or simply Bridge , is a trick-taking card game with 2 teams , each with 2 players . There are 52 cards ( 4 suits , each with 13 cards ) . Each player is dealt 13 cards . The game has two phases : bidding and playing . In the bidding phase , each player can only see their own cards and negotiates in turn by proposing contracts , which set an explicit goal to aim at during the playing stage . High contracts override low ones . Players with stronger cards aim at high contracts for high reward ; if they fail to reach the contract , the opponent team receives the reward . Therefore , players utilize the bidding phase to reason about their teammate's and opponents' cards for a better final contract . In the playing phase , one player reveals their cards publicly . In each round , each player plays one card in turn and the player with the best card wins the round . The score is simply how many rounds each team can win . We introduce the game in more detail in Appendix A . Historically , AI programs can handle the playing phase well . Back in 1999 , the GIB program ( Ginsberg , 1999 ) placed 12th among 34 human expert partnerships , in a competition without the bidding phase . In more recent years , Jack 2 and Wbridge5 3 , champions of computer bridge tournaments , have demonstrated strong performance against top level professional humans . 1https : //openai.com/blog/openai-five/ 2http : //www.jackbridge.com/eindex.htm 3http : //www.wbridge5.com/ On the other hand , the bidding phase is very challenging for computer programs . During the bidding phase a player can only access his own 13 cards ( private information ) and the bidding history ( public information ) . They need to exchange information with their partners and try to prevent their opponents from doing so through a sequence of non-decreasing bids . Moreover , these bids also carry the meaning of suggesting a contract . If the bid surpasses the highest contract they can make , they will get a negative score and risk being doubled . Thus , the amount of information exchange is constrained and dependent on the actual hands . Nevertheless , the state space is very large . A player can hold 6.35 × 10^11 unique hands and there are 10^47 possible bidding sequences . Humans have designed many hand-crafted rules and heuristics to cover these cases , called bidding systems , and have designated a meaning to many common bidding sequences . However , due to the large state space , the meaning of these sequences is sometimes ambiguous or conflicting . The bidding system itself also has room for improvement .
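The hand count quoted above can be verified directly with a one-line combinatorial check (standard library only); the bidding-sequence count is not reproduced here.

```python
# Quick check of the state-space figure quoted above: the number of distinct
# 13-card hands that can be dealt from a 52-card deck.
import math

print(math.comb(52, 13))  # 635013559600, i.e. about 6.35 x 10^11, matching the text
```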
The award winning programs often implement a subset of some specified human bidding system . Recently , there are also attempts to learn such a bidding system automatically through reinforcement learning . These methods either focus on bidding in the collaborative only setting , where both opponents will bid PASS throughout ( Tian et al. , 2018 ; Yeh & Lin , 2016 ) , or heavily used human expert data for extra supervision ( Rong et al. , 2019 ) . In this work , we propose a system that is the state-of-the-art in competitive bridge bidding . It allows end-to-end training without any human knowledge through selfplay . We propose a novel bidding history representation , and remove any explicit modeling of belief in other agent ’ s state , which are shown to be critical in previous works ( Rong et al. , 2019 ; Tian et al. , 2018 ) . We show that selfplay schedule and details are critical in learning imperfect information games . We use a much smaller model ( about 1/70 in total parameters compared with previous state-of-the-art ( Rong et al. , 2019 ) ) , and reach better performance than the baselines ( Rong et al. , 2019 ; Yeh & Lin , 2016 ) . Furthermore , we outperform world computer bridge championship Wbridge5 by 0.41 IMPs per board over a tournament of 64 boards . Finally , we show an interpretation of the trained system , and will open source the code , model , and experimental data we use . 2 RELATED WORK . Imperfect information games , especially card games , have drawn multiple researchers ’ attention . Prior works on two-player Texas Holdem mainly focus on finding the Nash Equilibrium through variations of counterfactual regret minimization ( Zinkevich et al. , 2008 ) . Libratus ( Brown & Sandholm , 2018 ) utilizes nested safe subgame solving and handles off-tree actions by real time computing . It also has a built-in self improver to enhance the background blueprint strategy . DeepStack ( Moravčík et al. , 2017 ) proposed to use a value network to approximate the value function of the state . They both outperform top human experts in the field . Bayesian Action Decoder ( BAD ) ( Foerster et al. , 2018b ) proposes to model public belief and private belief separately , and sample policy based on an evolving deterministic communication protocol . This protocol is then improved through Bayesian updates . BAD is able to reach near optimal results in two-player Hanabi , outperforming previous methods by a significant margin . In recent years there are also multiple works specifically focusing on contract bridge . Yeh and Lin ( Yeh & Lin , 2016 ) uses deep reinforcement learning to train a bidding model in the collaborative setting . It proposes Penetrative Bellman ’ s Equation ( PBE ) to make the Q-function updates more efficient . The limitation is that PBE can only handle fixed number of bids , which are not realistic in a normal bridge game setting . We refer to this approach as baseline16 . Tian et al ( Tian et al. , 2018 ) proposes Policy Belief Learning ( PBL ) to alternate training between policy learning and belief learning over the whole selfplay process . PBL also only works on the collaborative setting . Rong et al ( Rong et al. , 2019 ) proposes two networks , Estimation Neural Network ( ENN ) and Policy Neural Network ( PNN ) to train a competitive bridge model . ENN is first trained supervisedly from human expert data , and PNN is then learned based on ENN . 
After learning PNN and ENN from human expert data , the two networks are further trained jointly through reinforcement learning and selfplay . PBE claims to be better than Wbridge5 in the collaborative setting , while PNN and ENN outperform Wbridge5 in the competitive setting . We refer to this approach as baseline19 . Selfplay methods have been proposed for a long time . Back in 1951 , Brown ( Brown , 1951 ) proposed fictitious play in imperfect information games to find the Nash Equilibrium . This is a classic selfplay algorithm in game theory and has inspired many extensions and applications ( Brown & Sandholm , 2018 ; Heinrich et al. , 2015 ; Heinrich & Silver , 2016 ; Moravčík et al. , 2017 ) . Large-scale selfplay algorithms did not emerge until recent years , partially due to computational constraints . AlphaGo ( Silver et al. , 2016 ) used selfplay to train a value network and defeated the human Go champion Lee Sedol 4:1 . AlphaGoZero ( Silver et al. , 2017 ) and AlphaZero ( Silver et al. , 2018 ) completely discard human knowledge and train superhuman models from scratch . In Dota 2 and StarCraft , selfplay is also used extensively to train models to outperform professional players . Belief modeling is also very critical in previous works on imperfect information games . Besides the previously mentioned card game agents ( Foerster et al. , 2018b ; Rong et al. , 2019 ; Tian et al. , 2018 ) , LOLA agents ( Foerster et al. , 2018a ) are trained with anticipated learning of other agents . StarCraft Defogger ( Synnaeve et al. , 2018 ) also tries to reason about states of unknown territory in real-time strategy games . 3 METHOD . 3.1 PROBLEM SETUP . We focus on the bidding part of the bridge game . The Double Dummy Solver ( DDS ) 4 computes the maximum tricks each side can get during the playing phase if all the plays are optimal . Previous work shows that DDS is a good approximation of real plays by human experts ( Rong et al. , 2019 ) , so we directly use the results of DDS at the end of the bidding phase to assign a reward to each side . The training dataset contains 2.5 million randomly generated hands along with their precomputed DDS results . The evaluation dataset contains 100k such hands . We will open source this data for the community and future work . Inspired by the format of duplicate bridge tournaments , during training and evaluation , each hand is played twice , where a specific partnership sits North-South in one game , and East-West in another . The difference in the results of the two tables is the final reward . In this way , the impact of randomness in the hands is reduced to a minimum and the model's true strength can be better evaluated . The difference in scores is then converted to the IMPs scale , and then normalized to [ -1 , 1 ] . 3.2 INPUT REPRESENTATION . We encode the state of a bridge game into a 267-bit vector . The first 52 bits indicate whether the current player holds a specific card . The next 175 bits encode the bidding history , which consists of 5 segments of 35 bits each . These 35-bit segments correspond to the 35 contract bids . The first segment indicates whether the current player has made a corresponding bid in the bidding history . Similarly , the next 3 segments encode the contract bid history of the current player's partner , left opponent and right opponent . The last segment indicates whether a corresponding contract bid has been doubled or redoubled . Since the bidding sequence can only be non-decreasing , the order of these bids is implicitly conveyed .
The next 2 bits encode the current vulnerability of the game , corresponding to the vulnerability of North-South and East-West respectively . Finally , the last 38 bits indicate whether each action is legal , given the current bidding history . We emphasize that this encoding is quite general and contains little domain-specific information . baseline19 presents a novel bidding history representation using positions in the maximal possible bidding sequence , which is highly specific to the contract bridge game . 4https : //github.com/dds-bridge/dds
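The 267-bit encoding described in Section 3.2 can be sketched as follows. This is an illustration rather than the released code: the exact indexing of cards, bids, seats, and legal actions is an assumption.

```python
# Minimal sketch (not the authors' code) of the 267-bit state encoding:
# 52 bits for the hand, 5 x 35 bits of bidding history, 2 vulnerability bits,
# and 38 legal-action bits (35 contract bids plus pass/double/redouble).
import numpy as np

N_CARDS, N_CONTRACT_BIDS, N_ACTIONS = 52, 35, 38

def encode_state(hand, bid_history, doubled, vulnerability, legal_mask):
    """hand: indices of cards held; bid_history: dict seat -> set of bid indices
    (0 = self, 1 = partner, 2 = left opponent, 3 = right opponent);
    doubled: bid indices that were doubled/redoubled; vulnerability: (ns, ew);
    legal_mask: length-38 booleans."""
    v = np.zeros(267, dtype=np.float32)
    v[list(hand)] = 1.0                                       # 52 bits: own cards
    for seat in range(4):                                     # 4 x 35 bits: who made which bid
        for b in bid_history.get(seat, ()):
            v[N_CARDS + seat * N_CONTRACT_BIDS + b] = 1.0
    for b in doubled:                                         # 35 bits: doubled/redoubled bids
        v[N_CARDS + 4 * N_CONTRACT_BIDS + b] = 1.0
    v[227:229] = vulnerability                                # 2 bits: NS / EW vulnerability
    v[229:267] = np.asarray(legal_mask, dtype=np.float32)     # 38 bits: legal actions
    return v
```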
The authors propose a deep learning agent for automatic bidding in the bridge game. The agent is trained with a standard A3C reinforcement learning model with self-play, and the internal neural network only takes a rather succinct representation of the bidding history as the input. Experiment results demonstrate state-of-the-art performance with a simpler model. The authors discuss some findings with the proposed agent, such as the lack of need to explicitly model the belief and the possibility to self-train with different variants of opponents. Some visualization is also provided to understand how the trained agent behaves.
SP:e5435a2d586d6f2bebf436aae8a7fb3602064ab8
Simple is Better: Training an End-to-end Contract Bridge Bidding Agent without Human Knowledge
1 INTRODUCTION . Games have long been recognized as a testbed for reinforcement learning . Recent technology advancements have outperformed top level experts in perfect information games like Chess ( Campbell et al. , 2002 ) and Go ( Silver et al. , 2016 ; 2017 ) , through human supervision and selfplay . During recent years researchers have also steered towards imperfection information games , such as Poker ( Brown & Sandholm , 2018 ; Moravčík et al. , 2017 ) , Dota 2 1 , and real-time strategy games ( Arulkumaran et al. , 2019 ; Tian et al. , 2017 ) . There are multiple programs which focus specifically in card games . Libratus ( Brown & Sandholm , 2018 ) and DeepStack ( Moravčík et al. , 2017 ) outperforms human experts in two-player Texas Holdem . Bayesian Action Decoder ( Foerster et al. , 2018b ) is able to achieve near optimal performance in multi-player collaborative games like Hanabi . Contract Bridge , or simply Bridge , is a trick-taking card game with 2 teams , each with 2 players . There are 52 cards ( 4 suits , each with 13 cards ) . Each player is dealt with 13 cards . The game has two phases : bidding and playing . In the bidding phase , each player can only see their own card and negotiate in turns via proposing contract , which sets an explicit goal to aim at during the playing stage . High contracts override low ones . Players with stronger cards aim at high contracts for high reward ; while failing to reach the contract , the opponent team receives rewards . Therefore , players utilize the bidding phase to reason about their teammate and opponents ’ cards for a better final contract . In the playing phase , one player reveals their cards publicly . In each round , each player plays one card in turn and the player with best card wins the round . The score is simply how many rounds each team can win . We introduce the game in more detail in Appendix A . Historically AI programs can handle the playing phase well . Back in 1999 , the GIB program ( Ginsberg , 1999 ) placed 12th among 34 human experts partnership , in a competition without the bidding phase . In more recent years , Jack 2 and Wbridge5 3 , champions of computer bridge tournament , has demonstrated strong performances against top level professional humans . 1https : //openai.com/blog/openai-five/ 2http : //www.jackbridge.com/eindex.htm 3http : //www.wbridge5.com/ On the other hand , the bidding phase is very challenging for computer programs . During the bidding phase a player can only access his own 13 cards ( private information ) and the bidding history ( public information ) . They need to exchange information with their partners and try to interfere opponents from doing so through a sequences of non-decreasing bids . Moreover these bids also carry the meaning of suggesting a contract . If the bid surpasses the highest contract they can make , they will get negative score and risk of being doubled . Thus , the amount of information exchange is constrained and dependent on the actual hands . Nevertheless the state space is very large . A player can hold 6.35 × 1011 unique hands and there are 1047 possible bidding sequences . Human has designed a lot of hand-crafted rules and heuristics to cover these cases , called bidding system , and designated a meaning to many common bidding sequences . However , due to large state space , the meaning of these sequences are sometimes ambiguous or conflicting . The bidding system itself also has room for improvement . 
The award winning programs often implement a subset of some specified human bidding system . Recently , there are also attempts to learn such a bidding system automatically through reinforcement learning . These methods either focus on bidding in the collaborative only setting , where both opponents will bid PASS throughout ( Tian et al. , 2018 ; Yeh & Lin , 2016 ) , or heavily used human expert data for extra supervision ( Rong et al. , 2019 ) . In this work , we propose a system that is the state-of-the-art in competitive bridge bidding . It allows end-to-end training without any human knowledge through selfplay . We propose a novel bidding history representation , and remove any explicit modeling of belief in other agent ’ s state , which are shown to be critical in previous works ( Rong et al. , 2019 ; Tian et al. , 2018 ) . We show that selfplay schedule and details are critical in learning imperfect information games . We use a much smaller model ( about 1/70 in total parameters compared with previous state-of-the-art ( Rong et al. , 2019 ) ) , and reach better performance than the baselines ( Rong et al. , 2019 ; Yeh & Lin , 2016 ) . Furthermore , we outperform world computer bridge championship Wbridge5 by 0.41 IMPs per board over a tournament of 64 boards . Finally , we show an interpretation of the trained system , and will open source the code , model , and experimental data we use . 2 RELATED WORK . Imperfect information games , especially card games , have drawn multiple researchers ’ attention . Prior works on two-player Texas Holdem mainly focus on finding the Nash Equilibrium through variations of counterfactual regret minimization ( Zinkevich et al. , 2008 ) . Libratus ( Brown & Sandholm , 2018 ) utilizes nested safe subgame solving and handles off-tree actions by real time computing . It also has a built-in self improver to enhance the background blueprint strategy . DeepStack ( Moravčík et al. , 2017 ) proposed to use a value network to approximate the value function of the state . They both outperform top human experts in the field . Bayesian Action Decoder ( BAD ) ( Foerster et al. , 2018b ) proposes to model public belief and private belief separately , and sample policy based on an evolving deterministic communication protocol . This protocol is then improved through Bayesian updates . BAD is able to reach near optimal results in two-player Hanabi , outperforming previous methods by a significant margin . In recent years there are also multiple works specifically focusing on contract bridge . Yeh and Lin ( Yeh & Lin , 2016 ) uses deep reinforcement learning to train a bidding model in the collaborative setting . It proposes Penetrative Bellman ’ s Equation ( PBE ) to make the Q-function updates more efficient . The limitation is that PBE can only handle fixed number of bids , which are not realistic in a normal bridge game setting . We refer to this approach as baseline16 . Tian et al ( Tian et al. , 2018 ) proposes Policy Belief Learning ( PBL ) to alternate training between policy learning and belief learning over the whole selfplay process . PBL also only works on the collaborative setting . Rong et al ( Rong et al. , 2019 ) proposes two networks , Estimation Neural Network ( ENN ) and Policy Neural Network ( PNN ) to train a competitive bridge model . ENN is first trained supervisedly from human expert data , and PNN is then learned based on ENN . 
After learning PNN and ENN from human expert data , the two network are further trained jointly through reinforcement learning and selfplay . PBE claims to be better than Wbridge5 in the collaborative setting , while PNN and ENN outperforms Wbridge5 in the competitive setting . We refer to this approach as baseline19 . Selfplay methods have been proposed for a long time . Back in 1951 , Brown et al ( Brown , 1951 ) proposes fictitious play in imperfect information games to find the Nash Equilibrium . This is a classic selfplay algorithm in game theory and inspires many extensions and applications ( Brown & Sandholm , 2018 ; Heinrich et al. , 2015 ; Heinrich & Silver , 2016 ; Moravčík et al. , 2017 ) . Large scale selfplay algorithms do not emerge until recent years , partially due to computation constraint . AlphaGo ( Silver et al. , 2016 ) uses selfplay to train a value network to defeat the human Go champion Lee Sedol 4:1 . AlphaGoZero ( Silver et al. , 2017 ) and AlphaZero ( Silver et al. , 2018 ) completely discard human knowledge and train superhuman models from scratch . In Dota 2 and StarCraft , selfplay is also used extensively to train models to outperform professional players . Belief modeling is also very critical in previous works about imperfect information games . Besides the previous mentioned card game agents ( Foerster et al. , 2018b ; Rong et al. , 2019 ; Tian et al. , 2018 ) , LOLA agents ( Foerster et al. , 2018a ) are trained with anticipated learning of other agents . StarCraft Defogger ( Synnaeve et al. , 2018 ) also tries to reason about states of unknown territory in real time strategy games . 3 METHOD . 3.1 PROBLEM SETUP . We focus on the bidding part of the bridge game . Double Dummy Solver ( DDS ) 4 computes the maximum tricks each side can get during the playing phase if all the plays are optimal . Previous works show that DDS is a good approximate to human expert real plays ( Rong et al. , 2019 ) , so we directly use the results of DDS at the end of bidding phase to assign reward to each side . The training dataset contains randomly generated 2.5 million hands along with their precomputed DDS results . The evaluation dataset contains 100k such hands . We will open source this data for the community and future work . Inspired by the format of duplicate bridge tournament , during training and evaluation , each hand is played twice , where a specific partnership sits North-South in one game , and East-West in another . The difference in the results of the two tables is the final reward . In this way , the impact of randomness in the hands is reduced to minimum and model ’ s true strength can be better evaluated . The difference in scores is then converted to IMPs scale , and then normalized to [ -1 , 1 ] . 3.2 INPUT REPRESENTATION . We encode the state of a bridge game to a 267 bit vector . The first 52 bits indicate that if the current player holds a specific card . The next 175 bits encodes the bidding history , which consists of 5 segments of 35 bits each . These 35 bit segments correspond to 35 contract bids . The first segment indicates if the current player has made a corresponding bid in the bidding history . Similarly , the next 3 segments encodes the contract bid history of the current player ’ s partner , left opponent and right opponent . The last segment indicates that if a corresponding contract bid has been doubled or redoubled . Since the bidding sequence can only be non-decreasing , the order of these bids are implicitly conveyed . 
The next 2 bits encode the current vulnerability of the game, corresponding to the vulnerability of North-South and East-West respectively. Finally, the last 38 bits indicate whether each action is legal given the current bidding history. We emphasize that this encoding is quite general and carries little domain-specific information. In contrast, baseline19 presents a bidding history representation based on positions in the maximal possible bidding sequence, which is highly specific to contract bridge. 4https://github.com/dds-bridge/dds
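As a concrete illustration of this encoding, the sketch below assembles the 267-bit vector from a hand, a bidding history, the vulnerability, and the legality mask. It is a minimal sketch based only on the layout described above; the helper signature, the bid indexing (0 to 34 for the 35 contract bids), and the assumed 38-action space (35 contract bids plus PASS, DOUBLE, and REDOUBLE) are our assumptions, not the authors' released code.

```python
import numpy as np

NUM_CARDS = 52          # 52 bits: does the current player hold each card?
NUM_CONTRACT_BIDS = 35  # contract bids 1C ... 7NT, indexed 0..34
NUM_ACTIONS = 38        # assumed action set: 35 contract bids + PASS + DOUBLE + REDOUBLE

def encode_state(hand, history, vulnerability, legal_actions):
    """Build the 267-bit state vector described in Section 3.2.

    hand           -- iterable of card indices in [0, 52) held by the current player
    history        -- list of (relative_seat, bid_index, modifier) tuples, where
                      relative_seat is 0=self, 1=partner, 2=left opp., 3=right opp.,
                      bid_index is in [0, 35), and modifier marks a double/redouble
    vulnerability  -- (ns_vulnerable, ew_vulnerable) booleans
    legal_actions  -- iterable of currently legal action indices in [0, 38)
    """
    v = np.zeros(267, dtype=np.float32)

    # 52 bits: the current player's holding.
    for c in hand:
        v[c] = 1.0

    # 175 bits: five 35-bit segments (self, partner, left opp., right opp., doubled/redoubled).
    base = NUM_CARDS
    for seat, bid, modifier in history:
        v[base + seat * NUM_CONTRACT_BIDS + bid] = 1.0
        if modifier:  # this contract bid was doubled or redoubled
            v[base + 4 * NUM_CONTRACT_BIDS + bid] = 1.0

    # 2 bits: vulnerability of North-South and East-West.
    v[227], v[228] = float(vulnerability[0]), float(vulnerability[1])

    # 38 bits: legality mask over the action space.
    for a in legal_actions:
        v[229 + a] = 1.0
    return v
```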
This paper develops a method to train agents to bid competitively in the game of Bridge. The authors focus on the bidding phase of the game and develop a model to predict the best bid to make at each turn of the phase. The difficulty in the bidding lies in understanding the signals provided by your own teammate as well as the opponent team in order to estimate the state of the game, which is partially observable since each player cannot see the hands of the three other players. The authors show that explicitly modeling the belief of other agents is not necessary and that competitive performance can be achieved with self-play.
SP:e5435a2d586d6f2bebf436aae8a7fb3602064ab8
Selection via Proxy: Efficient Data Selection for Deep Learning
1 INTRODUCTION . Data selection methods , such as active learning and core-set selection , improve the data efficiency of machine learning by identifying the most informative training examples . To quantify informativeness , these methods depend on semantically meaningful features or a trained model to calculate uncertainty . Concretely , active learning selects points to label from a large pool of unlabeled data by repeatedly training a model on a small pool of labeled data and selecting additional examples to label based on the model ’ s uncertainty ( e.g. , the entropy of predicted class probabilities ) or other heuristics ( Lewis & Gale , 1994 ; Rosenberg et al. , 2005 ; Settles , 2011 ; 2012 ) . Conversely , core-set selection techniques start with a large labeled or unlabeled dataset and aim to find a small subset that accurately approximates the full dataset by selecting representative examples ( Har-Peled & Kushal , 2007 ; Tsang et al. , 2005 ; Huggins et al. , 2016 ; Campbell & Broderick , 2017 ; 2018 ; Sener & Savarese , 2018 ) . Unfortunately , classical data selection methods are often prohibitively expensive to apply in deep learning ( Shen et al. , 2017 ; Sener & Savarese , 2018 ; Kirsch et al. , 2019 ) . Deep learning models learn complex internal semantic representations ( hidden layers ) from raw inputs ( e.g. , pixels or characters ) that enable them to achieve state-of-the-art performance but result in substantial training times . Many core-set selection and active learning techniques require some feature representation before they can accurately identify informative points either to take diversity into account or as part of a trained model to quantify uncertainty . As a result , new deep active learning methods request labels in large batches to avoid retraining the model too many times ( Shen et al. , 2017 ; Sener & Savarese , 2018 ; Kirsch et al. , 2019 ) . However , batch active learning still requires training a full deep model for every batch , which is costly for large models ( He et al. , 2016b ; Jozefowicz et al. , 2016 ; Vaswani et al. , 2017 ) . Similarly , core-set selection applications mitigate the training time of deep learning models by using ∗Correspondence : cody @ cs.stanford.edu bespoke combinations of hand-engineered features and simple models ( e.g. , hidden Markov models ) pretrained on auxiliary tasks ( Wei et al. , 2013 ; 2014 ; Tschiatschek et al. , 2014 ; Ni et al. , 2015 ) . In this paper , we propose selection via proxy ( SVP ) as a way to make existing data selection methods more computationally efficient for deep learning . SVP uses the feature representation from a separate , less computationally intensive proxy model in place of the representation from the much larger and more accurate target model we aim to train . SVP builds on the idea of heterogeneous uncertainty sampling from Lewis & Catlett ( 1994 ) , which showed that an inexpensive classifier ( e.g. , naïve Bayes ) can select points to label for a much more computationally expensive classifier ( e.g. , decision tree ) . In our work , we show that small deep learning models can similarly serve as an inexpensive proxy for data selection in deep learning , significantly accelerating both active learning and core-set selection across a range of datasets and selection methods . To create these cheap proxy models , we can scale down deep learning models by removing layers , using smaller model architectures , and training them for fewer epochs . 
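As one concrete way to realize such a proxy, the sketch below pairs a large target network with a shallower one trained for a fraction of the epochs. The particular architectures (ResNet-50 as target, ResNet-18 as proxy) and the one-quarter epoch budget are illustrative assumptions, not the exact configurations studied in this paper.

```python
import torchvision.models as models

def build_target_and_proxy(num_classes, target_epochs=180):
    """Pair a large target model with a scaled-down proxy used only for data selection."""
    # Target: the large, accurate model we ultimately want to train on the selected data.
    target = models.resnet50(num_classes=num_classes)

    # Proxy: fewer layers and a smaller architecture, trained for a fraction of the epochs.
    proxy = models.resnet18(num_classes=num_classes)
    proxy_epochs = max(1, target_epochs // 4)

    return target, proxy, proxy_epochs
```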
While these scaled-down models achieve significantly lower accuracy than larger models , we surprisingly find that they still provide useful representations to rank and select points . Specifically , we observe high Spearman ’ s and Pearson ’ s correlations between the rankings from small proxy models and the larger , more accurate target models on metrics including uncertainty ( Settles , 2012 ) , forgetting events ( Toneva et al. , 2019 ) , and submodular algorithms such as greedy k-centers ( Wolf , 2011 ) . Because these proxy models are quick to train ( often 10× faster ) , we can identify which points to select nearly as well as the larger target model but significantly faster . We empirically evaluated SVP for active learning and core-set selection on five datasets : CIFAR10 , CIFAR100 ( Krizhevsky & Hinton , 2009 ) , ImageNet ( Russakovsky et al. , 2015 ) , Amazon Review Polarity , and Amazon Review Full ( Zhang et al. , 2015 ) . For active learning , we considered both least confidence uncertainty sampling ( Settles , 2012 ; Shen et al. , 2017 ; Gal et al. , 2017 ) and the greedy k-centers approach from Sener & Savarese ( 2018 ) with a variety of proxies . Across all datasets , we found that SVP matches the accuracy of the traditional approach of using the same large model for both selecting points and the final prediction task . Depending on the proxy , SVP yielded up to a 7× speed-up on CIFAR10 and CIFAR100 , 41.9× speed-up on Amazon Review Polarity and Full , and 2.9× speed-up on ImageNet in data selection runtime ( i.e. , the time it takes to repeatedly train and select points ) . For core-set selection , we tried three methods to identify a subset of points : max entropy uncertainty sampling ( Settles , 2012 ) , greedy k-centers as a submodular approach ( Wolf , 2011 ) , and the recent approach of forgetting events ( Toneva et al. , 2019 ) . For each method , we found that smaller proxy models have high Spearman ’ s rank-order correlations with models that are 10× larger and performed as well as these large models at identifying subsets of points to train on that yield high test accuracy . On CIFAR10 , SVP applied to forgetting events removed 50 % of the data without impacting the accuracy of ResNet164 with pre-activation ( He et al. , 2016b ) , using a 10× faster model than ResNet164 to make the selection . This substitution yielded an end-to-end training time improvement of about 1.6× for ResNet164 ( including the time to train and use the proxy ) . Taken together , these results demonstrate that SVP is a promising , yet simple approach to make data selection methods computationally feasible for deep learning . While we focus on active learning and core-set selection , SVP is widely applicable to methods that depend on learned representations . 2 METHODS . In this section , we describe SVP and show how it can be incorporated into active learning and core-set selection . Figure 1 shows an overview of SVP : in active learning , we retrain a proxy model APk in place of the target model ATk after each batch is selected , and in core-set selection , we train the proxy AP [ n ] rather than the target A T [ n ] over all the data to learn a feature representation and select points . 2.1 ACTIVE LEARNING . Pool-based active learning starts with a large pool of unlabeled data U = { xi } i∈ [ n ] where [ n ] = { 1 , . . . , n } . Each example is from the space X with an unknown label from the label space Y and is sampled i.i.d . over the space Z = X × Y as ( xi , yi ) ∼ pZ . 
Initially , methods label a small pool of points s0 = { s0j ∈ [ n ] } j∈ [ m ] chosen uniformly at random . Given U , a loss function ` , and the labels { ys0j } j∈ [ m ] for the initial random subset , the goal of active learning is to select up to a budget of b points s = s0 ∪ { sj ∈ [ n ] \ s0 } j∈ [ b−m ] to label that produces a model As with low error . Baseline . In this paper , we apply SVP to least confidence uncertainty sampling ( Settles , 2012 ; Shen et al. , 2017 ; Gal et al. , 2017 ) and the recent greedy k-centers approach from Sener & Savarese ( 2018 ) . Like recent work for deep active learning ( Shen et al. , 2017 ; Sener & Savarese , 2018 ; Kirsch et al. , 2019 ) , we consider a batch setting with K rounds where we select bK points in every round . Following Gal et al . ( 2017 ) ; Sener & Savarese ( 2018 ) ; Kirsch et al . ( 2019 ) , we reinitialize the target model and retrain on all of the labeled data from the previous k rounds to avoid any correlation between selections ( Frankle & Carbin , 2018 ; Kirsch et al. , 2019 ) . We denote this trained model as ATs0∪ ... ∪sk or just ATk for simplicity . Then using A T k , we either calculate the model ’ s confidence as : fconfidence ( x ; A T k ) = 1−max ŷ P ( ŷ|x ; ATk ) and select the examples with the lowest confidence or extract a feature representation from the model ’ s final hidden layer and compute the distance between examples ( i.e. , ∆ ( xi , xj ; ATk ) ) to select points according to the greedy k-centers method from Wolf ( 2011 ) ; Sener & Savarese ( 2018 ) ( Algorithm 1 ) . The same model is trained on the final b labeled points to yield the final model , ATK , which is then tested on a held-out set to evaluate error and quantify the quality of the selected data . Although other selection approaches exist , least confidence uncertainty sampling and greedy k-centers cover the spectrum of uncertainty-based and representativeness-based approaches for deep active learning . Other uncertainty metrics such as entropy or margin were highly correlated with confidence when using the same trained model ( i.e. , above a 0.96 Spearman ’ s correlation in our experiments on CIFAR ) . Query-by-committee ( Seung et al. , 1992 ) can be prohibitively expensive in deep learning , where training a single model is already costly . BALD ( Houlsby et al. , 2011 ) has seen success in deep learning ( Gal et al. , 2017 ; Shen et al. , 2017 ) but is restricted to Bayesian neural networks or networks with dropout ( Srivastava et al. , 2014 ) as an approximation ( Gal & Ghahramani , 2016 ) . Algorithm 1 GREEDY K-CENTERS ( WOLF , 2011 ; SENER & SAVARESE , 2018 ) Input : data xi , existing pool s0 , trained model AT0 , and a budget b 1 : Initialize s = s0 2 : repeat 3 : u = arg maxi∈ [ n ] \s minj∈s ∆ ( xi , xj ; A T 0 ) 4 : s = s ∪ { u } 5 : until |s| = b + |s0| 6 : return s \ s0 Algorithm 2 FORGETTING EVENTS ( TONEVA ET AL. , 2019 ) 1 : Initialize prev_acci = 0 , i ∈ [ n ] 2 : Initialize forgetting_eventsi = 0 , i ∈ [ n ] 3 : while training is not done do 4 : Sample mini-batch B from L 5 : for example i ∈ B do 6 : Compute acci 7 : if prev_acci > acci then 8 : forgetting_eventsi += 1 9 : prev_acci = acci 10 : Gradient update classifier on B 11 : return forgetting_events
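To make the selection rules above concrete, the following NumPy sketch implements least-confidence sampling and the greedy k-centers rule of Algorithm 1, operating on class probabilities and final-hidden-layer features produced by whichever model (proxy or target) is used for selection. It is a minimal sketch under those interface assumptions rather than the authors' released code.

```python
import numpy as np

def least_confidence_selection(probs, unlabeled_idx, budget):
    """probs: (n, num_classes) predicted class probabilities for the whole pool."""
    confidence = probs[unlabeled_idx].max(axis=1)       # f_confidence = 1 - max_y P(y|x)
    order = np.argsort(confidence)                       # lowest confidence first
    return [unlabeled_idx[i] for i in order[:budget]]

def greedy_k_centers(features, labeled_idx, budget):
    """Algorithm 1: repeatedly add the point farthest from the current set of centers.

    features: (n, d) representations from the model's final hidden layer.
    labeled_idx: indices of the already-labeled pool s0 (assumed non-empty).
    """
    selected = list(labeled_idx)
    # Distance from every point to its nearest already-selected center.
    dists = np.linalg.norm(features[:, None, :] - features[selected][None, :, :],
                           axis=-1).min(axis=1)
    new_points = []
    for _ in range(budget):
        u = int(np.argmax(dists))                        # farthest point from all centers
        new_points.append(u)
        selected.append(u)
        # Update nearest-center distances with the newly added center.
        dists = np.minimum(dists, np.linalg.norm(features - features[u], axis=1))
    return new_points
```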
The paper proposes a method for selecting a subset of a large dataset to reduce the computational costs of deep neural netwoks. The main idea is to train a proxy model, a smaller version of the full neural network, to choose important data points for active learning or core-set selection. Experiments on standard classification tasks demonstrate that this approach can yield substantial computational savings with only a small drop in accuracy.
SP:fa4272fd8c8acea21a01d8fd6542a51534c1aee8
This paper presents a method to speed up the data selection in active learning and core-set learning. The authors present a simple idea: instead of using the full model to select data points, they use a smaller model with fewer layers, potentially trained for fewer iterations. The authors show that this simple approach is able to speed up the data selection portion of both processes significantly with minimal loss in performance, and also results in significant speedup of the entire pipeline (data selection + training).
SP:fa4272fd8c8acea21a01d8fd6542a51534c1aee8
Mutual Information Gradient Estimation for Representation Learning
1 INTRODUCTION . Mutual information ( MI ) is an appealing metric widely used in information theory and machine learning to quantify the amount of shared information between a pair of random variables . Specifically , given a pair of random variables x , y , the MI , denoted by I ( x ; y ) , is defined as I ( x ; y ) = Ep ( x , y ) [ log p ( x , y ) p ( x ) p ( y ) ] , ( 1 ) where E is the expectation over the given distribution . Since MI is invariant to invertible and smooth transformations , it can capture non-linear statistical dependencies between variables ( Kinney & Atwal , 2014 ) . These appealing properties make it act as a fundamental measure of true dependence . Therefore , MI has found applications in a wide range of machine learning tasks , including feature selection ( Kwak & Choi , 2002 ; Fleuret , 2004 ; Peng et al. , 2005 ) , clustering ( Müller et al. , 2012 ; Ver Steeg & Galstyan , 2015 ) , and causality ( Butte & Kohane , 1999 ) . It has also been pervasively used in science , such as biomedical sciences ( Maes et al. , 1997 ) , computational biology ( Krishnaswamy et al. , 2014 ) , and computational neuroscience ( Palmer et al. , 2015 ) . Recently , there has been a revival of methods in unsupervised representation learning based on MI . A seminal work is the InfoMax principle ( Linsker , 1988 ) , where given an input instance x , the goal of the InfoMax principle is to learn a representation Eψ ( x ) by maximizing the MI between the input and its representation . A growing set of recent works have demonstrated promising empirical performance in unsupervised representation learning via MI maximization ( Krause et al. , 2010 ; Hu et al. , 2017 ; Alemi et al. , 2018b ; Oord et al. , 2018 ; Hjelm et al. , 2019 ) . Another closely related work is the Information Bottleneck method ( Tishby et al. , 2000 ; Alemi et al. , 2017 ) , where MI is used to limit the contents of representations . Specifically , the representations are learned by extracting taskrelated information from the original data while being constrained to discard parts that are irrelevant to the task . Several recent works have also suggested that by controlling the amount of information between learned representations and the original data , one can tune desired characteristics of trained models such as generalization error ( Tishby & Zaslavsky , 2015 ; Vera et al. , 2018 ) , robustness ( Alemi et al. , 2017 ) , and detection of out-of-distribution data ( Alemi et al. , 2018a ) . Despite playing a pivotal role across a variety of domains , MI is notoriously intractable . Exact computation is only tractable for discrete variables , or for a limited family of problems where the probability distributions are known . For more general problems , MI is challenging to analytically compute or estimate from samples . A variety of MI estimators have been developed over the years , including likelihood-ratio estimators ( Suzuki et al. , 2008 ) , binning ( Fraser & Swinney , 1986 ; Darbellay & Vajda , 1999 ; Shwartz-Ziv & Tishby , 2017 ) , k-nearest neighbors ( Kozachenko & Leonenko , 1987 ; Kraskov et al. , 2004 ; Pérez-Cruz , 2008 ; Singh & Póczos , 2016 ) , and kernel density estimators ( Moon et al. , 1995 ; Kwak & Choi , 2002 ; Kandasamy et al. , 2015 ) . However , few of these mutual information estimators scale well with dimension and sample size in machine learning problems ( Gao et al. , 2015 ) . In order to overcome the intractability of MI in the continuous and high-dimensional settings , Alemi et al . 
( 2017 ) combines variational bounds of Barber & Agakov ( 2003 ) with neural networks for the estimation . However , the tractable density for the approximate distribution is required due to variational approximation . This limits its application to the general-purpose estimation , since the underlying distributions are often unknown . Alternatively , the Mutual Information Neural Estimation ( MINE , Belghazi et al . ( 2018 ) ) and the Jensen-Shannon MI estimator ( JSD , Hjelm et al . ( 2019 ) ) enable differentiable and tractable estimation of MI by training a discriminator to distinguish samples coming from the joint distribution or the product of the marginals . In detail , MINE employs a lower-bound to the MI based on the Donsker-Varadhan representation of the KL-divergence , and JSD follows the formulation of f-GAN KL-divergence . In general , these estimators are often noisy and can lead to unstable training due to their dependence on the discriminator used to estimate the bounds of mutual information . As pointed out by Poole et al . ( 2019 ) , these unnormalized critic estimators of MI exhibit high variance and are challenging to tune for estimation . An alternative low-variance choice of MI estimator is Information Noise-Contrastive Estimation ( InfoNCE , Oord et al . ( 2018 ) ) , which introduces the Noise-Contrastive Estimation with flexible critics parameterized by neural networks as a bound to approximate MI . Nonetheless , its estimation saturates at log of the batch size and suffers from high bias . Despite their modeling power , none of the estimators are capable of providing accurate estimation of MI with low variance when the MI is large and the batch size is small ( Poole et al. , 2019 ) . As supported by the theoretical findings in McAllester & Statos ( 2018 ) , any distribution-free high-confidence lower bound on entropy requires a sample size exponential in the size of the bound . More discussions about the bounds of MI and their relationship can be referred to Poole et al . ( 2019 ) . In summary , existing estimators first approximate MI and then use these approximations to optimize the associated parameters . For estimating MI based on any finite number of samples , there exists an infinite number of functions , with arbitrarily diverse gradients , that can perfectly approximate the true MI at these samples . However , these approximate functions can lead to unstable training and poor performance in optimization due to gradients discrepancy between approximate estimation and true MI . Estimating gradients of MI rather than estimating MI may be a better approach for MI optimization . To this end , to the best of our knowledge , we firstly propose the Mutual Information Gradient Estimator ( MIGE ) in representation learning . In detail , we estimate the score function of an implicit distribution , ∇x log q ( x ) , to achieve a general-purpose MI gradient estimation for representation learning . In particular , to deal with high-dimensional inputs , such as text , images and videos , score function estimation via Spectral Stein Gradient Estimator ( SSGE ) ( Shi et al. , 2018 ) is computationally expensive and complex . We thus propose an efficient high-dimensional score function estimator to make SSGE scalable . To this end , we derive a new reparameterization trick for the representation distribution based on the lower-variance reparameterization trick proposed by Roeder et al . ( 2017 ) . 
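For reference, the two variational bounds discussed above can be stated compactly. The block below is a standard restatement (a paraphrase of results from the cited works, not a derivation from this paper) of the Donsker-Varadhan bound underlying MINE and of the InfoNCE bound, whose value cannot exceed log K for a batch of K samples.

```latex
% Donsker-Varadhan lower bound used by MINE, with critic T parameterized by a network:
I(x;y) \;\ge\; \sup_{T}\; \mathbb{E}_{p(x,y)}\!\left[T(x,y)\right]
              \;-\; \log \mathbb{E}_{p(x)p(y)}\!\left[e^{T(x,y)}\right].

% InfoNCE lower bound over a batch \{(x_i, y_i)\}_{i=1}^{K}; its value is capped at \log K:
I(x;y) \;\ge\; \mathbb{E}\!\left[\frac{1}{K}\sum_{i=1}^{K}
              \log \frac{e^{f(x_i, y_i)}}{\frac{1}{K}\sum_{j=1}^{K} e^{f(x_i, y_j)}}\right].
```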
We summarize the contributions of this paper as follows : • We propose the Mutual Information Gradient Estimator ( MIGE ) for representation learning based on the score function estimation of implicit distributions . Compared with MINE and MINE-f , MIGE provides a tighter and smoother gradient estimation of MI in a highdimensional and large-MI setting , as shown in Figure 1 of Section 4 . • We propose the Scalable SSGE to alleviate the exorbitant computational cost of SSGE in high-dimensional settings . • To learn meaningful representations , we apply SSGE as gradient estimators for both InfoMax and Information Bottlenck , and have achieved improved performance than their corresponding competitors . 2 SCALABLE SPECTRAL STEIN GRADIENT ESTIMATOR . Score estimation of implicit distributions has been widely explored in the past few years ( Song et al. , 2019 ; Li & Turner , 2017 ; Shi et al. , 2018 ) . A promising method of score estimation is the Stein gradient estimator ( Li & Turner , 2017 ; Shi et al. , 2018 ) , which is proposed for implicit distributions . It is inspired by generalized Steins identity ( Gorham & Mackey , 2015 ; Liu & Wang , 2016 ) as follows . Steins identity . Let q ( x ) be a continuously differentiable ( also called smooth ) density supported on X ⊆ Rd , and h ( x ) = [ h1 ( x ) , h2 ( x ) , . . . , hd′ ( x ) ] T is a smooth vector function . Further , the boundary conditions on h is q ( x ) h ( x ) = 0 , ∀x ∈ ∂X if X is compact , or lim x→∞ q ( x ) h ( x ) = 0 if X = Rd . ( 2 ) Under this condition , the following identity can be easily checked using integration by parts , assuming mild zero boundary conditions on h , Eq [ h ( x ) ∇x log q ( x ) T +∇xh ( x ) ] = 0 . ( 3 ) Here h is called the Stein class of q ( x ) if Steins identity Eq . ( 3 ) holds . Monte Carlo estimation of the expectation in Eq . ( 3 ) builds the connection between ∇x log q ( x ) and the samples from q ( x ) in Steins identity . For modeling implicit distributions , Motivated by Steins identity , Shi et al . ( 2018 ) proposed Spectral Stein Gradient Estimator ( SSGE ) for implicit distributions based on Stein ’ s identity and a spectral decomposition of kernel operators where the eigenfunctions are approximated by the Nyström method . Below we briefly review SSGE . More details refer to Shi et al . ( 2018 ) . Specifically , we denote the target gradient function to estimate by g : X → Rd : g ( x ) = ∇x log q ( x ) . The ith component of the gradient is gi ( x ) = ∇xi log q ( x ) . We assume g1 , . . . , gd ∈ L2 ( X , q ) . { ψj } j≥1 denotes an orthonormal basis of L 2 ( X , q ) . We can expand gi ( x ) into the spectral series , i.e. , gi ( x ) = ∑∞ j=1 βijψj ( x ) . The value of the j th eigenfunction ψj at x can be approximated by the Nyström method ( Xu et al. , 2015 ) . Due to the orthonormality of eigenfunctions { ψj } j≥1 , there is a constraint under the probability measure q ( . ) : ∫ ψi ( x ) ψj ( x ) q ( x ) dx = δij , where δij = 1 [ i = j ] . Based on this constraint , we can obtain the following equation for { ψj } j≥1 : ∫ k ( x , y ) ψ ( y ) q ( y ) dy = µψ ( x ) , ( 4 ) where k ( . ) is a kernel function . The left side of the above equation can be approximated by the Monte Carlo estimate using i.i.d . samples x1 , ... , xM from q ( . ) : 1MKψ ≈ µψ , where K is the Gram Matrix and ψ = [ ψ ( x1 ) , . . . , ψ ( xM ) ] > . We can solve this eigenvalue problem by choose the J largest eigenvalues λ1 ≥ · · · ≥ λJ for K. uj denotes the eigenvector of the Gram matrix . 
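The eigenvalue problem just described reduces to a symmetric eigendecomposition of the Gram matrix, as in the short sketch below. The kernel is passed in as an argument, and the relation mu_j ≈ lambda_j / M between operator and Gram-matrix eigenvalues follows one reading of the Monte Carlo estimate of Eq. (4); both choices are assumptions rather than the authors' implementation.

```python
import numpy as np

def gram_eigensystem(samples, J, kernel):
    """Monte Carlo version of Eq. (4): (1/M) K psi ~ mu psi.

    Solves the eigenvalue problem of the Gram matrix K over M samples from q and
    keeps the J largest eigenvalues lambda_1 >= ... >= lambda_J with eigenvectors u_j."""
    M = samples.shape[0]
    K = kernel(samples, samples)                   # (M, M) Gram matrix
    eigvals, eigvecs = np.linalg.eigh(K)           # ascending order
    lam = eigvals[::-1][:J]                        # top-J eigenvalues of K
    U = eigvecs[:, ::-1][:, :J]                    # corresponding eigenvectors u_j
    mu = lam / M                                   # eigenvalues of the kernel operator
    return lam, U, mu
```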
The approximation for { ψj } j≥1 can be obtained combined with Eq . ( 4 ) as following : ψj ( x ) ≈ ψ̂j ( x ) =√ M λj ∑M m=1 ujmk ( x , x m ) . Furthermore , based on the orthonormality of { ψj } j≥1 , we can easily obtain βij = −Eq∇xiψj ( x ) . By taking derivative both sides of Eq . ( 4 ) , we can show that : µj∇xiψj ( x ) = ∇xi ∫ k ( x , y ) ψj ( y ) q ( y ) dy = ∫ ∇xik ( x , y ) ψj ( y ) q ( y ) dy . ( 5 ) Then we can estimate as following : ∇̂xiψj ( x ) ≈ 1 µjM M∑ m=1 ∇xik ( x , xm ) ψj ( xm ) . ( 6 ) Finally , by truncating the expansion to the first J terms and plugging in the Nyström approximations of { ψj } j≥1 , we can get the score estimator : ĝi ( x ) = J∑ j=1 β̂ijψ̂j ( x ) , β̂ij = − 1 M M∑ m=1 ∇xi ψ̂j ( xm ) . ( 7 ) In general , representation learning for large-scale datasets is usually costly in terms of storage and computation . For instance , the dimension of images in the STL-10 dataset is 96 × 96 × 3 ( i.e. , the vector length is 27648 ) . This makes it almost impossible to directly estimate the gradient of MI between the input and representation . To alleviate this problem , we introduce random projection ( RP ) ( Bingham & Mannila , 2001 ) to reduce the dimension of x . We briefly review RP . More details refer to Bingham & Mannila ( 2001 ) . RP projects the original d-dimensional data into a k-dimensional ( k < < d ) subspace . Concretely , let matrix Xd×N denotes the original set of N d-dimensional data , the projection of the original data XRPk×N is obtained by introducing a random matrix Rk×d whose columns have unit length , as follows ( Bingham & Mannila , 2001 ) , XRPk×N = Rk×dXd×N . After RP , the Euclidean distance between two original data vectors can be approximated by the Euclidean distance of the projective vectors in reduced spaces : ‖x1 − x2‖ ≈ √ d/k ‖Rx1 −Rx2‖ , ( 8 ) where x1 and x2 denote the two data vectors in the original large dimensional space . Based on the principle of RP , we can derive a Salable Spectral Stein Gradient Estimator , which is an efficient high-dimensional score function estimator . One can show that the RBF kernel satisfies Steins identity ( Liu & Wang , 2016 ) . Shi et al . ( 2018 ) also shows that it is a promising choice for SSGE with a lower error bound . To reduce the computation of the kernel similarities of SSGE in high-dimensional settings , we replace the input of SSGE with a projections obtained by RP according to the approximation of Eq . ( 8 ) for the computation of the RBF kernel .
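Putting these pieces together, the sketch below is one possible rendering of the scalable estimator: inputs are first reduced by a random projection with unit-length columns, the kernel computations are performed on the projected samples with the sqrt(d/k) correction of Eq. (8), and the score is assembled as ĝ_i(x) = Σ_j β̂_ij ψ̂_j(x). The finite-difference computation of β̂ is a simplification standing in for the analytic kernel derivative of Eq. (6), and all interface choices are assumptions rather than the authors' implementation.

```python
import numpy as np

def make_projection(d, k, seed=0):
    """Random matrix R (k x d) with unit-length columns (Bingham & Mannila, 2001)."""
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((k, d))
    return R / np.linalg.norm(R, axis=0, keepdims=True)

def rbf_kernel(X, Y, scale, sigma=1.0):
    """RBF kernel on projected inputs, with the sqrt(d/k) distance correction of Eq. (8)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1) * scale
    return np.exp(-d2 / (2.0 * sigma ** 2))

def scalable_ssge_score(x_query, samples, J, k, sigma=1.0):
    """Estimate g(x) = grad_x log q(x) at query points, in the k-dim projected space.

    samples: (M, d) i.i.d. draws from q; x_query: (Q, d). Both are projected to k
    dimensions before any kernel computation."""
    M, d = samples.shape
    R = make_projection(d, k)
    S, Xq = samples @ R.T, x_query @ R.T           # projected samples and queries
    scale = d / k

    K = rbf_kernel(S, S, scale, sigma)             # (M, M) Gram matrix
    eigvals, eigvecs = np.linalg.eigh(K)
    lam, U = eigvals[::-1][:J], eigvecs[:, ::-1][:, :J]

    def psi(Z):                                    # Nystrom extension, shape (|Z|, J)
        return np.sqrt(M) / lam * (rbf_kernel(Z, S, scale, sigma) @ U)

    # beta_ij = -(1/M) sum_m d/dx_i psi_j(x^m); central differences stand in for Eq. (6).
    eps = 1e-4
    beta = np.zeros((k, J))
    for i in range(k):
        e = np.zeros(k); e[i] = eps
        beta[i] = -np.mean((psi(S + e) - psi(S - e)) / (2 * eps), axis=0)

    return psi(Xq) @ beta.T                        # (Q, k): estimated score at each query
```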
This paper proposes MIGE---a novel estimator of the mutual information (MI) gradient, based on estimating the score function of an implicit distribution. To this end, the authors employ the spectral Stein gradient estimator (SSGE) and propose its scalable version based on random projections of the original input. The theoretical advantages of the method are presented using a toy experiment with correlated Gaussian random variables, where both the mutual information and its gradient can be computed analytically. In this setting, MIGE provides gradient estimates that are less biased and smoother than baselines. The method is also evaluated on two more complicated tasks: unsupervised representation learning on Cifar-10 and CIfar-100 via DeepInfoMax (DIM) and classification on MNIST with Information Bottleneck (IB), where MIGE outperforms all baselines by a significant margin.
SP:1e0fa3e10b19c54a0271b7cd2528ac8a3a51686a
This paper works out estimators for the gradient of Mutual Information (MI). The focus is on its recent popular use for representation learning. The insight the authors provide is to see encoding the representation as a ‘reparametrization’ of the data. This insight enables mathematical tools from the literature on ‘pathwise derivatives’. With gradients on the MI, one can estimate models that aim to maximize this quantity. For example in unsupervised learning one can learn representations for downstream tasks. This is shown in Table 1. Another application in supervised learning is the Information Bottleneck. This shown in Table 2.
SP:1e0fa3e10b19c54a0271b7cd2528ac8a3a51686a
Low Rank Training of Deep Neural Networks for Emerging Memory Technology
The recent success of neural networks for solving difficult decision tasks has incentivized incorporating smart decision making “ at the edge. ” However , this work has traditionally focused on neural network inference , rather than training , due to memory and compute limitations , especially in emerging non-volatile memory systems , where writes are energetically costly and reduce lifespan . Yet , the ability to train at the edge is becoming increasingly important as it enables real-time adaptability to device drift and environmental variation , user customization , and federated learning across devices . In this work , we address two key challenges for training on edge devices with non-volatile memory : low write density and low auxiliary memory . We present a low-rank training scheme that addresses these challenges while maintaining computational efficiency . We then demonstrate the technique on a representative convolutional neural network across several adaptation problems , where it out-performs standard SGD both in accuracy and in number of weight writes . 1 INTRODUCTION . Deep neural networks have shown remarkable performance on a variety of challenging inference tasks . As the energy efficiency of deep-learning inference accelerators improves , some models are now being deployed directly to edge devices to take advantage of increased privacy , reduced network bandwidth , and lower inference latency . Despite edge deployment , training happens predominately in the cloud . This limits the privacy advantages of running models on-device and results in static models that do not adapt to evolving data distributions in the field . Efforts aimed at on-device training address some of these challenges . Federated learning aims to keep data on-device by training models in a distributed fashion ( Konecný et al. , 2016 ) . On-device model customization has been achieved by techniques such as weight-imprinting ( Qi et al. , 2018 ) , or by retraining limited sets of layers . On-chip training has also been demonstrated for handling hardware imperfections ( Zhang et al. , 2017 ; Gonugondla et al. , 2018 ) . Despite this progress with small models , on-chip training of larger models is bottlenecked by the limited memory size and compute horsepower of edge processors . Emerging non-volatile ( NVM ) memories such as resistive random access memory ( RRAM ) have shown great promise for energy and area-efficient inference ( Yu , 2018 ) . However , on-chip training requires a large number of writes to the memory , and RRAM writes cost significantly more energy than reads ( e.g. , 10.9 pJ/bit versus 1.76 pJ/bit ( Wu et al. , 2019 ) ) . Additionally , RRAM endurance is on the order of 106 writes ( Grossi et al. , 2019 ) , shortening the lifetime of a device due to memory writes for on-chip training . In this paper , we present an online training scheme amenable to NVM memories to enable next generation edge devices . Our contributions are ( 1 ) an algorithm called Streaming Kronecker Sum Approximation ( SKS ) , and its analysis , which addresses the two key challenges of low write density and low auxiliary memory ; ( 2 ) two techniques “ gradient max-norm ” and “ streaming batch norm ” to help training specifically in the online setting ; ( 3 ) a suite of adaptation experiments to demonstrate the advantages of our approach . 2 RELATED WORK . Efficient training for resistive arrays . Several works have aimed at improving the efficiency of training algorithms on resistive arrays . 
Of the three weight-computations required in training ( forward , backprop , and weight update ) , weight updates are the hardest to parallelize using the array structure . Stochastic weight updates ( Gokmen & Vlasov , 2016 ) allow programming of all cells in a crossbar at once , as opposed to row/column-wise updating . Online Manhattan rule updating ( Zamanidoost et al. , 2015 ) can also be used to update all the weights at once . Several works have proposed new memory structures to improve the efficiency of training ( Soudry et al. , 2015 ; Ambrogio et al. , 2018 ) . The number of writes has also been quantified in the context of chip-in-the-loop training ( Yu et al. , 2016 ) . Distributed gradient descent . Distributed training in the data center is another problem that suffers from expensive weight updates . Here , the model is replicated onto many compute nodes and in each training iteration , the mini-batch is split across the nodes to compute gradients . The distributed gradients are then accumulated on a central node that computes the updated weights and broadcasts them . These systems can be limited by communication bandwidth , and compressed gradient techniques ( Aji & Heafield , 2017 ) have therefore been developed . In Lin et al . ( 2017 ) , the gradients are accumulated over multiple training iterations on each compute node and only gradients that exceed a threshold are communicated back to the central node . In the context of on-chip training with NVM , this method helps reduce the number of weight updates . However , the gradient accumulator requires as much memory as the weights themselves , which negates the density benefits of NVM . Low-Rank Training . Our work draws heavily from previous low-rank training schemes that have largely been developed for use in recurrent neural networks to uncouple the training memory requirements from the number of time steps inherent to the standard truncated backpropagation through time ( TBPTT ) training algorithm . Algorithms developed since then to address the memory problem include Real-Time Recurrent Learning ( RTRL ) ( Williams & Zipser , 1989 ) , Unbiased Online Recurrent Optimization ( UORO ) ( Tallec & Ollivier , 2017 ) , Kronecker Factored RTRL ( KF-RTRL ) ( Mujika et al. , 2018 ) , and Optimal Kronecker Sums ( OK ) ( Benzing et al. , 2019 ) . These latter few techniques rely on the weight gradients in a weight-vector product looking like a sum of outer products ( i.e. , Kronecker sums ) of input vectors with backpropagated errors . Instead of storing a growing number of these sums , they can be approximated with a low-rank representation involving fewer sums . 3 TRAINING NON-VOLATILE MEMORY . The meat of most deep learning systems are many weight matrix - activation vector productsW · a. Fully-connected ( dense ) layers use them explicitly : a [ ` ] = σ ( W [ ` ] a [ ` −1 ] + b [ ` ] ) for layer ` , where σ is a non-linear activation function ( more details are discussed in detail in Appendix C.1 ) . Recurrent neural networks use one or many matrix-vector products per recurrent cell . Convolutional layers can also be interpreted in terms of matrix-vector products by unrolling the input feature map into strided convolution-kernel-size slices . Then , each matrix-vector product takes one such input slice and maps it to all channels of the corresponding output pixel ( more details are discussed in Appendix C.2 ) . 
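The view of a convolution as many matrix-vector products can be made explicit with an im2col-style unrolling, sketched below: each strided patch of the input feature map is flattened into the activation vector a, and the flattened kernel acts as the weight matrix W. The shapes and the absence of padding are illustrative assumptions.

```python
import numpy as np

def conv_as_matvec(x, W_kernel, stride=1):
    """x: (C_in, H, W) input feature map; W_kernel: (C_out, C_in, kH, kW).
    Computes the (C_out, H_out, W_out) output via one matrix-vector product z = W a
    per output pixel, where a is the unrolled input slice for that pixel."""
    C_in, H, W = x.shape
    C_out, _, kH, kW = W_kernel.shape
    H_out, W_out = (H - kH) // stride + 1, (W - kW) // stride + 1

    Wmat = W_kernel.reshape(C_out, C_in * kH * kW)   # weight matrix: no = C_out, ni = C_in*kH*kW
    out = np.zeros((C_out, H_out, W_out))
    for i in range(H_out):
        for j in range(W_out):
            patch = x[:, i*stride:i*stride+kH, j*stride:j*stride+kW]
            a = patch.reshape(-1)                     # unrolled, strided input slice
            out[:, i, j] = Wmat @ a                   # all output channels of this pixel
    return out
```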
The ubiquity of matrix-vector products allows us to adapt the techniques discussed in “ Low-Rank Training ” of Section 2 to other network architectures . Instead of reducing the memory across time steps , we can reduce the memory across training samples in the case of a traditional feedforward neural network . However , in traditional training ( e.g. , on a GPU ) , this technique does not confer advantages . Traditional training platforms often have ample memory to store a batch of activations and backpropagated gradients , and the weight updates ∆W can be applied directly to the weights W once they are computed , allowing temporary activation memory to be deleted . The benefits of low-rank training only become apparent when looking at the challenges of proposed NVM devices : Low write density ( LWD ) . In NVM , writing to weights at every sample is costly in energy , time , and endurance . These concerns are exacerbated in multilevel cells , which require several steps of an iterative write-verify cycle to program the desired level . We therefore want to minimize the number of writes to NVM . Low auxiliary memory ( LAM ) . NVM is the densest form of memory . In 40nm technology , RRAM 1T-1R bitcells @ 0.085 um^2 ( Chou et al. , 2018 ) are 2.8x smaller than 6T SRAM cells @ 0.242 um^2 ( TSMC , 2019 ) . Therefore , NVM should be used to store the memory-intensive weights . By the same token , no other on-chip memory should come close to the size of the on-chip NVM . In particular , if our b-bit NVM stores a weight matrix of size n_o × n_i , we should use at most r ( n_i + n_o ) b bits of auxiliary non-NVM memory , where r is a small constant . Despite these space limitations , the reason we might opt to use auxiliary ( large , high-endurance , low-energy ) memory is that there are places where writes are frequent , which would violate LWD if we were to use NVM . In the traditional minibatch SGD setting with batch size B , an upper limit on the write density per cell per sample is easily seen : 1/B . However , storing such a batch of updates without intermediate writes to NVM would require auxiliary memory proportional to B . Therefore , a trade-off becomes apparent . If B is reduced , LAM is satisfied at the cost of LWD . If B is raised , LWD is satisfied at the cost of LAM . Using low-rank training techniques , the auxiliary memory requirements are decoupled from the batch size , allowing us to increase B while satisfying both LWD and LAM . Additionally , because the low-rank representation uses so little memory , a larger bitwidth can be used , potentially allowing for gradient accumulation in a way that is not possible with low-bitwidth NVM weights . In the next section , we elaborate on the low-rank training method . 4 LOW-RANK TRAINING METHOD . Let z^(i) = W a^(i) + b be the standard affine transformation building block of some larger network , e.g. , y_p^(i) = f_post ( z^(i) ) and a^(i) = f_pre ( x^(i) ) with prediction loss L ( y_p^(i) , y_t^(i) ) , where ( x^(i) , y_t^(i) ) is the i-th training sample pair . Then the weight gradient is ∇_W L^(i) = dz^(i) ( a^(i) )^T = dz^(i) ⊗ a^(i) , where dz^(i) = ∇_{z^(i)} L^(i) . A minibatch SGD weight update accumulates this gradient over B samples : ∆W = −η ∑_{i=1}^{B} dz^(i) ⊗ a^(i) for learning rate η . For a rank-r training scheme , approximate the sum ∑_{i=1}^{B} dz^(i) ⊗ a^(i) by iteratively updating two rank-r matrices L̃ ∈ R^{n_o×r} , R̃ ∈ R^{n_i×r} with each new outer product : L̃R̃^T ← rankReduce ( L̃R̃^T + dz^(i) ⊗ a^(i) ) .
Therefore , at each sample , we convert the rank-( q = r + 1 ) system L̃R̃^T + dz^(i) ⊗ a^(i) into the rank-r L̃R̃^T . In the next sections , we discuss how to compute rankReduce . 4.1 OPTIMAL KRONECKER SUM APPROXIMATION ( OK ) . One option for rankReduce ( X ) , to convert X from rank q = r + 1 to rank r , is a minimum-error estimator , implemented by selecting the top r components of a singular value decomposition ( SVD ) of X . However , a naïve implementation is computationally infeasible and biased : E [ rankReduce ( X ) ] ≠ X . Benzing et al . ( 2019 ) solve these problems by proposing a minimum-variance unbiased estimator for rankReduce , which they call the OK algorithm . The OK algorithm can be understood in two key steps : first , an efficient method of computing the SVD of a Kronecker sum ; second , a method of splitting the singular value matrix Σ into two rank-r matrices whose outer product is a minimum-variance , unbiased estimate of Σ . Details can be found in their paper ; however , we include a high-level explanation in Sections 4.1.1 and 4.1.2 to aid our discussion . Note that our variable notation differs from that of Benzing et al . ( 2019 ) .
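As a reference point for this discussion , the sketch below shows the streaming rank-r accumulation with the simple minimum-error variant of rankReduce ( top-r SVD truncation , computed on the small ( r+1 ) × ( r+1 ) core ) . As noted above , plain truncation is biased ; the OK algorithm instead splits the singular values into a minimum-variance unbiased estimate . The function names , the use of NumPy , and the zero initialization are our own simplifications , not the authors' implementation :

import numpy as np

def rank_reduce_svd(L, R, dz, a, r):
    # Fold one outer product dz ⊗ a into the rank-r factors (L, R) by
    # top-r SVD truncation (minimum error, but biased; OK replaces this
    # step with an unbiased minimum-variance split of the singular values).
    L_aug = np.column_stack([L, dz])           # (n_out, r+1)
    R_aug = np.column_stack([R, a])            # (n_in,  r+1)
    Ql, Sl = np.linalg.qr(L_aug)               # work on the small (r+1)x(r+1) core
    Qr, Sr = np.linalg.qr(R_aug)
    U, s, Vt = np.linalg.svd(Sl @ Sr.T)
    U, s, Vt = U[:, :r], s[:r], Vt[:r, :]      # keep the top r components
    return Ql @ (U * s), Qr @ Vt.T             # new L (n_out, r), new R (n_in, r)

def accumulate_minibatch(dzs, activations, r):
    # Stream B outer products into a rank-r approximation of sum_i dz_i a_i^T.
    n_out, n_in = dzs[0].shape[0], activations[0].shape[0]
    L, R = np.zeros((n_out, r)), np.zeros((n_in, r))
    for dz, a in zip(dzs, activations):
        L, R = rank_reduce_svd(L, R, dz, a, r)
    return L, R   # the weight update -lr * (L @ R.T) is then written to NVM once per batch

Only the factors L and R ( r ( n_i + n_o ) values ) ever live in auxiliary memory , which is what decouples the memory cost from the batch size B .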
While inference on edge devices is a popular and well-studied problem, training on these devices comes with many challenges. This paper proposes a low-rank training scheme that helps mitigate some of the critical challenges of training models on NVM-based edge devices. Additionally, two techniques, namely streaming batch norm and gradient max-norm, are proposed to help training in an online setting. The proposed method is mainly based on approximating the Kronecker sum and is largely inspired by Benzing et al. (ICML 2019, "Optimal Kronecker-sum approximation of real time recurrent learning"). The proposed approach introduces a few optimizations that improve performance further, and outperforms SGD in terms of accuracy and the number of weight updates in a limited experimental setting.
SP:f949cb0fd9e1b4afc31725a740ef87dd2d5d5a49
Low Rank Training of Deep Neural Networks for Emerging Memory Technology
This paper proposes a low-rank training method called Streaming Kronecker Sum Approximation (SKS) for training low-precision models on edge devices. The authors compare their method to SGD for convolutional networks on MNIST and demonstrate improvements in terms of accuracy. They build on the Optimal Kronecker-sum (OK) algorithm of Benzing et al. and propose further improvements to it in the form of the SKS algorithm.
SP:f949cb0fd9e1b4afc31725a740ef87dd2d5d5a49
Hindsight Trust Region Policy Optimization
1 INTRODUCTION . Reinforcement learning has been applied to a great many real-world problems , from playing complex strategic games ( Mnih et al. , 2015 ; Silver et al. , 2016 ; Justesen et al. , 2019 ) to the precise control of robots ( Levine et al. , 2016 ; Mahler & Goldberg , 2017 ; Quillen et al. , 2018 ) , in which policy gradient methods play very important roles ( Sutton et al. , 2000 ; Deisenroth et al. , 2013 ) . Among them , the ones based on trust regions , including Trust Region Policy Optimization ( Schulman et al. , 2015a ) and Proximal Policy Optimization ( Schulman et al. , 2017 ) , have achieved stable and effective performance on several benchmark tasks . Later on , they have been verified in a variety of applications including skill learning ( Nagabandi et al. , 2018 ) , multi-agent control ( Gupta et al. , 2017 ) , imitation learning ( Ho et al. , 2016 ) , and have been investigated further in combination with more advanced techniques ( Nachum et al. , 2017 ; Houthooft et al. , 2016 ; Heess et al. , 2017 ) . One unresolved core issue in reinforcement learning is efficiently training the agent in sparse reward environments , in which the agent receives distinctively high feedback only upon reaching the desired final goal state . On one hand , generalizing reinforcement learning methods to sparse reward scenarios obviates designing a delicate reward mechanism , which is known as reward shaping ( Ng et al. , 1999 ) ; on the other hand , receiving rewards only when precisely reaching the final goal states also guarantees that the agent can focus on the intended task itself without any deviation . Despite the extensive use of policy gradient methods , they tend to be vulnerable when dealing with sparse reward scenarios . Admittedly , policy gradient may work in simple and sufficiently rewarding environments through massive random exploration . However , since it relies heavily on the expected return , the chances of success in complex and sparsely rewarding scenarios become rather slim , which often makes it infeasible to converge to a policy by exploring randomly . Recently , several works have been devoted to solving the problem of sparse reward , mainly applying either hierarchical reinforcement learning ( Kulkarni et al. , 2016 ; Vezhnevets et al. , 2017 ; Le et al. , 2018 ; Marino et al. , 2019 ) or a hindsight methodology , including Hindsight Experience Replay ( Andrychowicz et al. , 2017 ) , Hindsight Policy Gradient ( Rauber et al. , 2019 ) and their extensions ( Fang et al. , 2019 ; Levy et al. , 2019 ) . The idea of Hindsight Experience Replay ( HER ) is to regard the ending states obtained through interaction under the current policy as alternative goals , and thereby generate more effective training data compared to using only the real goals . Such augmentation overcomes the defects of random exploration and allows the agent to progressively move towards the intended goals . It has proven promising when dealing with sparse reward reinforcement learning problems . Hindsight Policy Gradient ( HPG ) introduces hindsight to the policy gradient approach and improves sample efficiency in sparse reward environments . Yet , its learning curve for the policy update still oscillates considerably , because it inherits the intrinsic high variance of policy gradient methods , which has been widely studied in Schulman et al . ( 2015b ) , Gu et al . ( 2016 ) and Wu et al . ( 2018 ) .
Furthermore , introducing hindsight to policy gradient methods would lead to even greater variance ( Rauber et al. , 2019 ) . Consequently , such exacerbation would cause obstructive instability during the optimization process . In designing an advanced and efficient on-policy reinforcement learning algorithm with hindsight experience , the main problem is the contradiction between the on-policy data needed by the training process and the severely off-policy hindsight experience we can actually obtain . Moreover , for TRPO , one of the most significant properties is its approximately monotonic policy improvement . Therefore , how these advantages can be preserved when the agent is trained with hindsight data also remains unsolved . In this paper , we propose a methodology called Hindsight Trust Region Policy Optimization ( HTRPO ) . Starting from TRPO , a hindsight form of the trust region policy optimization problem is theoretically derived , which can be approximately solved with a Monte Carlo estimator using severely off-policy hindsight experience data . HTRPO extends the effective and monotonically improving iterative policy optimization procedure within the trust region to accommodate sparse reward environments . In HTRPO , both the objective function and the expectation of the KL divergence between policies are estimated using generated hindsight data instead of on-policy data . To overcome the high variance and instability in KL divergence estimation , another f-divergence is applied to approximate the KL divergence , and it is shown , both theoretically and empirically , to be more efficient and stable . We demonstrate that on several benchmark tasks , HTRPO can significantly improve performance and sample efficiency in sparse reward scenarios while maintaining learning stability . From the experiments , we illustrate that HTRPO can be neatly applied not only to simple discrete tasks but to continuous environments as well . Besides , it is verified that HTRPO generalizes to different hyperparameter settings with little impact on performance . 2 PRELIMINARIES . Reinforcement Learning Formulation and Notation . Consider the standard infinite-horizon reinforcement learning formulation , which can be defined by the tuple ( S , A , π , ρ_0 , r , γ ) . S represents the set of states and A denotes the set of actions . π : S → P ( A ) is a policy that represents an agent ’ s behavior by mapping states to a probability distribution over actions . ρ_0 denotes the distribution of the initial state s_0 . The reward function r : S → R defines the reward obtained from the environment and γ ∈ ( 0 , 1 ) is a discount factor . In this paper , the policy is a differentiable function with respect to the parameter θ . We follow the standard formalism of the state-action value function Q ( s , a ) , the state value function V ( s ) and the advantage function A ( s , a ) in Sutton & Barto ( 2018 ) . We also adopt the definition of the γ-discounted state visitation distribution ρ_θ ( s ) = ( 1 − γ ) ∑_{t=0}^{∞} γ^t P ( s_t = s ) ( Ho et al. , 2016 ) , in which the coefficient 1 − γ is added to keep the integral of ρ_θ ( s ) equal to 1 . Correspondingly , the γ-discounted state-action visitation distribution ( Ho et al. , 2016 ) , also known as the occupancy measure ( Ho & Ermon , 2016 ) , is defined as ρ_θ ( s , a ) = ρ_θ ( s ) × π_θ ( a|s ) , in which π_θ ( a|s ) stands for the policy under parameter θ . Trust Region Policy Optimization ( TRPO ) . Schulman et al .
( 2015a ) propose an iterative trust region method that effectively optimizes the policy by maximizing the per-iteration policy improvement . The optimization problem proposed in TRPO can be formalized as follows : max_θ L_TRPO ( θ ) ( 1 ) s.t . E_{s∼ρ_θ̃ ( s )} [ D_KL ( π_θ̃ ( a|s ) || π_θ ( a|s ) ) ] ≤ ε ( 2 ) in which ρ_θ̃ ( s ) = ∑_{t=0}^{∞} γ^t P ( s_t = s ) . θ denotes the parameter of the new policy while θ̃ is that of the old one . A trajectory is represented by τ = s_1 , a_1 , s_2 , a_2 , ... . The objective function L_TRPO ( θ ) can be given in the form of an expected return : L_TRPO ( θ ) = E_{s , a∼ρ_θ̃ ( s , a )} [ ( π_θ ( a|s ) / π_θ̃ ( a|s ) ) A_θ̃ ( s , a ) ] ( 3 ) Hindsight Policy Gradient ( HPG ) . After generalizing the concept of hindsight , Rauber et al . ( 2019 ) combine the idea with policy gradient methods . Though goal-conditioned reinforcement learning has been explored for a long time and actively investigated in recent works ( Peters & Schaal , 2008 ; Schaul et al. , 2015 ; Andrychowicz et al. , 2017 ; Nachum et al. , 2018 ; Held et al. , 2018 ; Nair et al. , 2018 ; Veeriah et al. , 2018 ) , HPG first extends the idea of hindsight to the goal-conditioned policy gradient and shows that the policy gradient can be computed in expectation over all goals . The goal-conditioned policy gradient is derived as follows : ∇_θ η ( θ ) = E_g [ E_{τ∼p_θ ( τ |g )} [ ∑_{t=1}^{T−1} ∇_θ log π_θ ( a_t | s_t , g ) A_θ ( s_t , a_t , g ) ] ] ( 4 ) Then , by applying the hindsight formulation , it rewrites the goal-conditioned policy gradient with trajectories conditioned on some other goal g′ using importance sampling ( Bishop , 2016 ) to improve sample efficiency in sparse-reward scenarios . In this paper , we propose an approach that introduces the idea of hindsight to TRPO , called Hindsight Trust Region Policy Optimization ( HTRPO ) , aiming to further improve policy performance and sample efficiency for reinforcement learning with sparse rewards . In Section 3 and Section 4 , we demonstrate how to redesign the objective function and the constraints starting from TRPO , respectively . 3 EXPECTED RETURN AND POLICY GRADIENTS OF HTRPO . In order to apply the hindsight methodology , this section presents the main steps of the derivation of the HTRPO objective function . Starting from the original optimization problem in TRPO , the objective function can be written in the following variant form : L_θ̃ ( θ ) = E_{τ∼p_θ̃ ( τ )} [ ∑_{t=0}^{∞} γ^t ( π_θ ( a_t|s_t ) / π_θ̃ ( a_t|s_t ) ) A_θ̃ ( s_t , a_t ) ] ( 5 ) The derivation of this variant form is shown explicitly in Appendix A.1 and in Schulman et al . ( 2015a ) . Given the expression above , we consider the goal-conditioned objective function of TRPO as a premise for the hindsight formulation . Similar to equation 4 , L_θ̃ ( θ ) can correspondingly be given in the following form : L_θ̃ ( θ ) = E_g [ E_{τ∼p_θ̃ ( τ |g )} [ ∑_{t=0}^{∞} γ^t ( π_θ ( a_t|s_t , g ) / π_θ̃ ( a_t|s_t , g ) ) A_θ̃ ( s_t , a_t , g ) ] ] ( 6 ) For the record , though equation 6 may seem to enable off-policy learning , it can be used as the objective only when the policy π_θ is close to the old policy π_θ̃ , i.e. , within the trust region . Using severely off-policy data like hindsight experience will make the learning process diverge . Therefore , importance sampling needs to be integrated to correct the change in the trajectory distribution caused by changing the goal .
Based on the goal-conditioned form of the objective function , the following theorem gives the hindsight objective function conditioned on some goal g′ , with the distribution correction derived from importance sampling . Theorem 3.1 ( HTRPO Objective Function ) . For the original goal g and an alternative goal g′ , the objective function of HTRPO , L_θ̃ ( θ ) , is given by : L_θ̃ ( θ ) = E_{g′} [ E_{τ∼p_θ̃ ( τ |g )} [ ∑_{t=0}^{∞} ( ∏_{k=1}^{t} π_θ̃ ( a_k|s_k , g′ ) / π_θ̃ ( a_k|s_k , g ) ) γ^t ( π_θ ( a_t|s_t , g′ ) / π_θ̃ ( a_t|s_t , g′ ) ) A_θ̃ ( s_t , a_t , g′ ) ] ] , ( 7 ) in which τ = s_1 , a_1 , s_2 , a_2 , ... , s_t , a_t . Appendix A.2 presents an explicit proof of how the hindsight-form objective function is derived from equation 6 . It will be solved under a KL divergence expectation constraint , which will be discussed in detail in Section 4 . Intuitively , equation 7 provides a way to compute the expected return , in terms of the advantage , from hindsight experiences conditioned on new goals but generated from interactions directed by the old goals . Naturally , Theorem 3.2 gives the gradient of the HTRPO objective function that will be applied to solve the optimization problem . Detailed steps for computing the gradient are presented in Appendix A.3 . Theorem 3.2 ( Gradient of HTRPO Objective Function ) . For the original goal g and an alternative goal g′ , the gradient ∇_θ L_θ̃ ( θ ) of the HTRPO objective function with respect to θ is given by the following expression : ∇_θ L_θ̃ ( θ ) = E_{g′} [ E_{τ∼p_θ̃ ( τ |g )} [ ∑_{t=0}^{∞} ( ∏_{k=1}^{t} π_θ̃ ( a_k|s_k , g′ ) / π_θ̃ ( a_k|s_k , g ) ) γ^t ( ∇_θ π_θ ( a_t|s_t , g′ ) / π_θ̃ ( a_t|s_t , g′ ) ) A_θ̃ ( s_t , a_t , g′ ) ] ] , ( 8 ) in which τ = s_1 , a_1 , s_2 , a_2 , ... , s_t , a_t .
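Purely as an illustration of how the weights in Theorem 3.1 could be assembled from logged trajectories , here is a minimal sketch ; the function name , the assumption that per-goal log-probabilities and advantage estimates have already been computed , and the finite-horizon truncation are ours , not the paper's :

import numpy as np

def htrpo_surrogate_terms(logp_old_g, logp_old_gp, logp_new_gp, adv_gp, gamma):
    # Per-timestep terms of the hindsight surrogate for one trajectory and one
    # alternative goal g'. All inputs are 1-D arrays of length T:
    #   logp_old_g  : log pi_old(a_t | s_t, g)   under the original goal g
    #   logp_old_gp : log pi_old(a_t | s_t, g')  under the alternative goal g'
    #   logp_new_gp : log pi_new(a_t | s_t, g')  for the policy being optimized
    #   adv_gp      : advantage estimates A_old(s_t, a_t, g')
    T = len(adv_gp)
    # Cumulative product of pi_old(.|g') / pi_old(.|g) over steps k <= t,
    # computed in log space for numerical stability.
    log_traj_ratio = np.cumsum(logp_old_gp - logp_old_g)
    # Per-step ratio pi_new(.|g') / pi_old(.|g') from the TRPO-style surrogate.
    step_ratio = np.exp(logp_new_gp - logp_old_gp)
    discount = gamma ** np.arange(T)
    return np.exp(log_traj_ratio) * discount * step_ratio * adv_gp

Summing these terms and averaging over trajectories and sampled alternative goals g′ gives a Monte Carlo estimate of equation 7 ; in practice the trajectory ratios can become very small or very large , which is part of the variance issue the paper later addresses with its f-divergence constraint .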
This paper augments the TRPO policy optimization objective with hindsight data, where the hindsight data are generated from alternative goals based on achieved trajectories. The key contribution of the paper is the derivation of an on-policy adaptation of hindsight-based TRPO that can be useful for sparse reward environments. The paper draws ideas from existing work such as HPG and considers an importance-sampling-based variant of HPG in an on-policy, TRPO-like setting that can achieve monotonic performance improvements. Furthermore, the authors introduce a logarithmic form of the constraint by re-deriving the KL constraint, leading to an f-divergence-based constraint, which is argued to be useful in terms of lowering the variance. Experimental results are compared with baselines including HPG and its variants on standard sparse reward benchmark tasks.
SP:f150b1d2a3ad4d9614e1ef434ef18d742ee78e47
Hindsight Trust Region Policy Optimization
The paper builds on top of prior work in hindsight policy gradients (Rauber et al.) and trust region policy optimization (Schulman et al.), proposing a hindsight trust region policy optimization. Conceptually this direction makes a lot of sense, since hindsight has in general been shown to be useful when training goal-conditioned policies. The formulation generally appears to be sound and is a straightforward extension of the importance sampling techniques from Rauber et al. to the TRPO setting. Experimental results show that the proposed changes bring significant improvements over baselines in sparse reward settings.
SP:f150b1d2a3ad4d9614e1ef434ef18d742ee78e47
Deep RL for Blood Glucose Control: Lessons, Challenges, and Opportunities
Individuals with type 1 diabetes ( T1D ) lack the ability to produce the insulin their bodies need . As a result , they must continually make decisions about how much insulin to self-administer in order to adequately control their blood glucose levels . Longitudinal data streams captured from wearables , like continuous glucose monitors , can help these individuals manage their health , but currently the majority of the decision burden remains on the user . To relieve this burden , researchers are working on closed-loop solutions that combine a continuous glucose monitor and an insulin pump with a control algorithm in an ‘ artificial pancreas. ’ Such systems aim to estimate and deliver the appropriate amount of insulin . Here , we develop reinforcement learning ( RL ) techniques for automated blood glucose control . Through a series of experiments , we compare the performance of different deep RL approaches to non-RL approaches . We highlight the flexibility of RL approaches , demonstrating how they can adapt to new individuals with little additional data . On over 21k hours of simulated data across 30 patients , RL approaches outperform baseline control algorithms ( increasing time spent in the normal glucose range from 71 % to 75 % ) without requiring meal announcements . Moreover , these approaches are adept at leveraging latent behavioral patterns ( increasing time in range from 58 % to 70 % ) . This work demonstrates the potential of deep RL for controlling complex physiological systems with minimal expert knowledge . 1 INTRODUCTION . Type 1 diabetes ( T1D ) is a chronic disease affecting 20-40 million people worldwide ( You & Henneberg , 2016 ) , and its incidence is increasing ( Tuomilehto , 2013 ) . People with T1D cannot produce insulin , a hormone that signals cells to uptake glucose in the bloodstream . Without insulin , the body must metabolize energy in other ways that , when relied on repeatedly , can lead to life-threatening conditions ( Kerl , 2001 ) . Tight glucose control improves both short- and long-term outcomes for people with diabetes , but can be difficult to achieve in practice ( Diabetes Control and Complications Trial Research Group , 1995 ) . Typically , blood glucose is controlled by a combination of basal insulin ( to control baseline blood glucose levels ) and bolus insulin ( to control glucose spikes after meals ) . To control blood glucose levels , individuals with T1D must continually make decisions about how much basal and bolus insulin to self-administer . This requires careful measurement of glucose levels and carbohydrate intake , resulting in at least 15-17 data points a day . If the individual uses a continuous glucose monitor ( CGM ) , this can increase to over 300 data points , or a blood glucose reading every 5 minutes ( Coffen & Dahlquist , 2009 ) . Combined with an insulin pump , a wearable device that automates the delivery of insulin , CGMs present an opportunity for closed-loop control . Such a system , known as an ‘ artificial pancreas ’ ( AP ) , automatically anticipates the amount of required insulin and delivers the appropriate dose . This would be life-changing for individuals with T1D . For many years , researchers have worked towards the creation of an AP for blood glucose control ( Kadish , 1964 ; Bequette , 2005 ; Bothe et al. , 2013 ) . Though the technology behind CGMs and insulin pumps has advanced , there remains significant room for improvement when it comes to the control algorithms ( Bothe et al. , 2013 ; Pinsker et al. , 2016 ) .
Current approaches often fail to maintain sufficiently tight glucose control and require meal announcements . In this work , we investigate the utility of a deep reinforcement learning ( RL ) based approach for blood glucose control ( Bothe et al. , 2013 ) . Deep RL is particularly well-suited for this task because it : i ) makes minimal assumptions about the structure of the underlying process , allowing the same system to adapt to different individuals or to changes in individuals over time , ii ) can learn to leverage latent patterns such as regular meal times , and iii ) scales well in the presence of large amounts of training data . Finally , it can take advantage of existing FDA-approved simulators for model training . Despite these potential benefits , we are not aware of any previously published work that has rigorously explored the feasibility of deep RL for blood glucose control . While the opportunities for learning an AP algorithm using deep RL are clear , there are numerous challenges in applying standard techniques to this domain . First , there is a significant delay between actions and outcomes ; insulin can affect glucose levels hours after administration and this effect can vary significantly across individuals . Without encoding knowledge of patient-specific insulin dynamics , learning the long-term impact of insulin is challenging . Second , compared to tasks that rely on a visual input or are given ground truth state , this task must rely on a noisy observed signal that requires significant temporal context to accurately interpret . Third , because of fluctuations throughout the day and even the week , tight blood glucose control requires small changes in insulin during the day , in addition to large doses of insulin to control glucose spikes . Fourth , unlike game settings where one might have the ability to learn from hundreds of thousands of hours of gameplay , to be practical , any learning approach to blood glucose control must be able to achieve strong performance using only a limited number of days of patient-specific data . Finally , controlling blood glucose levels is a safety-critical application . This sets the bar high from an evaluation perspective . It is unsafe to deploy a system without a human-in-the-loop if there is even a small probability of failure . Given these challenges , this task represents a significant departure from deep RL baselines . Achieving strong performance in this task requires numerous careful design decisions . In this paper , we make significant progress in this regard , presenting the first deep RL approach that surpasses human-level performance in controlling blood glucose without requiring meal announcements . More specifically , we : • present an input representation that carefully balances encoding action history and recent changes in our state space , • propose a patient-specific action space that is amenable to both small and large fluctuations of insulin , • introduce an augmented reward function , designed to balance the risk of hypo- and hyper- glycemia while drastically penalizing unsafe performance , • rigorously test the ability of a recurrent architecture to learn from the noisy input , and • demonstrate how policies can transfer across individuals , dramatically lowering the amount of data required to achieve strong performance while improving safety . Further , we build on an open-source simulator and make all of our code publicly available 1 . 
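The augmented reward function itself is only summarized above ; purely as an illustration of the kind of asymmetric shaping involved , here is a minimal sketch that penalizes hypoglycemia more sharply than hyperglycemia and heavily penalizes clearly unsafe readings . The 70-180 mg/dL target range is the standard clinical time-in-range definition , but the functional form , weights , and cut-offs below are hypothetical assumptions , not the authors' reward :

def glucose_reward(bg_mgdl, lo=70.0, hi=180.0, hypo_weight=3.0, catastrophic=-1e4):
    # Hypothetical asymmetric reward for one blood-glucose reading (mg/dL):
    # zero inside the target range, quadratic penalties outside it with
    # hypoglycemia weighted more heavily, and a large fixed penalty for
    # readings treated as terminally unsafe.
    if bg_mgdl < 40.0 or bg_mgdl > 400.0:   # treat as an unsafe, episode-ending state
        return catastrophic
    if bg_mgdl < lo:                        # hypoglycemia: penalized more steeply
        return -hypo_weight * (lo - bg_mgdl) ** 2
    if bg_mgdl > hi:                        # hyperglycemia
        return -(bg_mgdl - hi) ** 2
    return 0.0                              # in range

The heavier weight on the hypoglycemic side reflects the clinical asymmetry between acutely dangerous low glucose and the longer-horizon harms of high glucose .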
This work can help to build the foundation of a new , tractable , and societally important benchmark for the RL community . 2 BACKGROUND AND RELATED WORKS . In recent years , researchers have started to explore RL in healthcare . Examples include matching patients to treatments in the management of sepsis ( Weng et al. , 2017 ; Komorowski et al. , 2018 ) and mechanical ventilation ( Prasad et al. , 2017 ) . In addition , RL has been explored to provide contextual suggestions for behavioral modifications ( Klasnja et al. , 2019 ) . Despite its success in other problem settings , RL has yet to be fully explored as a solution for a closed-loop AP system ( Bothe et al. , 2013 ) . RL is a promising solution to this problem , as it is well-suited to learning complex behavior that readily adapts to changing domains ( Clavera et al. , 2018 ) . Moreover , unlike many other disease settings , there exist credible simulators for the glucoregulatory system ( Visentin et al. , 2014 ) . The presence of a credible simulator alleviates many common concerns about RL applied to problems in health ( Gottesman et al. , 2019 ) . ( Footnote 1 : currently hosted at https://tinyurl.com/y6e2m68b ; after review , a formal code release will be made available on the authors' GitHub account . ) 2.1 CURRENT AP ALGORITHMS AND RL FOR BLOOD GLUCOSE CONTROL . Among recent commercial AP products , proportional-integral-derivative ( PID ) control is one of the most common backbones ( Trevitt et al. , 2015 ) . The simplicity of PID controllers makes them easy to use , and in practice they achieve strong results . For example , the Medtronic Hybrid Closed-Loop system , one of the few commercially available , is built on a PID controller ( Garg et al. , 2017 ; Ruiz et al. , 2012 ) . In this setting , a hybrid closed-loop controller automatically adjusts basal insulin rates , but still requires human-directed insulin boluses to adjust for meals . The main weakness of PID controllers , in the setting of blood glucose control , is their reactivity . As they only respond to current glucose values ( including a derivative ) , they often cannot respond fast enough to meals to satisfactorily control postprandial excursions without meal announcements ( Garg et al. , 2017 ) . And , without additional safety modifications , they can overcorrect for these spikes , triggering postprandial hypoglycemia ( Ruiz et al. , 2012 ) . In contrast , we hypothesize that an RL approach will be able to leverage patterns associated with meal times , resulting in better policies that do not require meal announcements . Moreover , such approaches can take advantage of existing simulators for training and evaluation ( described in more detail later ) . Previous work has examined the use of RL for different aspects of blood glucose control . Weng et al . ( 2017 ) use RL to learn policies that set blood glucose targets for septic patients , but do not learn policies to achieve these targets . Several recent works have investigated the use of RL to adapt existing insulin treatment regimes ( Ngo et al. , 2018 ; Oroojeni Mohammad Javad et al. , 2015 ; Sun et al. , 2018 ) . In contrast to our setting , in which we aim to learn a closed-loop control policy , this work has focused on a human-in-the-loop setting , in which the goal is to learn optimal correction factors and carbohydrate ratios that can be used in the calculation of boluses . Most similar to our own work , De Paula et al . ( 2015 ) develop a kernelized Q-learning framework for closed-loop glucose control .
They make use of Bayesian active learning for on-the-fly personalization . This work tackles a similar problem to our own , but uses a simple two-compartment model for the glucoregulatory system and a fully deterministic meal routine . In our simulation environment , we found that such a Q-learning approach did not lead to satisfactory closed-loop performance , and we instead examine deep actor-critic algorithms for continuous control . 2.2 GLUCOSE MODELS AND SIMULATION . Models of the glucoregulatory system have long been important to the development and testing of an AP ( Cobelli et al. , 1982 ) . Current models are based on a combination of rigorous experimentation and expert knowledge of the underlying physiological phenomena . Typical models consist of multiple compartments , with various sources and sinks corresponding to physiological phenomena , often involving dozens of patient-specific parameters . One such simulator , the one we use in our experiments , is the UVA/Padova model ( Kovatchev et al. , 2009 ) . Briefly , this simulator models the glucoregulatory system as a nonlinear multi-compartment system , where glucose is generated through the liver , absorbed through the gut , and controlled by externally administered insulin . A more detailed explanation can be found in ( Kovatchev et al. , 2009 ) . We use an open-source version of the UVA/Padova simulator that comes with 30 virtual patients , each of which is defined by several dozen parameters fully specifying the glucoregulatory system ( Xie , 2018 ) . The patients are divided into three classes : children , adolescents , and adults , each with 10 patients . While the simulator we use includes only 10 patients per class , there is a wide range of patient types within each class , with ages ranging from 7-68 years and recommended daily insulin from 16 units to over 60 .
The paper describes an RL-based approach to administering insulin for blood glucose control among type-1 diabetic patients. The paper formulates this blood glucose control problem as a closed-loop reinforcement learning problem and demonstrates its effectiveness on data generated from an FDA-approved simulator of the glucoregulatory system. Compared to existing approaches, the proposed method can operate without meal announcements by potentially making use of latent meal intake patterns. The authors also demonstrate how a learned policy for one particular subject can be used as an initialization to train/fine-tune the policy of another subject so as to combat the issue of high sample complexity.
SP:33386a5a96124115a197a16ea1ca2f2ba326c34d
Deep RL for Blood Glucose Control: Lessons, Challenges, and Opportunities
Individuals with type 1 diabetes ( T1D ) lack the ability to produce the insulin their bodies need . As a result , they must continually make decisions about how much insulin to self-administer in order to adequately control their blood glucose levels . Longitudinal data streams captured from wearables , like continuous glucose monitors , can help these individuals manage their health , but currently the majority of the decision burden remains on the user . To relieve this burden , researchers are working on closed-loop solutions that combine a continuous glucose monitor and an insulin pump with a control algorithm in an ‘ artificial pancreas. ’ Such systems aim to estimate and deliver the appropriate amount of insulin . Here , we develop reinforcement learning ( RL ) techniques for automated blood glucose control . Through a series of experiments , we compare the performance of different deep RL approaches to non-RL approaches . We highlight the flexibility of RL approaches , demonstrating how they can adapt to new individuals with little additional data . On over 21k hours of simulated data across 30 patients , RL approaches outperform baseline control algorithms ( increasing time spent in normal glucose range from 71 % to 75 % ) without requiring meal announcements . Moreover , these approaches are adept at leveraging latent behavioral patterns ( increasing time in range from 58 % to 70 % ) . This work demonstrates the potential of deep RL for controlling complex physiological systems with minimal expert knowledge . 1 INTRODUCTION . Type 1 diabetes ( T1D ) is a chronic disease affecting 20-40 million people worldwide ( You & Henneberg , 2016 ) , and its incidence is increasing ( Tuomilehto , 2013 ) . People with T1D can not produce insulin , a hormone that signals cells to uptake glucose in the bloodstream . Without insulin , the body must metabolize energy in other ways that , when relied on repeatedly , can lead to lifethreatening conditions ( Kerl , 2001 ) . Tight glucose control improves both short- and long-term outcomes for people with diabetes , but can be difficult to achieve in practice ( Diabetes Control and Complications Trial Research Group , 1995 ) . Typically , blood glucose is controlled by a combination of basal insulin ( to control baseline blood glucose levels ) and bolus insulin ( to control glucose spikes after meals ) . To control blood glucose levels , individuals with T1D must continually make decisions about how much basal and bolus insulin to self-administer . This requires careful measurement of glucose levels and carbohydrate intake , resulting in at least 15-17 data points a day . If the individual uses a continuous glucose monitor ( CGM ) , this can increase to over 300 data points , or a blood glucose reading every 5 minutes ( Coffen & Dahlquist , 2009 ) . Combined with an insulin pump , a wearable device that automates the delivery of insulin , CGMs present an opportunity for closed-loop control . Such a system , known as an ‘ artificial pancreas ’ ( AP ) , automatically anticipates the amount of required insulin and delivers the appropriate dose . This would be life-changing for individuals with T1D . For many years , researchers have worked towards the creation of an AP for blood glucose control ( Kadish , 1964 ; Bequette , 2005 ; Bothe et al. , 2013 ) . Though the technology behind CGMs and insulin pumps has advanced , there remains significant room for improvement when it comes to the control algorithms ( Bothe et al. , 2013 ; Pinsker et al. , 2016 ) . 
Current approaches often fail to maintain sufficiently tight glucose control and require meal announcements . In this work , we investigate the utility of a deep reinforcement learning ( RL ) based approach for blood glucose control ( Bothe et al. , 2013 ) . Deep RL is particularly well-suited for this task because it : i ) makes minimal assumptions about the structure of the underlying process , allowing the same system to adapt to different individuals or to changes in individuals over time , ii ) can learn to leverage latent patterns such as regular meal times , iii ) scales well in the presence of large amounts of training data , and iv ) can take advantage of existing FDA-approved simulators for model training . Despite these potential benefits , we are not aware of any previously published work that has rigorously explored the feasibility of deep RL for blood glucose control . While the opportunities for learning an AP algorithm using deep RL are clear , there are numerous challenges in applying standard techniques to this domain . First , there is a significant delay between actions and outcomes ; insulin can affect glucose levels hours after administration and this effect can vary significantly across individuals . Without encoding knowledge of patient-specific insulin dynamics , learning the long-term impact of insulin is challenging . Second , compared to tasks that rely on a visual input or are given ground truth state , this task must rely on a noisy observed signal that requires significant temporal context to accurately interpret . Third , because of fluctuations throughout the day and even the week , tight blood glucose control requires small changes in insulin during the day , in addition to large doses of insulin to control glucose spikes . Fourth , unlike game settings where one might have the ability to learn from hundreds of thousands of hours of gameplay , to be practical , any learning approach to blood glucose control must be able to achieve strong performance using only a limited number of days of patient-specific data . Finally , controlling blood glucose levels is a safety-critical application . This sets the bar high from an evaluation perspective . It is unsafe to deploy a system without a human-in-the-loop if there is even a small probability of failure . Given these challenges , this task represents a significant departure from standard deep RL benchmarks . Achieving strong performance in this task requires numerous careful design decisions . In this paper , we make significant progress in this regard , presenting the first deep RL approach that surpasses human-level performance in controlling blood glucose without requiring meal announcements . More specifically , we :
• present an input representation that carefully balances encoding action history and recent changes in our state space ,
• propose a patient-specific action space that is amenable to both small and large fluctuations of insulin ,
• introduce an augmented reward function , designed to balance the risk of hypo- and hyperglycemia while drastically penalizing unsafe performance ( a sketch of one such reward appears below ) ,
• rigorously test the ability of a recurrent architecture to learn from the noisy input , and
• demonstrate how policies can transfer across individuals , dramatically lowering the amount of data required to achieve strong performance while improving safety .
Further , we build on an open-source simulator and make all of our code publicly available ( currently hosted at https://tinyurl.com/y6e2m68b ; after review , a formal code release will be made available on the authors ' GitHub account ) . 
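To make the augmented-reward idea above concrete , the following is a minimal , hypothetical sketch of a glucose-control reward : it penalizes deviation from the target range asymmetrically ( hypoglycemia is treated as more dangerous than hyperglycemia ) and adds a large fixed penalty for readings in an unsafe region . The thresholds , weights , and function name are illustrative assumptions , not the reward used in the paper .

```python
def glucose_reward(bg_mgdl, low=70.0, high=180.0,
                   hypo_weight=3.0, hyper_weight=1.0,
                   danger_low=40.0, danger_high=400.0,
                   danger_penalty=100.0):
    """Hypothetical reward for a single blood glucose reading (mg/dL).

    Returns 0 inside the target range [low, high], a weighted negative
    penalty proportional to the excursion outside it, and a large extra
    penalty for readings in a dangerous region. All constants are
    illustrative, not taken from the paper.
    """
    if low <= bg_mgdl <= high:
        reward = 0.0
    elif bg_mgdl < low:
        # Hypoglycemia: penalized more steeply than hyperglycemia.
        reward = -hypo_weight * (low - bg_mgdl)
    else:
        reward = -hyper_weight * (bg_mgdl - high)

    if bg_mgdl <= danger_low or bg_mgdl >= danger_high:
        # Drastic penalty for unsafe glucose levels.
        reward -= danger_penalty
    return reward

# Example: a mildly high reading vs. a dangerously low one.
print(glucose_reward(200.0))  # -20.0
print(glucose_reward(35.0))   # -205.0
```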
This work can help to build the foundation of a new , tractable , and societally important benchmark for the RL community . 2 BACKGROUND AND RELATED WORKS . In recent years , researchers have started to explore RL in healthcare . Examples include matching patients to treatment in the management of sepsis ( Weng et al. , 2017 ; Komorowski et al. , 2018 ) and mechanical ventilation ( Prasad et al. , 2017 ) . In addition , RL has been explored to provide contextual suggestions for behavioral modifications ( Klasnja et al. , 2019 ) . Despite its success in other problem settings , RL has yet to be fully explored as a solution for a closed-loop AP system ( Bothe et al. , 2013 ) . RL is a promising solution to this problem , as it is well-suited to learning complex behavior that readily adapts to changing domains ( Clavera et al. , 2018 ) . Moreover , unlike many other disease settings , there exist credible simulators for the glucoregulatory system ( Visentin et al. , 2014 ) . The presence of a credible simulator alleviates many common concerns of RL applied to problems in health ( Gottesman et al. , 2019 ) . 2.1 CURRENT AP ALGORITHMS AND RL FOR BLOOD GLUCOSE CONTROL . Among recent commercial AP products , proportional-integral-derivative ( PID ) control is one of the most common backbones ( Trevitt et al. , 2015 ) . The simplicity of PID controllers makes them easy to use , and in practice they achieve strong results . For example , the Medtronic Hybrid Closed-Loop system , one of the few commercially available , is built on a PID controller ( Garg et al. , 2017 ; Ruiz et al. , 2012 ) . In this setting , a hybrid closed-loop controller automatically adjusts basal insulin rates , but still requires human-directed insulin boluses to adjust for meals . The main weakness of PID controllers , in the setting of blood glucose control , is their reactivity . As they only respond to current glucose values ( including a derivative ) , often they cannot respond fast enough to meals to satisfactorily control postprandial excursions without meal announcements ( Garg et al. , 2017 ) . And , without additional safety modifications , they can overcorrect for these spikes , triggering postprandial hypoglycemia ( Ruiz et al. , 2012 ) . In contrast , we hypothesize that an RL approach will be able to leverage patterns associated with meal times , resulting in better policies that do not require meal announcements . Moreover , such approaches can take advantage of existing simulators for training and evaluation ( described in more detail later ) . Previous work has examined the use of RL for different aspects of blood glucose control . Weng et al . ( 2017 ) use RL to learn policies that set blood glucose targets for septic patients , but do not learn policies to achieve these targets . Several recent works have investigated the use of RL to adapt existing insulin treatment regimes ( Ngo et al. , 2018 ; Oroojeni Mohammad Javad et al. , 2015 ; Sun et al. , 2018 ) . In contrast to our setting , in which we aim to learn a closed-loop control policy , this work has focused on a human-in-the-loop setting , in which the goal is to learn optimal correction factors and carbohydrate ratios that can be used in the calculation of boluses . Most similar to our own work , De Paula et al . ( 2015 ) develop a kernelized Q-learning framework for closed-loop glucose control . 
They make use of Bayesian active learning for on-the-fly personalization . This work tackles a similar problem to our own , but uses a simple two-compartment model for the glucoregulatory system and a fully deterministic meal routine . In our simulation environment , we found that such a Q-learning approach did not lead to satisfactory closed-loop performance , and instead we examine deep actor-critic algorithms for continuous control . 2.2 GLUCOSE MODELS AND SIMULATION . Models of the glucoregulatory system have long been important to the development and testing of an AP ( Cobelli et al. , 1982 ) . Current models are based on a combination of rigorous experimentation and expert knowledge of the underlying physiological phenomena . Typical models consist of a multi-compartment model , with various sources and sinks corresponding to physiological phenomena , often involving dozens of patient-specific parameters . One such simulator , the one we use in our experiments , is the UVA/Padova model ( Kovatchev et al. , 2009 ) . Briefly , this simulator models the glucoregulatory system as a nonlinear multi-compartment system , where glucose is generated through the liver , absorbed through the gut , and controlled by externally administered insulin . A more detailed explanation can be found in ( Kovatchev et al. , 2009 ) . We use an open-source version of the UVA/Padova simulator that comes with 30 virtual patients , each of which consists of several dozen parameters fully specifying the glucoregulatory system ( Xie , 2018 ) . The patients are divided into three classes : children , adolescents , and adults , each with 10 patients . While the simulator we use includes only 10 patients per class , there is a wide range of patient types among each class , with ages ranging from 7-68 years and recommended daily insulin from 16 units to over 60 units .
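As a rough illustration of how a control policy interacts with such a simulator , the following sketch runs one closed-loop episode : at each 5-minute step the controller receives a CGM reading and returns an insulin dose . The environment object and its reset/step interface are hypothetical stand-ins for whatever API the open-source UVA/Padova implementation exposes ; only the overall loop structure is intended .

```python
def run_episode(env, policy, max_steps=288):
    """Run one simulated day (288 five-minute steps) of closed-loop control.

    `env` is assumed to expose a gym-like reset()/step() interface and
    `policy` maps an observation history to an insulin dose; both are
    hypothetical placeholders, not the actual simulator API.
    """
    history = []
    obs = env.reset()                 # initial CGM reading for a virtual patient
    total_reward = 0.0
    for t in range(max_steps):
        dose = policy(history, obs)   # insulin (units) for the next 5 minutes
        obs, reward, done, info = env.step(dose)
        history.append((obs, dose))
        total_reward += reward
        if done:                      # e.g., glucose left the survivable range
            break
    return total_reward, history
```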
This paper examines reinforcement learning in the context of blood glucose control to help individuals with type 1 diabetes. The authors show that their methods lead to strong algorithms that can improve artificial pancreas systems. Their results are promising, and, very importantly, do not require meal announcements. The importance of their application is self-evident.
SP:33386a5a96124115a197a16ea1ca2f2ba326c34d
Min-Max Optimization without Gradients: Convergence and Applications to Adversarial ML
1 INTRODUCTION . In numerous real-world applications , one is faced with various forms of adversary that are not accounted for by standard optimization algorithms . For instance , when training a machine learning model on user-provided data , malicious users can carry out a data poisoning attack : providing false data with the aim of corrupting the learned model ( Steinhardt et al. , 2017 ; Tran et al. , 2018 ; Jagielski et al. , 2018 ) . At inference time , malicious users can evade detection by multiple models in the form of adversarial example attacks ( Goodfellow et al. , 2014 ; Liu et al. , 2016 ; 2018a ) . Min-max ( robust ) optimization is a natural framework to address adversarial ( worst-case ) robustness ( Madry et al. , 2017b ; Al-Dujaili et al. , 2018b ) . It converts a standard minimization problem into a composition of an inner maximization problem and an outer minimization problem . Min-max optimization problems have been studied for multiple decades ( Wald , 1945 ) , and the majority of the proposed methods assume access to first-order ( FO ) information , i.e . gradients , to find or approximate robust solutions ( Nesterov , 2007 ; Gidel et al. , 2017 ; Hamedani et al. , 2018 ; Qian et al. , 2019 ; Rafique et al. , 2018 ; Sanjabi et al. , 2018b ; Lu et al. , 2019 ; Nouiehed et al. , 2019 ; Lu et al. , 2019 ; Jin et al. , 2019 ) . In this paper , we focus on the design and analysis of black-box ( gradient-free ) min-max optimization methods , where gradients are neither symbolically nor numerically available , or they are tedious to compute ( Conn et al. , 2009 ) . Our study is particularly motivated by the design of data poisoning and evasion adversarial attacks from black-box machine learning ( ML ) or deep learning ( DL ) systems , whose internal configuration and operating mechanism are unknown to adversaries . The extension of min-max optimization from the FO domain to the gradient-free regime is challenging since the solver suffers from uncertainties in both the black-box objective functions and the optimization procedure , and does not scale well to high-dimensional problems . We develop a provable and unified black-box min-max stochastic optimization method by integrating a query-efficient randomized zeroth-order ( ZO ) gradient estimator with a computation-efficient alternating gradient descent-ascent framework , where the former requires a small number of function queries to build a gradient estimate , and the latter needs just a one-step descent/ascent update . Recently , ZO optimization has attracted increasing attention in solving ML/DL problems . For example , ZO optimization serves as a powerful and practical tool for generation of black-box adversarial examples to evaluate the adversarial robustness of ML/DL models ( Chen et al. , 2017 ; Ilyas et al. , 2018 ; Tu et al. , 2018 ; Ilyas et al. , 2019 ) . ZO optimization can also help to solve automated ML problems , where the gradients with respect to ML pipeline configuration parameters are intractable ( Aggarwal et al. , 2019 ) . Furthermore , ZO optimization provides computationally-efficient alternatives to high-order optimization methods for solving complex ML/DL tasks , e.g. , robust training by leveraging input gradient or curvature regularization ( Finlay & Oberman , 2019 ; Moosavi-Dezfooli et al. , 2019 ) , model-agnostic meta-learning ( Fallah et al. , 2019 ) , network control and management ( Chen & Giannakis , 2018 ) , and data processing in high dimension ( Liu et al. , 2018b ) . 
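As background for the estimator referenced above , a common randomized zeroth-order gradient estimate perturbs the current point along q random directions and averages forward differences of the function values . The specific form below ( uniform directions on the unit sphere , forward differences , default step size ) is one standard choice and is offered as an illustrative sketch rather than the exact estimator analyzed in the paper .

```python
import numpy as np

def zo_gradient(f, x, q=10, mu=1e-3, rng=None):
    """Randomized zeroth-order gradient estimate of f at x.

    Averages forward differences along q random unit directions:
        g ~ (d / (q * mu)) * sum_i [f(x + mu*u_i) - f(x)] * u_i
    This is one standard estimator; the smoothing parameter mu and the
    number of directions q are illustrative defaults.
    """
    rng = rng or np.random.default_rng()
    d = x.size
    fx = f(x)
    g = np.zeros_like(x)
    for _ in range(q):
        u = rng.normal(size=d)
        u /= np.linalg.norm(u)            # uniform direction on the unit sphere
        g += (f(x + mu * u) - fx) * u
    return (d / (q * mu)) * g

# Example: estimate the gradient of a quadratic using only function queries.
f = lambda x: 0.5 * np.dot(x, x)
x0 = np.array([1.0, -2.0, 0.5])
print(zo_gradient(f, x0, q=50))   # should be close to x0, the true gradient
```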
Other recent applications include generating model-agnostic contrastive explanations ( Dhurandhar et al. , 2019 ) and escaping saddle points ( Flokas et al. , 2019 ) . Current studies ( Ghadimi & Lan , 2013 ; Nesterov & Spokoiny , 2015 ; Duchi et al. , 2015 ; Ghadimi et al. , 2016 ; Shamir , 2017 ; Liu et al. , 2019 ) suggested that ZO methods typically match the iteration complexity of FO methods but suffer a slowdown factor of up to a small-degree polynomial of the problem dimensionality . To the best of our knowledge , it was an open question whether any convergence rate analysis could be established for black-box min-max optimization . Contribution . We summarize our contributions as follows . ( i ) We first identify a class of black-box attack and robust learning problems which turn out to be min-max black-box optimization problems . ( ii ) We propose a scalable and principled framework ( ZO-Min-Max ) for solving constrained min-max saddle point problems under both one-sided and two-sided black-box objective functions . Here the one-sided setting refers to the scenario where only the outer minimization problem is black-box . ( iii ) We provide a novel convergence analysis characterizing the number of objective function evaluations required to attain a locally robust solution to black-box min-max problems with nonconvex outer minimization and strongly concave inner maximization . Our analysis handles stochasticity in both the objective function and the ZO gradient estimator , and shows that ZO-Min-Max yields an $O(1/T + 1/b + d/q)$ convergence rate , where $T$ is the number of iterations , $b$ is the mini-batch size , $q$ is the number of random direction vectors used in ZO gradient estimation , and $d$ is the number of optimization variables . ( iv ) We demonstrate the effectiveness of our proposal in practical data poisoning and evasion attack generation problems ( source code will be released ) . 2 RELATED WORK . FO min-max optimization . Gradient-based methods have been applied with celebrated success to solve min-max problems such as robust learning ( Qian et al. , 2019 ) , generative adversarial networks ( GANs ) ( Sanjabi et al. , 2018a ) , adversarial training ( Al-Dujaili et al. , 2018b ; Madry et al. , 2017a ) , and robust adversarial attack generation ( Wang et al. , 2019b ) . Some of these FO methods are motivated by theoretical justifications based on Danskin ’ s theorem ( Danskin , 1966 ) , which implies that the negative of the gradient of the outer minimization problem at the inner maximizer is a descent direction ( Madry et al. , 2017a ) . Convergence analysis of other FO min-max methods has been studied under different problem settings , e.g. , ( Lu et al. , 2019 ; Qian et al. , 2019 ; Rafique et al. , 2018 ; Sanjabi et al. , 2018b ; Nouiehed et al. , 2019 ) . It was shown in ( Lu et al. , 2019 ) that a deterministic FO min-max algorithm has an $O(1/T)$ convergence rate . In ( Qian et al. , 2019 ; Rafique et al. , 2018 ) , stochastic FO min-max methods have also been proposed , which yield convergence rates on the order of $O(1/\sqrt{T})$ and $O(1/T^{1/4})$ , respectively . However , these works were restricted to unconstrained optimization at the minimization side . In ( Sanjabi et al. , 2018b ) , nonconvex-concave min-max problems were studied , but the proposed analysis requires the maximization problem to be solved up to some small error . In ( Nouiehed et al. , 2019 ) , the $O(1/T)$ convergence rate was proved for nonconvex-nonconcave min-max problems under Polyak-Łojasiewicz conditions . 
Different from the aforementioned FO settings , ZO min-max stochastic optimization suffers from randomness in both the stochastic sampling of the objective function and the ZO gradient estimation , and this randomness would be coupled in the alternating gradient descent-ascent steps , which makes the convergence analysis more challenging . Gradient-free min-max optimization . In the black-box setup , coevolutionary algorithms were used extensively to solve min-max problems ( Herrmann , 1999 ; Schmiedlechner et al. , 2018 ) . However , they may oscillate and never converge to a solution due to pathological behaviors such as focusing and relativism ( Watson & Pollack , 2001 ) . Fixes to these issues have been proposed and analyzed , e.g. , asymmetric fitness ( Jensen , 2003 ; Branke & Rosenbusch , 2008 ) . In ( Al-Dujaili et al. , 2018c ) , the authors employed an evolution strategy as an unbiased approximation of the descent direction of the outer minimization problem and showed empirical gains over coevolutionary techniques , albeit without any theoretical guarantees . Min-max black-box problems can also be addressed by resorting to direct search and model-based descent and trust region methods ( Audet & Hare , 2017 ; Larson et al. , 2019 ; Rios & Sahinidis , 2013 ) . However , these methods lack convergence rate analysis and are difficult to scale to high-dimensional problems . For example , the off-the-shelf model-based solver COBYLA only supports problems with 216 variables at maximum in the SciPy Python library ( Jones et al. , 2001 ) , which is even smaller than the size of a single ImageNet image . The recent work ( Bogunovic et al. , 2018 ) proposed a robust Bayesian optimization ( BO ) algorithm and established a theoretical lower bound on the required number of min-max objective evaluations to find a near-optimal point . However , BO approaches are often tailored to low-dimensional problems , and their computational complexity prohibits scalable application . From a game theory perspective , the min-max solution for some problems corresponds to the Nash equilibrium between the outer minimizer and the inner maximizer , and hence black-box Nash equilibrium solvers can be used ( Picheny et al. , 2019 ; Al-Dujaili et al. , 2018a ) . This setup , however , does not always hold in general . Our work contrasts with the above lines of work in designing and analyzing black-box min-max techniques that are both scalable and theoretically grounded . 3 PROBLEM SETUP . In this section , we define the black-box min-max problem and briefly motivate its applications . By min-max , we mean that the problem is a composition of inner maximization and outer minimization of the objective function $f$ . By black-box , we mean that the objective function $f$ is only accessible via point-wise functional evaluations . Mathematically , we have $\min_{x \in \mathcal{X}} \max_{y \in \mathcal{Y}} f(x, y)$ ( 1 ) , where $x$ and $y$ are optimization variables , $f$ is a differentiable objective function , and $\mathcal{X} \subset \mathbb{R}^{d_x}$ and $\mathcal{Y} \subset \mathbb{R}^{d_y}$ are compact convex sets . For ease of notation , let $d_x = d_y = d$ . In ( 1 ) , the objective function $f$ could represent either a deterministic loss or a stochastic loss $f(x, y) = \mathbb{E}_{\xi \sim p}[ f(x, y ; \xi) ]$ , where $\xi$ is a random variable following the distribution $p$ . In this paper , we consider the stochastic variant in ( 1 ) . We focus on two black-box scenarios in which gradients ( or stochastic gradients under randomly sampled $\xi$ ) of $f$ w.r.t . $x$ or $y$ are not accessible . 
( a ) One-sided black-box : f ( x , y ) is a white box w.r.t . y but a black box w.r.t . x . ( b ) Two-sided black-box : f ( x , y ) is a black box w.r.t . both x and y . Motivation of setup ( a ) and ( b ) . Both setups are well motivated by the design of black-box adversarial attacks . The formulation of the one-sided black-box min-max problem corresponds to a particular type of attack , known as a black-box ensemble evasion attack , where the attacker generates adversarial examples ( i.e. , crafted examples with slight perturbations for misclassification at the testing phase ) and optimizes its worst-case performance against an ensemble of black-box classifiers and/or example classes . The formulation of the two-sided black-box min-max problem represents another type of attack at the training phase , known as a black-box poisoning attack , where the attacker deliberately influences the training data ( by injecting poisoned samples ) to manipulate the results of a black-box predictive model . Although problems of designing ensemble evasion attacks ( Liu et al. , 2016 ; 2018a ; Wang et al. , 2019b ) and data poisoning attacks ( Jagielski et al. , 2018 ; Wang et al. , 2019a ) have been studied in the literature , most of them assumed that the adversary has full knowledge of the target ML model , leading to an impractical white-box attack setting . By contrast , we provide a solution to min-max attack generation under black-box ML models . We refer readers to Section 6 for further discussion and demonstration of our framework on these problems .
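To connect the problem setup above with the alternating scheme sketched in the introduction , the following is a minimal , hypothetical implementation of a projected zeroth-order gradient descent-ascent loop for the two-sided black-box case : both players use the randomized gradient estimate , the minimizer takes a descent step , the maximizer an ascent step , and each iterate is projected back onto its feasible set . The step sizes , projection operators , and the reuse of the zo_gradient helper from the earlier sketch are illustrative assumptions , not the paper's exact algorithm .

```python
import numpy as np

def zo_min_max(f, x0, y0, project_x, project_y,
               steps=200, lr_x=0.05, lr_y=0.05, q=20, mu=1e-3):
    """Alternating projected ZO descent-ascent on f(x, y).

    A sketch only: f is queried point-wise, and the gradients w.r.t. x and y
    are both estimated with the randomized estimator (two-sided black box).
    """
    x, y = x0.copy(), y0.copy()
    for _ in range(steps):
        gx = zo_gradient(lambda u: f(u, y), x, q=q, mu=mu)   # estimate df/dx
        x = project_x(x - lr_x * gx)                         # outer minimization step
        gy = zo_gradient(lambda v: f(x, v), y, q=q, mu=mu)   # estimate df/dy
        y = project_y(y + lr_y * gy)                         # inner maximization step
    return x, y

# Example: a toy saddle problem min_x max_y x.y + 0.5||x||^2 - 0.5||y||^2
# over unit balls; the projection rescales points outside the ball.
ball = lambda z: z / max(1.0, np.linalg.norm(z))
f = lambda x, y: float(x @ y + 0.5 * x @ x - 0.5 * y @ y)
x_star, y_star = zo_min_max(f, np.ones(3), np.ones(3), ball, ball)
print(x_star, y_star)   # both iterates should approach the origin
```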
The paper presents an algorithm for performing min-max optimisation without gradients and analyses its convergence. The algorithm is evaluated for the min-max problems that arise in the context of adversarial attacks. The presented algorithm is a natural application of a zeroth-order gradient estimator and the authors also prove that the algorithm has a sublinear convergence rate (in a specific sense).
SP:a85b6e1281b4c5f84e891b0897affe5971d4ff7a
This paper considers a zeroth-order method for min-max optimization (ZO-MIN-MAX) in two cases: one-sided black box (for outer minimization) and two-sided black box (for both inner maximization and outer minimization). Convergence analysis is carefully provided to show that ZO-MIN-MAX converges to a neighborhood of stationary points. Then, the authors empirically compare several methods on
SP:a85b6e1281b4c5f84e891b0897affe5971d4ff7a
OBJECT-ORIENTED REPRESENTATION OF 3D SCENES
1 INTRODUCTION . The shortcomings of contemporary deep learning such as interpretability , sample efficiency , ability for reasoning and causal inference , transferability , and compositionality , are where symbolic AI has traditionally shown its strengths ( Garnelo & Shanahan , 2019 ) . Thus , one of the grand challenges in machine learning has been to make deep learning embrace the benefits of symbolic representation so that symbolic entities can emerge from high-dimensional observations such as visual scenes . In particular , for learning from visual observations of the physical world , such representation should consider the following criteria . First , it should focus on objects ( and their relations ) which are foundational entities constructing the physical world . These can be considered as units on which we can build a modular model . The modular nature also helps compositionality ( Andreas et al. , 2016 ) and transferability ( Kansky et al. , 2017 ) . Second , being three-dimensional ( 3D ) is a decisive property of the physical world . We humans , equipped with such 3D representation in our brain ( Yamane et al. , 2008 ) , can retain consistency on the identity of an object even if it is observed from different viewpoints . Lastly , learning such representation should be unsupervised . Although there have been remarkable advances in supervised methods for object perception ( Redmon et al. , 2016 ; Ren et al. , 2015 ; Long et al. , 2015 ) , the technology should advance toward unsupervised learning as we humans do . This not only avoids expensive labeling efforts but also allows adaptability and flexibility to the evolving goals of various downstream tasks because “ objectness ” itself can vary with the situation . In this paper , we propose a probabilistic generative model that can learn , without supervision , object-oriented 3D representation of a 3D scene from its partial 2D observations . We call the proposed model ROOTS ( Representation of Object-Oriented Three-dimensional Scenes ) . We base our model on the framework of Generative Query Networks ( GQN ) ( Eslami et al. , 2018 ; Kumar et al. , 2018 ) . However , unlike GQN which provides only a scene-level representation that encodes the whole 3D scene into a single continuous vector , the scene representation of ROOTS is decomposed into object-wise representations , each of which is also an independent , modular , and 3D representation . Further , ROOTS learns to model a background representation separately for the non-object part of the scene . The object-oriented representation of ROOTS is more interpretable , composable , and transferable . Besides , ROOTS provides a two-level hierarchy of the object-oriented representation : one for a global 3D scene and another for local 2D images . This makes the model more interpretable and provides more useful structure for downstream tasks . In experiments , we show the above abilities of ROOTS on the 3D-Room dataset containing images of 3D rooms with several objects of different colors and shapes . We also show that these new abilities are achieved without sacrificing generation quality compared to GQN . Our proposed problem and method are significantly different from existing works on visual 3D learning although some of those partly tackle some of our challenges . First , our model learns factorized object-oriented 3D representations which are independent and modular , from a scene containing multiple objects with occlusion and partial observability rather than a single object . 
Second , our method is unsupervised , not using any 3D structure annotation such as voxels , point clouds , or meshes , nor any bounding box or segmentation annotation . Third , our model is a probabilistic generative model learning both representation and rendering with uncertainty modeling . Lastly , it is trained end-to-end . In Section 4 , we provide more discussion on the related works . The main contributions are : ( i ) We propose , in the GQN framework , a new problem of learning object-oriented 3D representations of a 3D scene containing multiple objects with occlusion and partial observability in the challenging setting described above . ( ii ) We achieve this by proposing a new probabilistic model and neural architecture . ( iii ) We demonstrate that our model enables various new abilities such as compositionality and transferability while not losing generation quality . 2 PRELIMINARY : GENERATIVE QUERY NETWORKS . The generative query network ( GQN ) ( Eslami et al. , 2018 ) is a probabilistic generative latent-variable model providing a framework to learn a 3D representation of a 3D scene . In this framework , an agent navigating a scene $i$ collects $K$ images $x_i^k$ from 2D viewpoints $v_i^k$ . We refer to this collection as the context observations $C_i = \{ ( x_i^k , v_i^k ) \}_{k=1 , \dots , K}$ . While GQN is trained on a set of scenes , in the following , we omit the scene index $i$ for brevity and discuss a single scene without loss of generality . GQN learns a scene representation $z$ from context $C$ . The learned representation $z$ of GQN is a 3D-viewpoint-invariant representation of the scene in the sense that , given an arbitrary query viewpoint $v^q$ , its corresponding 2D image $x^q$ can be generated from the representation . In the GQN framework , there are two versions . The standard GQN model ( Eslami et al. , 2018 ) uses the query viewpoint to generate the representation , whereas the Consistent GQN ( CGQN ) ( Kumar et al. , 2018 ) uses the query after generating the scene representation in order to obtain a query-independent scene-level representation . Although we use CGQN as our base framework to obtain a query-independent scene-level representation , in the rest of the paper we use the abbreviation GQN instead of CGQN to indicate the general GQN framework embracing both GQN and CGQN . The generative process of GQN is written as follows : $p ( x^q \mid v^q , C ) = \int p ( x^q \mid v^q , z ) \, p ( z \mid C ) \, dz$ . As shown , GQN uses a conditional prior $p ( z \mid C )$ to learn the scene representation $z$ from context . To do this , it first obtains a neural scene representation $r$ from the representation network $r = f_{\text{repr-gqn}} ( C )$ , which combines the encodings of $( v^k , x^k ) \in C$ in an order-invariant way such as sum or mean . It then uses ConvDRAW ( Gregor et al. , 2016 ) to generate the scene latent variable $z$ from the scene representation $r$ by $p ( z \mid C ) = \prod_{l=1}^{L} p ( z^l \mid z^{<l} , r ) = \text{ConvDRAW} ( r )$ with $L$ autoregressive rollout steps . Due to the intractability of the posterior distribution $p ( z \mid C , v^q , x^q )$ , GQN uses variational inference for posterior approximation and the reparameterization trick ( Kingma & Welling , 2013 ) for backpropagation through stochastic variables . The objective is to maximize the following evidence lower bound ( ELBO ) via gradient-based optimization : $\log p_\theta ( x^q \mid v^q , C ) \ge \mathbb{E}_{q_\phi ( z \mid C , v^q , x^q )} [ \log p_\theta ( x^q \mid v^q , z ) ] - \mathrm{KL} ( q_\phi ( z \mid C , v^q , x^q ) \,\|\, p_\theta ( z \mid C ) )$ . 
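The order-invariant aggregation step described above can be illustrated with a small sketch : each ( image , viewpoint ) pair in the context is encoded independently , and the encodings are combined by a symmetric operation such as a mean , so the result does not depend on the order of the observations . The encoder below is a stand-in ( a random linear map with a 7-dimensional viewpoint vector assumed ) , not the convolutional representation network used by GQN .

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 28 * 28 + 7))   # placeholder encoder weights

def encode_pair(x, v):
    """Stand-in encoder for one (image, viewpoint) pair."""
    return np.tanh(W @ np.concatenate([x.ravel(), v]))

def aggregate_context(context):
    """Order-invariant scene representation r = mean of per-pair encodings."""
    codes = [encode_pair(x, v) for x, v in context]
    return np.mean(codes, axis=0)

# Example: three observations of the same scene; any ordering gives the same r.
context = [(rng.random((28, 28)), rng.random(7)) for _ in range(3)]
r1 = aggregate_context(context)
r2 = aggregate_context(context[::-1])
assert np.allclose(r1, r2)
```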
Note that although in this paper we use a single target observation $D = ( x^q , v^q )$ for brevity , the model is in general trained on a set of target observations $D = \{ ( x_j^q , v_j^q ) \}_j$ . 3 ROOTS : REPRESENTATION OF OBJECT-ORIENTED 3D SCENES . 3.1 GENERATIVE PROCESS . The main difference of our model from GQN is that we have a 3D representation per object present in the target 3D space while GQN has a single 3D representation compressing the whole 3D space into a vector without object-level decomposition . We begin this modeling by introducing the number of objects $M$ in the target space as a random variable . Then , we can write the representation prior of ROOTS as $p ( z , M \mid C ) = p ( M \mid C ) \prod_{m=1}^{M} p ( z^{(m)} \mid C )$ . To implement such a model with a variable number of objects , in AIR ( Eslami et al. , 2016 ) , the authors proposed to use an RNN that rolls out $M$ steps , processing one object per step . However , according to our preliminary investigation ( under review ) and other works ( Crawford & Pineau , 2019 ) , it turned out that this approach is computationally inefficient and shows severe performance degradation with growing $M$ . Object-Factorized Conditional Prior . To resolve this problem , in ROOTS we propose to process objects in a spatially local and parallel way instead of sequential processing . This is done by first introducing the scene-volume feature-map . The scene-volume feature-map is obtained by encoding context $C$ into a 3D tensor of $N = H \times W \times L$ cells . Each cell $n \in \{ 1 , \dots , N \}$ is then associated with a $D$-dimensional volume feature $r_n \in \mathbb{R}^D$ . Thus , the actual output of the encoder is a 4-dimensional tensor $r = f_{\text{repr-scene}} ( C )$ . Each volume feature $r_n$ represents a local 3D space in the target 3D space in a similar way that a feature vector in a 2D feature-map of a CNN models a local area of an input image . Note , however , that the introduction of the scene-volume feature-map is not the same as the feature-map of 2D images because , unlike the CNN feature-map for images , the actual 3D target space is not directly observable—it is only observed through a proxy of 2D images . For the detailed implementation of the encoder $f_{\text{repr-scene}}$ , refer to Appendix A.3 . Given the scene-volume feature-map , for each volume cell $n = 1 , \dots , N$ , but in parallel , we obtain three latent variables $z_n = ( z_n^{\text{pres}} , z_n^{\text{pos}} , z_n^{\text{what}} )$ from the 3D-object prior model $p ( z_n \mid r_n )$ . We refer to this collection of object latent variables $z = \{ z_n \}_{n=1}^{N}$ as the scene-object latent-map , as $z$ is generated from the scene-volume feature-map $r$ . Here , $z_n^{\text{pres}}$ is a Bernoulli random variable indicating whether an object is associated with ( present in ) the volume cell or not , $z_n^{\text{pos}}$ is a 3-dimensional coordinate indicating the position of an object in the target 3D space , and $z_n^{\text{what}}$ is a representation vector for the appearance of the object . We defer a more detailed description of the 3D-object prior model $p ( z_n \mid r_n )$ to the next section . Note that in ROOTS we obtain $z_n^{\text{what}}$ as a 3D representation which is invariant to 3D viewpoints . The position and appearance latents for cell $n$ are defined only when the cell has an associated object to represent , i.e. , $z_n^{\text{pres}} = 1$ . From this modeling using the scene-volume feature-map , we can obtain $M = \sum_n z_n^{\text{pres}}$ , and the prior model above can be written as follows :
$$p ( z , M \mid C ) = p ( M \mid C ) \prod_{m=1}^{M} p ( z^{(m)} \mid C ) = \prod_{n=1}^{N} p ( z_n \mid r_n ) = \prod_{n=1}^{N} p ( z_n^{\text{pres}} \mid r_n ) \left[ p ( z_n^{\text{pos}} \mid r_n ) \, p ( z_n^{\text{what}} \mid r_n , z_n^{\text{pos}} ) \right]^{z_n^{\text{pres}}} . \quad ( 1 )$$
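The cell-wise prior in Eq. ( 1 ) can be sketched as follows : for every volume cell , a presence bit , a position , and an appearance code are sampled independently and in parallel from the cell's feature vector . The linear heads and distribution choices below ( Bernoulli presence , Gaussian position and appearance ) are illustrative placeholders standing in for the paper's actual parameterization .

```python
import numpy as np

rng = np.random.default_rng(0)
D, WHAT_DIM = 32, 16
# Placeholder linear "heads" mapping a cell feature to distribution parameters.
W_pres = rng.normal(size=(1, D))
W_pos  = rng.normal(size=(3, D))
W_what = rng.normal(size=(WHAT_DIM, D + 3))

def sample_cell_prior(r_n):
    """Sample (z_pres, z_pos, z_what) for one volume cell from its feature r_n."""
    p_pres = 1.0 / (1.0 + np.exp(-(W_pres @ r_n)[0]))       # Bernoulli probability
    z_pres = rng.random() < p_pres
    if not z_pres:
        return 0, None, None                                  # empty cell
    z_pos = W_pos @ r_n + 0.1 * rng.normal(size=3)            # 3D object position
    z_what = W_what @ np.concatenate([r_n, z_pos])            # viewpoint-invariant code
    z_what += 0.1 * rng.normal(size=WHAT_DIM)
    return 1, z_pos, z_what

# All N = H*W*L cells are processed independently (a flat loop here for clarity).
feature_map = rng.normal(size=(4 * 4 * 2, D))
cells = [sample_cell_prior(r_n) for r_n in feature_map]
M = sum(pres for pres, _, _ in cells)                         # number of objects
print("objects in scene:", M)
```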
In addition to allowing spatially parallel and local object processing , another key idea behind introducing a presence variable per volume cell is to reflect the inductive bias of physics : two objects cannot co-exist at the same position . This helps remove the sequential object processing because dealing with an object does not need to consider other objects if their features are from spatially distant areas . Note also that the scene-volume feature-map is not to strictly partition the target 3D space and that the presence variable represents the existence of the center position of an object , not the full volume of an object . Thus , information about an object can exist across neighboring cells . Hierarchical Object-Oriented Representation . The object-oriented representation $z = \{ z_n \}$ provided by the above prior model is global in the sense that it contains all objects in the whole target 3D space , independently of the query viewpoint . From this global representation and given a query viewpoint , ROOTS generates a 2D image corresponding to the query viewpoint . This is done first by learning the view-dependent representation of the target image . In a naive approach , this may be done simply by learning a single vector representation $p ( z^q \mid z , v^q )$ , but in this case we lose important information : the correspondence between a rendered object in the image and a global object representation $z_n$ . That is , we cannot track from which object representation $z_n$ an object in the image is rendered . In ROOTS , we resolve this problem by introducing a local 2D-level object-oriented representation layer . This local object-oriented and view-dependent representation allows additional useful structure and more interpretability . This 2D local representation is similar to those in AIR ( Eslami et al. , 2016 ) and SPAIR ( Crawford & Pineau , 2019 ) . Specifically , for each $n$ for which $z_n^{\text{pres}} = 1$ , a local object representation $s_n$ is generated by conditioning on the global representation set $z$ and the query $v^q$ . Our local object representation model is written as : $p ( s \mid z , v^q ) = \prod_{n=1}^{N} p ( s_n \mid z , v^q )$ . Similar to the decomposition of $z_n$ , the local object representation $s_n$ consists of $( s_n^{\text{pres}} , s_n^{\text{pos}} , s_n^{\text{scale}} , s_n^{\text{what}} )$ . Here , $s_n^{\text{pres}}$ indicates whether object $n$ should be rendered in the target image from the perspective of the query . Thus , even if an object exists in the target 3D space , i.e. , $z_n^{\text{pres}} = 1$ , $s_n^{\text{pres}}$ can be set to zero if that object should be invisible from the query viewpoint . Similarly , $s_n^{\text{pos}}$ and $s_n^{\text{scale}}$ represent , respectively , the position and scale of object $n$ in the image ( not in the 3D space ) , and $s_n^{\text{what}}$ represents the appearance to be rendered into the image ( thus not 3D-invariant ) . More details about how to obtain $( s_n^{\text{pres}} , s_n^{\text{pos}} , s_n^{\text{scale}} , s_n^{\text{what}} )$ from $z$ and $v^q$ are given in the next section . Given $s = \{ s_n \}$ , we then render to the canvas to obtain the target image $p ( x^q \mid s )$ . Combining all , the generative process of ROOTS is written as follows :
$$p ( x^q \mid v^q , C ) = \int p ( x^q \mid s ) \prod_{n=1}^{N} p ( s_n \mid z , v^q ) \prod_{n=1}^{N} p ( z_n \mid C ) \, dz \, ds . \quad ( 2 )$$
See Figure 1 for an overview of the generation process .
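As a rough illustration of the two-level hierarchy , the sketch below maps each global object latent to its local , view-dependent counterpart : the 3D position is projected through a placeholder pinhole camera , the presence bit is switched off when the projection falls outside the image , and the local appearance is produced by a stand-in function of the viewpoint-invariant code and the viewpoint . Every function and constant here is an illustrative assumption , not the network described in the paper .

```python
import numpy as np

IMG = 64  # image side length (pixels), illustrative

def project(z_pos, viewpoint):
    """Placeholder pinhole projection of a 3D point into image coordinates."""
    cam_pos, focal = viewpoint[:3], 50.0
    rel = z_pos - cam_pos
    depth = max(rel[2], 1e-3)
    u = focal * rel[0] / depth + IMG / 2
    v = focal * rel[1] / depth + IMG / 2
    return np.array([u, v]), depth

def local_object(z_pres, z_pos, z_what, viewpoint):
    """Map a global object latent to its view-dependent local latent s_n."""
    if not z_pres:
        return dict(pres=0)
    uv, depth = project(z_pos, viewpoint)
    visible = (0 <= uv[0] < IMG) and (0 <= uv[1] < IMG)
    return dict(pres=int(visible),
                pos=uv,                      # 2D location in the image
                scale=1.0 / depth,           # nearer objects appear larger
                what=np.tanh(z_what + 0.01 * np.sum(viewpoint)))  # stand-in decoder

# Example: one object in front of the camera, one that projects off-image.
vq = np.zeros(7)
print(local_object(1, np.array([0.0, 0.0, 2.0]), np.zeros(16), vq)["pres"])   # 1
print(local_object(1, np.array([10.0, 0.0, 0.5]), np.zeros(16), vq)["pres"])  # 0
```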
The paper proposes a model building on the generative query network model that takes in as input multiple images, builds a model of the 3D scene, and renders it. This can be trained end to end. The insight of the method is that one can factor the underlying representation into different objects. The system is trained on scenes rendered in MuJoCo.
SP:41867edbd1bb96ff8340c8decefba2127a67dced
OBJECT-ORIENTED REPRESENTATION OF 3D SCENES
1 INTRODUCTION . The shortcomings of contemporary deep learning such as interpretability , sample efficiency , ability for reasoning and causal inference , transferability , and compositionality , are where the symbolic AI has traditionally shown its strengths ( Garnelo & Shanahan , 2019 ) . Thus , one of the grand challenges in machine learning has been to make deep learning embrace the benefits of symbolic representation so that symbolic entities can emerge from high-dimensional observations such as visual scenes . In particular , for learning from visual observations of the physical world , such representation should consider the following criteria . First , it should focus on objects ( and their relations ) which are foundational entities constructing the physical world . These can be considered as units on which we can build a modular model . The modular nature also helps compositionality ( Andreas et al. , 2016 ) and transferability ( Kansky et al. , 2017 ) . Second , being three-dimensional ( 3D ) is a decisive property of the physical world . We humans , equipped with such 3D representation in our brain ( Yamane et al. , 2008 ) , can retain consistency on the identity of an object even if it is observed from different viewpoints . Lastly , learning such representation should be unsupervised . Although there have been remarkable advances in supervised methods to object perception ( Redmon et al. , 2016 ; Ren et al. , 2015 ; Long et al. , 2015 ) , the technology should advance toward unsupervised learning as we humans do . This not only avoids expensive labeling efforts but also allows adaptability and flexibility to the evolving goals of various downstream tasks because “ objectness ” itself can vary on the situation . In this paper , we propose a probabilistic generative model that can learn , without supervision , objectoriented 3D representation of a 3D scene from its partial 2D observations . We call the proposed model ROOTS ( Representation of Object-Oriented Three-dimensional Scenes ) . We base our model on the framework of Generative Query Networks ( GQN ) ( Eslami et al. , 2018 ; Kumar et al. , 2018 ) . However , unlike GQN which provides only a scene-level representation that encodes the whole 3D scene into a single continuous vector , the scene representation of ROOTS is decomposed into objectwise representations each of which is also an independent , modular , and 3D representation . Further , ROOTS learns to model a background representation separately for the non-object part of the scene . The object-oriented representation of ROOTS is more interpretable , composible , and transferable . Besides , ROOTS provides the two-level hierarchy of the object-oriented representation : one for a global 3D scene and another for local 2D images . This makes the model more interpretable and provides more useful structure for downstream tasks . In experiments , we show the above abilities of ROOTS on the 3D-Room dataset containing images of 3D rooms with several objects of different colors and shapes . We also show that these new abilities are achieved without sacrificing generation quality compared to GQN . Our proposed problem and method are significantly different from existing works on visual 3D learning although some of those partly tackle some of our challenges . First , our model learns factorized object-oriented 3D representations which are independent and modular , from a scene containing multiple objects with occlusion and partial observability rather than a single object . 
Second , our method is unsupervised , not using any 3D structure annotation such as voxels , cloud points , or meshes as well as bounding boxes or segmentation annotation . Third , our model is a probabilistic generative model learning both representation and rendering with uncertainty modeling . Lastly , it is trained end-to-end . In Section 4 , we provide more discussion on the related works . The main contributions are : ( i ) We propose , in the GQN framework , a new problem of learning object-oriented 3D representations of a 3D scene containing multiple objects with occlusion and partial observability in the challenging setting described above . ( ii ) We achieve this by proposing a new probabilistic model and neural architecture . ( iii ) We demonstrate that our model enables various new abilities such as compositionality and transferability while not losing generation quality . 2 PRELIMINARY : GENERATIVE QUERY NETWORKS . The generative query networks ( GQN ) ( Eslami et al. , 2018 ) is a probabilistic generative latentvariable model providing a framework to learn a 3D representation of a 3D scene . In this framework , an agent navigating a scene i collects K images xki from 2D viewpoint v k i . We refer this collection to context observations Ci = { ( xki , vki ) } k=1 , ... , K . While GQN is trained on a set of scenes , in the following , we omit the scene index i for brevity and discuss a single scene without loss of generality . GQN learns scene representation z from context C. The learned representation z of GQN is a 3Dviewpoint invariant representation of the scene in the sense that , given an arbitrary query viewpoint vq , its corresponding 2D image xq can be generated from the representation . In the GQN framework , there are two versions . The standard GQN model ( Eslami et al. , 2018 ) uses the query viewpoint to generate representation whereas the Consistent GQN ( CGQN ) ( Kumar et al. , 2018 ) uses the query after generating the scene representation in order to obtain queryindependent scene-level representation . Although we use CGQN as our base framework to obtain query-independent scene-level representation , in the rest of the paper we use the abbreviation GQN instead of CGQN to indicate the general GQN framework embracing both GQN and CGQN . The generative process of GQN is written as follows : p ( xq|vq , C ) = ∫ p ( xq|vq , z ) p ( z|C ) dz . As shown , GQN uses a conditional prior p ( z|C ) to learn scene representation z from context . To do this , it first obtains a neural scene representation r from the representation network r = frepr-gqn ( C ) which combines the encodings of ( vk , xk ) ∈ C in an order-invariant way such as sum or mean . It then uses ConvDRAW ( Gregor et al. , 2016 ) to generate the scene latent variable z from scene representation r by p ( z|C ) = ∏ l=1 : L p ( z l|z < l , r ) = ConvDRAW ( r ) with L autoregressive rollout steps . Due to intractability of the posterior distribution p ( z|C , vq , xq ) , GQN uses variational inference for posterior approximation and the reparameterization trick ( Kingma & Welling , 2013 ) for backpropagation through stochastic variables . The objective is to maximize the following evidence lower bound ( ELBO ) via gradient-based optimization . log pθ ( x q | vq , C ) ≥ Eqφ ( z|C , vq , xq ) [ log pθ ( x q | vq , z ) ] −KL ( qφ ( z | C , vq , xq ) ‖ pθ ( z | C ) ) . 
Note that although in this paper we use a single target observation D = ( xq , vq ) for brevity , the model is in general trained on a set of target observations D = { ( xqj , v q j ) } j . 3 ROOTS : REPRESENTATION OF OBJECT-ORIENTED 3D SCENES . 3.1 GENERATIVE PROCESS . The main difference of our model from GQN is that we have a 3D representation per object present in the target 3D space while GQN has a single 3D representation compressing the whole 3D space into a vector without object-level decomposition . We begin this modeling by introducing the number of objects M in the target space as a random variable . Then , we can write the representation prior of ROOTS as p ( z , M |C ) = p ( M |C ) ∏M m=1 p ( z ( m ) |C ) . To implement such a model with a variable number of objects , in AIR ( Eslami et al. , 2016 ) , the authors proposed to use an RNN that rolls out M steps , processing one object per step . However , according to our preliminary investigation ( under review ) and other works ( Crawford & Pineau , 2019 ) , it turned out that this approach is computationally inefficient and shows severe performance degradation with growing M . Object-Factorized Conditional Prior . To resolve this problem , in ROOTS we propose to process objects in a spatially local and parallel way instead of sequential processing . This is done by first introducing the scene-volume feature-map . The scene-volume feature-map is obtained by encoding contextC into a 3D tensor ofN = ( H×W×L ) cells . Each cell n ∈ { 1 , . . . , N } is then associated to D-dimensional volume feature rn ∈ RD . Thus , the actual output of the encoder is a 4-dimensional tensor r = frepr-scene ( C ) . Each volume feature rn represents a local 3D space in the target 3D space in a similar way that a feature vector in a 2D feature-map of CNN models a local area of an input image . Note , however , that the introduction of the scene-volume feature-map is not the same as the feature-map of 2D images because , unlike the CNN feature-map for images , the actual 3D target space is not directly observable—it is only observed through a proxy of 2D images . For the detail implementation of the encoder frepr-scene , refer to the Appendix A.3 . Given the scene-volume feature-map , for each volume cell n = 1 , . . . , N but in parallel , we obtain three latent variables ( zpresn , z pos n , z what n ) = zn from the 3D-object prior model p ( zn|rn ) . We refer this collection of object latent variables z = { zn } Nn=1 to the scene-object latent-map as z is generated from the scene-volume feature-map r. Here , zpresn is a Bernoulli random variable indicating whether an object is associated ( present ) to the volume cell or not , zposn is a 3-dimensional coordinate indicating the position of an object in the target 3D space , and zwhatn is a representation vector for the appearance of the object . We defer a more detail description of the 3D-object prior model p ( zn|rn ) to the next section . Note that in ROOTS we obtain zwhatn as a 3D representation which is invariant to 3D viewpoints . The position and appearance latents for cell n are defined only when the cell has an associated object to represent , i.e. , zpresn = 1 . From this modeling using scenevolume feature-map , we can obtain M = ∑ n z pres n and the previous prior model can be written as follows : p ( z , M | C ) = p ( M | C ) ∏M m=1 p ( z ( m ) | C ) = N∏ n=1 p ( zn | rn ) = N∏ n=1 p ( zpresn |rn ) [ p ( zposn | rn ) p ( zwhatn | rn , zposn ) ] zpresn . 
In addition to allowing spatially parallel and local object processing, another key idea behind introducing a presence variable per volume cell is to reflect an inductive bias from physics: two objects can not co-exist at the same position. This helps remove the sequential object processing because dealing with an object does not need to consider other objects if their features come from spatially distant areas. Note also that the scene-volume feature-map does not strictly partition the target 3D space and that the presence variable represents the existence of the center position of an object, not the full volume of an object. Thus, information about an object can exist across neighboring cells.

Hierarchical Object-Oriented Representation. The object-oriented representation z = { z_n } provided by the above prior model is global in the sense that it contains all objects in the whole target 3D space, independently of a query viewpoint. From this global representation and given a query viewpoint, ROOTS generates the 2D image corresponding to that query viewpoint. This is done first by learning a view-dependent representation of the target image. In a naive approach, this may be done simply by learning a single vector representation p(z_q | z, v_q), but in this case we lose important information: the correspondence between a rendered object in the image and a global object representation z_n. That is, we can not track from which object representation z_n an object in the image is rendered. In ROOTS, we resolve this problem by introducing a local, 2D-level object-oriented representation layer. This local object-oriented and view-dependent representation provides additional useful structure and more interpretability. This 2D local representation is similar to those in AIR (Eslami et al., 2016) and SPAIR (Crawford & Pineau, 2019). Specifically, for each n for which z^pres_n = 1, a local object representation s_n is generated by conditioning on the global representation set z and the query v_q. Our local object representation model is written as p(s | z, v_q) = ∏_{n=1}^{N} p(s_n | z, v_q). Similar to the decomposition of z_n, the local object representation s_n consists of (s^pres_n, s^pos_n, s^scale_n, s^what_n). Here, s^pres_n indicates whether object n should be rendered in the target image from the perspective of the query. Thus, even if an object exists in the target 3D space, i.e., z^pres_n = 1, s^pres_n can be set to zero if that object should be invisible from the query viewpoint. Similarly, s^pos_n and s^scale_n represent, respectively, the position and scale of object n in the image rather than in the 3D space, and s^what_n represents the appearance to be rendered into the image (thus not 3D invariant). How to obtain (s^pres_n, s^pos_n, s^scale_n, s^what_n) from z and v_q is described in the next section. Given s = { s_n }, we then render onto the canvas to obtain the target image via p(x_q | s). Combining all of the above, the generative process of ROOTS is written as follows:

p(x_q | v_q, C) = ∫ p(x_q | s) ∏_{n=1}^{N} p(s_n | z, v_q) ∏_{n=1}^{N} p(z_n | C) dz ds.    (2)

See Figure 1 for an overview of the generation process.
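A structure-only sketch of the two-level generative process in Equation 2 could look as follows. The projection of 3-D object centers to pixel coordinates, the patch decoder, and the compositing below are crude hypothetical stand-ins for the learned components described in the next section, intended only to show how global latents z, a query viewpoint v_q, and local latents s interact.

```python
import numpy as np

rng = np.random.default_rng(1)

def project(z_pos, v_q, img_size):
    # Hypothetical projection of a 3-D object center to pixel coordinates for
    # query viewpoint v_q; the real model learns this view-dependent mapping.
    uv = (z_pos[:2] - v_q[:2]) * 4.0 + img_size / 2.0
    return np.clip(uv, 0, img_size - 1).astype(int)

def decode_patch(s_what, size):
    # Stub renderer: turns a (view-dependent) appearance code into an image patch.
    return np.full((size, size), float(s_what.mean()))

def render(z_pres, z_pos, z_what, v_q, img_size=32, patch=5):
    """Sketch of p(x_q | s) after forming local latents s_n from global z_n and v_q."""
    canvas = np.zeros((img_size, img_size))
    half = patch // 2
    for zp, pos, what in zip(z_pres, z_pos, z_what):
        if zp < 0.5:
            continue                                  # z_pres_n = 0: no object in this cell
        u, v = project(pos, v_q, img_size)
        if not (half <= u < img_size - half and half <= v < img_size - half):
            continue                                  # crude stand-in for s_pres_n = 0
        canvas[u - half:u + half + 1, v - half:v + half + 1] += decode_patch(what, patch)
    return canvas

# Toy usage with a handful of random global object latents and a query viewpoint.
N = 6
z_pres = (rng.random(N) < 0.5).astype(float)
z_pos = rng.standard_normal((N, 3))
z_what = rng.standard_normal((N, 8))
v_q = rng.standard_normal(7)                           # e.g. camera position + orientation
print(render(z_pres, z_pos, z_what, v_q).shape)
```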
The paper presents a framework for 3D representation learning from 2D images of 3D scenes. The proposed architecture, which the authors call ROOTS (Representation of Object-Oriented Three-dimension Scenes), is based on the CGQN (Consistent Generative Query Networks) network. The paper provides two modifications: the representation is (1) factorized to differentiate objects and background, and (2) hierarchical, first forming a viewpoint-invariant 3D representation and then a viewpoint-dependent 2D representation. Qualitative and quantitative experiments are performed using the MuJoCo physics simulator [1] (please add the citation in the paper).
SP:41867edbd1bb96ff8340c8decefba2127a67dced
Higher-Order Function Networks for Learning Composable 3D Object Representations
We present a new approach to 3D object representation where a neural network encodes the geometry of an object directly into the weights and biases of a second ‘ mapping ’ network . This mapping network can be used to reconstruct an object by applying its encoded transformation to points randomly sampled from a simple geometric space , such as the unit sphere . We study the effectiveness of our method through various experiments on subsets of the ShapeNet dataset . We find that the proposed approach can reconstruct encoded objects with accuracy equal to or exceeding state-of-the-art methods with orders of magnitude fewer parameters . Our smallest mapping network has only about 7000 parameters and shows reconstruction quality on par with state-of-the-art object decoder architectures with millions of parameters . Further experiments on feature mixing through the composition of learned functions show that the encoding captures a meaningful subspace of objects . ‡ 1 INTRODUCTION . This paper is primarily concerned with the problem of learning compact 3D object representations and estimating them from images . If we consider an object to be a continuous surface in R3 , it is not straightforward to directly represent this infinite set of points in memory . In working around this problem , many learning-based approaches to 3D object representation suffer from problems related to memory usage , computational burden , or sampling efficiency . Nonetheless , neural networks with tens of millions of parameters have proven effective tools for learning expressive representations of geometric data . In this work , we show that object geometries can be encoded into neural networks with thousands , rather than millions , of parameters with little or no loss in reconstruction quality . To this end , we propose an object representation that encodes an object as a function that maps points from a canonical space , such as the unit sphere , to the set of points defining the object . In this work , the function is approximated with a small multilayer perceptron . The parameters of this function are estimated by a ‘ higher order ’ encoder network , thus motivating the name for our method : Higher-Order Function networks ( HOF ) . This procedure is shown in Figure 1 . There are two key ideas that distinguish HOF from prior work in 3D object representation learning : fast-weights object encoding and interpolation through function composition . ( 1 ) Fast-weights object encoding : ‘ Fast-weights ’ in this context generally refers to methods that use network weights and biases that are not fixed ; at least some of these parameters are estimated on a per-sample basis . Our fast-weights approach stands in contrast to existing methods which encode objects as vector-valued inputs to a decoder network with fixed weights . Empirically , we find that our approach enables a dramatic reduction ( two orders of magnitude ) in the size of the mapping network compared to the decoder networks employed by other methods . ( 2 ) Interpolation through function composition : Our functional formulation allows for interpolation between inputs by composing the roots of our reconstruction functions . We demonstrate that the 1Stanford University 2Samsung AI Center - New York 3University of Minnesota †Work performed while an intern at Samsung AI Center - New York . ‡ See https : //saic-ny.github.io/hof for code and additional information . Correspondence to : Eric Mitchell < eric.mitchell @ cs.stanford.edu > . 
functional representation learned by HOF provides a rich latent space in which we can ‘ interpolate ’ between objects , producing new , coherent objects sharing properties of the ‘ parent ’ objects . In order to position HOF among other methods for 3D reconstruction , we first define a taxonomy of existing work and show that HOF provides a generalization of current best-performing methods . Afterwards , we demonstrate the effectiveness of HOF on the task of 3D reconstruction from an RGB image using a subset of the ShapeNet dataset ( Chang et al. , 2015 ) . The results , reported in Tables 1 and 2 and Figure 2 , show state-of-the-art reconstruction quality using orders of magnitude fewer parameters than other methods . 2 RELATED WORK . The selection of object representation is a crucial design choice for methods addressing 3D reconstruction . Voxel-based approaches ( Choy et al. , 2016 ; Häne et al. , 2017 ) typically use a uniform discretization of R3 in order to extend highly successful convolutional neural network ( CNN ) based approaches to three dimensions . However , the inherent sparsity of surfaces in 3D space make voxelization inefficient in terms of both memory and computation time . Partition-based approaches such as octrees ( Tatarchenko et al. , 2017 ; Riegler et al. , 2017 ) address the space efficiency shortcomings of voxelization , but they are tedious to implement and more computationally demanding to query . Graph-based models such as meshes ( Wang et al. , 2018 ; Gkioxari et al. , 2019 ; Smith et al. , 2019 ; Hanocka et al. , 2019 ) provide a compact representation for capturing topology and surface level information , however their irregular structure makes them harder to learn . Point set representations , discrete ( and typically finite ) subsets of the continuous geometric object , have also gained popularity due to the fact that they retain the simplicity of voxel based methods while eliminating their storage and computational burden ( Qi et al. , 2017a ; Fan et al. , 2017 ; Qi et al. , 2017b ; Yang et al. , 2018 ; Park et al. , 2019 ) . The PointNet architecture ( Qi et al. , 2017a ; b ) was an architectural milestone that made manipulating point sets with deep learning methods a competitive alternative to earlier approaches ; however , PointNet is concerned with processing , rather than generating , point clouds . Further , while point clouds are more flexible than voxels in terms of information density , it is still not obvious how to adapt them to the task of producing arbitrary- or varied-resolution predictions . Independently regressing each point in the point set requires additional parameters for each additional point ( Fan et al. , 2017 ; Achlioptas et al. , 2018 ) , which is an undesirable property if the goal is high-resolution point clouds . Many current approaches to representation and reconstruction follow an encoder-decoder paradigm , where the encoder and decoder both have learned weights that are fixed at the end of training . An image or set of 3D points is encoded as a latent vector ‘ codeword ’ either with a learned encoder as in Yang et al . ( 2018 ) ; Lin et al . ( 2018 ) ; Yan et al . ( 2016 ) or by direct optimization of the latent vector itself with respect to a reconstruction-based objective function as in Park et al . ( 2019 ) . Afterwards , the latent code is decoded by a learned decoder into a reconstruction of the desired object by one of two methods , which we call direct decoding and contextual mapping . 
Direct decoding methods directly map the latent code into a fixed set of points ( Choy et al. , 2016 ; Fan et al. , 2017 ; Lin et al. , 2018 ; Michalkiewicz et al. , 2019 ) ; contextual mapping methods map the latent code into a function that can be sampled or otherwise manipulated to acquire a reconstruction ( Yang et al. , 2018 ; Park et al. , 2019 ; Michalkiewicz et al. , 2019 ; Mescheder et al. , 2019 ) . Direct decoding methods generally suffer from the limitation that their predictions are of fixed resolution ; they can not be sampled more or less precisely . With contextual mapping methods , it is possible in principle to sample the object to arbitrarily high resolution with the correct decoder function . However , sampling can provide a significant computational burden for some contextual mapping approaches as those proposed by Park et al . ( 2019 ) and Michalkiewicz et al . ( 2019 ) . Another hurdle is the need for post-processing such as applying the Marching Cubes algorithm developed by Lorensen and Cline ( 1987 ) . We call contextual mapping approaches that encode context by concatenating a duplicate of a latent context vector with each input latent vector concatenation ( LVC ) methods . In particular , we compare with LVC architectures used in FoldingNet ( Yang et al. , 2018 ) and DeepSDF ( Park et al. , 2019 ) . HOF is a contextual mapping method that distinguishes itself from other methods within this class through its approach to representing the mapping function : HOF uses one neural network to estimate the weights of another . Conceptually related methods have been previously studied under nomenclature such as the ‘ fast-weight ’ paradigm ( Schmidhuber , 1992 ; De Brabandere et al. , 2016 ; Klein et al. , 2015 ; Riegler et al. , 2015 ) and more recently ‘ hypernetworks ’ ( Ha et al. , 2016 ) . However , the work by Schmidhuber ( 1992 ) deals with encoding memories in sequence learning tasks . Ha et al . ( 2016 ) suggest that estimating weights of one network with another might lead to improvements in parameter-efficiency . However , this work does not leverage the key insight of using network parameters that are estimated per sample in vision tasks . 3 HIGHER-ORDER FUNCTION NETWORKS . HOF is motivated by the independent observations by both Yang et al . ( 2018 ) and Park et al . ( 2019 ) that LVC methods do not perform competitively when the context vector is injected by simply concatenating it with each input . In both works , the LVC methods proposed required architectural workarounds to produce sufficient performance on reconstruction tasks , including injecting the latent code multiple times at various layers in the network . HOF does not suffer from these shortcomings due to its richer context encoding ( the entire mapping network encodes context ) in comparison with LVC . We compare the HOF and LVC regimes more precisely in Section 3.2 . Quantitative comparisons of HOF with existing methods can be found in Table 1 . 3.1 A FAST-WEIGHTS APPROACH TO 3D OBJECT REPRESENTATION AND RECONSTRUCTION . We consider the task of reconstructing an object point cloud O from an image . We start by training a neural network gφ with parameters φ ( Figure 1 , top-left ) to output the parameters θ of a mapping function fθ , which reconstructs the object when applied to a set of points X sampled uniformly from a canonical set such as the unit sphere ( Figure 1 , top-right ) . 
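For contrast with the fast-weights approach detailed next, the following is a minimal NumPy sketch of a latent vector concatenation (LVC) decoder of the kind discussed above: a fixed-weight MLP that receives the same latent code concatenated to every sampled canonical point. The layer sizes, the random "codeword", and the toy data are illustrative assumptions, not taken from FoldingNet or DeepSDF.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mlp(dims):
    # Random fixed weights standing in for a trained decoder.
    return [(rng.standard_normal((d_in, d_out)) * np.sqrt(2.0 / d_in), np.zeros(d_out))
            for d_in, d_out in zip(dims[:-1], dims[1:])]

def mlp_forward(x, weights):
    # Tiny MLP with ReLU hidden layers and a linear output layer.
    for W, b in weights[:-1]:
        x = np.maximum(x @ W + b, 0.0)
    W, b = weights[-1]
    return x @ W + b

latent_dim, canon_dim, n_points = 256, 3, 1024

# Fixed decoder shared across all objects: input is [canonical point ; latent code].
decoder = make_mlp([canon_dim + latent_dim, 512, 512, 3])

z = rng.standard_normal(latent_dim)                    # per-object codeword
X = rng.standard_normal((n_points, canon_dim))
X /= np.linalg.norm(X, axis=1, keepdims=True)          # samples from the unit sphere

# LVC: duplicate z and concatenate it with every canonical point before decoding.
inp = np.concatenate([X, np.tile(z, (n_points, 1))], axis=1)
reconstruction = mlp_forward(inp, decoder)             # (n_points, 3)
print(reconstruction.shape)
```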
We note that the number of samples in X can be increased or decreased to produce higher or lower resolution reconstructions without changing the network architecture or retraining , in contrast with direct decoding methods and some contextual mapping methods which use fixed , non-random samples from X ( Yang et al. , 2018 ) . The input to gφ is an RGB image I ; our implementation takes 64× 64× 3 RGB images as input , but our method is general to any input representation for which a corresponding differentiable encoder network can be constructed to estimate θ ( e.g . PointNet ( Qi et al. , 2017a ) for point cloud completion ) . Given I , we compute the parameters of the mapping network θI as θI = gφ ( I ) ( 1 ) That is , the encoder gφ : R3×64×64 → Rd directly regresses the d-dimensional parameters θI of the mapping network fθI : Rc → R3 , which maps c-dimensional points in the canonical space X to points in the reconstruction Ô ( see Figure 1 ) . We then transform our canonical space X with fθI in the same manner as other contextual mapping methods : Ô = { fθI ( xi ) : xi ∈ X } ( 2 ) During training , we sample an image I and the corresponding ground truth point cloud model O , where O contains 10,000 points sampled from the surface of the true object . We then obtain the mapping fθI = gφ ( I ) and produce an estimated reconstruction of O as in Equation 2 . In our training , we only compute fθI ( x ) for a sample of 1000 points in X . However , we find that sampling many more points ( 10-100× as many ) at test time still yields high-quality reconstructions . This sample is drawn from a uniform distribution over the set X . We then compute a loss for the prediction Ô using a differentiable set similarity metric such as Chamfer distance or Earth Mover ’ s Distance . We focus on the Chamfer distance as both a training objective and metric for assessing reconstruction quality . The asymmetric Chamfer distance CD ( X , Y ) is often used for quantifying the similarity of two point sets X and Y and is given as CD ( X , Y ) = 1 |X| ∑ xi∈X min yi∈Y ||xi − yi||22 ( 3 ) The Chamfer distance is defined even if sets X and Y have different cardinality . We train gφ to minimize the symmetric objective function ` ( Ô , O ) = CD ( Ô , O ) + CD ( O , Ô ) as in Fan et al . ( 2017 ) .
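The full HOF pipeline of this section can be summarized in a short NumPy sketch: an encoder produces the parameter vector θ_I of a small mapping network, the mapping network transforms unit-sphere samples into a predicted point cloud, and the symmetric Chamfer distance of Equation 3 scores the prediction. The encoder here is a fixed random linear map over the flattened image (a stand-in for the CNN g_φ), the mapping-network layer sizes are illustrative, and the toy ground-truth cloud is smaller than the 10,000 points used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Mapping network f_theta: a small MLP whose weights are predicted per object.
layer_dims = [3, 32, 32, 3]                          # c = 3 canonical dims -> 3-D points
shapes = list(zip(layer_dims[:-1], layer_dims[1:]))
n_params = sum(i * o + o for i, o in shapes)         # only a few thousand parameters

def unpack(theta):
    weights, k = [], 0
    for i, o in shapes:
        W = theta[k:k + i * o].reshape(i, o); k += i * o
        b = theta[k:k + o]; k += o
        weights.append((W, b))
    return weights

def f(theta, X):
    weights = unpack(theta)
    for W, b in weights[:-1]:
        X = np.maximum(X @ W + b, 0.0)
    W, b = weights[-1]
    return X @ W + b

def chamfer(X, Y):
    # Asymmetric Chamfer distance CD(X, Y) as in Equation 3.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).mean()

# Hypothetical encoder g_phi: a fixed random linear map from a flattened 64x64x3
# image to the d-dimensional parameter vector theta_I (a learned CNN in the paper).
G = rng.standard_normal((64 * 64 * 3, n_params)) * 1e-3
I = rng.random((64, 64, 3))
theta_I = I.reshape(-1) @ G

X = rng.standard_normal((1000, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)        # canonical unit-sphere samples
O = rng.standard_normal((2048, 3))                   # toy ground-truth point cloud

O_hat = f(theta_I, X)
loss = chamfer(O_hat, O) + chamfer(O, O_hat)         # symmetric training objective
print(n_params, loss)
```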
This paper presents a method for single-image 3D reconstruction. It is inspired by implicit shape models, like those presented in Park et al. and Mescheder et al., which, given a latent code, map 3D positions to signed-distance or occupancy values, respectively. However, instead of a latent vector, the proposed method directly outputs the network parameters of a second (mapping) network that displaces 3D points from a given canonical object, i.e., a unit sphere. As the second network maps 3D points to 3D points, it is composable, which can be used to interpolate between different shapes. Evaluations are conducted on the standard ShapeNet dataset and yield results close to the state of the art, but using significantly fewer parameters.
SP:05a329e1e9faa9917c278dd2ba1eb5090189bdf9
This work is focused on learning 3D object representations (decoders) that can be computed more efficiently than existing methods. The computational inefficiency of these methods is that you learn a (big) fixed decoder for all objects (all z latents), and then need to apply it individually on either each point cloud point you want to produce, or each voxel in the output (this problem exists for both the class of methods that deform a uniform distribution R^3 -> R^3 a la FoldingNet, or directly predict the 3D function R^3 -> R e.g. DeepSDF). The authors propose that the encoder directly predict the weights and biases of a decoder network that, since it is specific to the particular object being reconstructed, can be much smaller and thus much cheaper to compute.
SP:05a329e1e9faa9917c278dd2ba1eb5090189bdf9
Rethinking Curriculum Learning With Incremental Labels And Adaptive Compensation
1 INTRODUCTION . Deep networks have seen rich applications in high-dimensional problems characterized by a large number of labels and a high volume of samples . However , successfully training deep networks to solve problems under such conditions is mystifyingly hard ( Erhan et al . ( 2009 ) ; Larochelle et al . ( 2007 ) ) . The go-to solution in most cases is Stochastic Gradient Descent with mini-batches ( simple batch learning ) and its derivatives . While offering a standardized solution , simple batch learning often fails to find solutions that are simultaneously stable , highly generalizable and scalable to large systems ( Das et al . ( 2016 ) ; Keskar et al . ( 2016 ) ; Goyal et al . ( 2017 ) ; You et al . ( 2017 ) ) . This is a by-product of how mini-batches are constructed . For example , the uniform prior assumption over datasets emphasizes equal contributions from each data point regardless of the underlying distribution ; small batch sizes help achieve more generalizable solutions , but do not scale as well to vast computational resources as large mini-batches . It is hard to construct a solution that is a perfect compromise between all cases . Two lines of work , curriculum learning and label smoothing , offer alternative strategies to improve learning in deep networks . Curriculum learning , inspired by strategies used for humans ( Skinner ( 1958 ) ; Avrahami et al . ( 1997 ) ) , works by gradually increasing the conceptual difficulty of samples used to train deep networks ( Bengio et al . ( 2009 ) ; Florensa et al . ( 2017 ) ; Graves et al . ( 2017 ) ) . This has been shown to improve performance on corrupted ( Jiang et al . ( 2017 ) ) and small datasets ( Fan et al . ( 2018 ) ) . More recently , deep networks have been used to categorize samples ( Weinshall et al . ( 2018 ) ) and variations on the pace with which these samples were shown to deep networks were analyzed in-depth ( Hacohen & Weinshall ( 2019 ) ) . To the best of our knowledge , previous works assumed that samples cover a broad spectrum of difficulty and hence need to be categorized and presented in a specific order . This introduces computational overheads e.g . pre-computing the relative difficulty of samples , and also reduces the effective amount of data from which a model can learn in early epochs . Further , curriculum learning approaches have not been shown to compete with simple training strategies at the top end of performance in image benchmarks . A complementary approach to obtaining generalizable solutions is to avoid over-fitting or getting stuck in local minima . In this regard , label smoothing offers an important solution that is invariant to the underlying architecture . Early works like Xie et al . ( 2016 ) replace ground-truth labels with noise while Reed et al . ( 2014 ) uses other models ’ outputs to prevent over-fitting . This idea was extended in Bagherinezhad et al . ( 2018 ) to an iterative method which uses logits obtained from previously trained versions of the same deep network . While Miyato et al . ( 2015 ) use local distributional smoothness , based on the robustness of a model ’ s distribution around a data point , to regularize outcomes , Pereyra et al . ( 2017 ) penalized highly confident outputs directly . Closest in spirit to our work is the label smoothing method defined in Szegedy et al . ( 2016 ) , which offers an alternative target distribution for all training samples with no extra data augmentation . 
In general , label smoothing is applied to all examples regardless of how it affects the network ’ s understanding of them . Further , in methods which use other models to provide logits/labels , often the parent network used to provide those labels is trained using an alternate objective function or needs to be fully re-trained on the current dataset , both of which introduce additional computation . In this work , we propose LILAC , Learning with Incremental Labels and Adaptive Compensation , which emphasizes a label-based curriculum and adaptive compensation , to improve upon previous methods and obtain highly accurate and stable solutions . LILAC is conceived as a method to learn strong embeddings by using the recursive training strategy of incremental learning alongside the use of unlabelled/wrongly-labelled data as hard negative examples . It works in two key phases , 1 ) incremental label introduction and 2 ) adaptive compensation . In the first phase , we incrementally introduce groups of labels in the training process . Data , corresponding to labels not yet introduced to the model , use a single fake label selected from within the dataset . Once a network has been trained for a fixed number of epochs with this setup , an additional set of ground-truth labels is introduced to the network and the training process continues . In recursively revealing labels , LILAC allows the model sufficient time to develop a strong understanding of each class by contrasting against a large and diverse set of negative examples . Once all ground-truth labels are revealed the adaptive compensation phase of training is initiated . This phase mirrors conventional batch learning , except we adaptively replace the target one-hot vector of incorrectly classified samples with a softer distribution . Thus , we avoid adjusting labels across the entire dataset , like previous methods , while elevating the stability and average performance of the model . Further , instead of being pre-computed by an alternative model , these softer distributions are generated on-the-fly from the outputs of the model being trained . We apply LILAC to three standard image benchmarks and compare its performance to the strongest known baselines . While incremental and continual learning work on evolving data distributions with the addition of memory constraints ( ( Rebuffi et al. , 2017 ; Castro et al. , 2018 ) and derivative works ) , knowledge distillation ( ( Schwarz et al. , 2018 ; Rolnick et al. , 2018 ) and similar works ) or other requirements , this work is a departure into using negative mining and focused training to improve learning on a fully available dataset . In incremental/continual learning works , often the amount of data used to retrain the network is small compared to the original dataset while in LILAC we fully use the entire dataset , distinguished by Seen and Unseen labels . Thus , it avoids data deficient learning . Further , works like Bucher et al . ( 2016 ) ; Li et al . ( 2013 ) ; Wang & Gupta ( 2015 ) emphasize the importance of hard negative mining , both in size and diversity , in improving learning . Although the original formulation of negative mining was based on imbalanced data , recent object detection works have highlighted its importance in contrasting and improving learning in neural networks . 
To summarize, our main contributions in LILAC are as follows: • we introduce a new take on curriculum learning by incrementally learning labels as opposed to samples, • our method adaptively compensates incorrectly labelled samples by softening their target distribution, which improves performance and removes external computational overheads, • we improve average recognition accuracy and decrease the standard deviation of performance across several image classification benchmarks compared to batch learning, a property not shared by other curriculum learning and label smoothing methods.

2 LILAC. In LILAC, our main focus is to induce better learning in deep networks. Instead of the conventional curriculum learning approach of ranking samples, we consider all samples equally beneficial. Early on, we focus on learning labels in fixed increments (Section 2.1). Once the network has had a chance to learn all the labels, we shift to regularizing the network to prevent over-fitting by providing a softer distribution as the target vector for previously misclassified samples (Section 2.2). An overview of the entire algorithm is available in the appendix as Algorithm 1.

2.1 INCREMENTAL LABEL INTRODUCTION PHASE. In the incremental phase, we initially replace the ground-truth labels of several classes with a constant held-out label. Gradually, over the course of several fixed intervals of training, we reveal the true labels. Within a fixed interval of training, we keep constant two sets of data: "Seen", whose ground-truth labels are known, and "Unseen", whose labels are replaced by a fake value. (Figure 1: (a) data setup in LILAC — the full dataset under a virtual data partition into Seen classes, which use ground-truth labels, and Unseen classes, which use a fake label; (b) evolution of the data partition over each incremental step, up to the final incremental step.) When training, mini-batches are uniformly sampled from the entire training set, but the instances from "Unseen" classes use the held-out label. By the end of the final interval, we reveal all ground-truth labels. We now describe the incremental phase in more detail. At the beginning of the incremental label introduction phase, we virtually partition the data into two mutually exclusive sets, S: Seen and U: Unseen, as shown in Fig. 1. Data samples in S use their ground-truth labels as target values, while those in U use a designated unseen label, which is held constant throughout the entire training process. LILAC assumes a random ordering of labels, Or(M), where M denotes the total number of labels in the dataset. Within this ordering, the number of labels and corresponding data initially placed in S is defined by the variable b. The remaining labels, M − b, are initially placed in U and incrementally revealed in intervals of m labels, a hyper-parameter defined by the user. Training in the incremental phase happens at fixed intervals of E epochs each. Within a fixed interval, the virtual data partition is held constant. Every mini-batch of data is sampled uniformly from the entire original dataset, and within each mini-batch, labels are obtained based on their placement in S or U. Then the number of samples from U is reduced or augmented, using a uniform prior, to match the number of samples from S. This is done to ensure no unfair skew in predictions towards U, since all data points use the same designated label.
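Before continuing, the mini-batch curation just described can be sketched in a few lines of NumPy. The choice of the held-out label index, the toy data, and the helper names below are illustrative assumptions; in particular, the paper selects the fake label from within the dataset, whereas this sketch simply uses an extra index.

```python
import numpy as np

rng = np.random.default_rng(0)

M = 10                       # total number of classes
b, m = 4, 2                  # classes revealed initially / revealed per interval
order = rng.permutation(M)   # random label ordering Or(M)
unseen_label = M             # designated held-out label (one simple choice; the
                             # paper selects it from within the dataset)

def seen_classes(step):
    # Labels whose ground truth has been revealed after `step` incremental steps.
    return set(order[:b + step * m])

def make_batch(X, y, step, batch_size=32):
    idx = rng.choice(len(X), batch_size, replace=False)
    xb, yb = X[idx], y[idx].copy()
    seen = np.isin(yb, list(seen_classes(step)))
    yb[~seen] = unseen_label                      # mask Unseen labels with the fake label
    # Re-balance: match the number of Unseen samples to the number of Seen samples.
    n_seen = int(seen.sum())
    u_idx, s_idx = np.where(~seen)[0], np.where(seen)[0]
    if len(u_idx) > 0 and n_seen > 0:
        u_idx = rng.choice(u_idx, n_seen, replace=len(u_idx) < n_seen)
    keep = np.concatenate([s_idx, u_idx]) if n_seen > 0 else s_idx
    return xb[keep], yb[keep]

# Toy data: 1000 samples of 8-D features with labels in {0, ..., M-1}.
X = rng.standard_normal((1000, 8))
y = rng.integers(0, M, 1000)
xb, yb = make_batch(X, y, step=1)
print(sorted(set(yb.tolist())))
```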
Finally, the curated mini-batches of data are used to train the neural network. At the end of each fixed interval, we reveal another set of m ground-truth labels and move samples of those classes from U to S, after which the entire data curation and training process is repeated for the next interval.

2.2 ADAPTIVE COMPENSATION. Once all the ground-truth labels are available to the deep network, we begin the adaptive compensation phase of training. The main idea behind adaptive compensation is that, if the network is unable to correctly predict a sample's label even after allowing sufficient training time, then we alter the target vector to a less peaked distribution. Compared to learning one-hot vectors, this softer distribution can be learned more easily by the network. Unlike prior methods, we adaptively modify the target vector only for incorrectly classified samples, on-the-fly. In this phase, the network is trained for a small number of epochs using standard batch learning. Let T be the total number of training epochs in the incremental phase and batch learning. During the adaptive compensation phase, we start at epoch e, where e > T. For a mini-batch of samples in epoch e, predictions from the model at epoch e − 1 are used to determine the final target vector used in the objective function; specifically, we soften the target vector for an instance iff it was misclassified by the model at the end of epoch e − 1. The final target vector for the i-th instance at epoch e, t_{e,i}, is computed based on the model φ_{e−1} using Equation 1:

t_{e,i} = ((εM − 1)/(M − 1)) δ_{y^i} + ((1 − ε)/(M − 1)) 1,  if argmax(φ_{e−1}(x_i)) ≠ y^i;  t_{e,i} = δ_{y^i}, otherwise.    (1)

Here, (x_i, y^i) denote a training sample and its corresponding ground-truth label for sample index i, while δ_{y^i} represents the corresponding one-hot vector. 1 is a vector of M dimensions with all entries equal to 1, and ε is a scaling hyper-parameter.
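A minimal NumPy sketch of the target construction in Equation 1 is given below, assuming the smoothing form described above (mass ε on the ground-truth class and (1 − ε)/(M − 1) on every other class for misclassified samples). The function name, the value of ε, and the toy logits are assumptions for illustration.

```python
import numpy as np

def adaptive_targets(logits_prev, y, M, eps=0.9):
    """Targets for epoch e given the epoch e-1 model's logits (sketch of Equation 1).

    Misclassified samples get a softened distribution with mass `eps` on the true
    class and (1 - eps) / (M - 1) on every other class; correctly classified
    samples keep the one-hot vector. `eps` is the scaling hyper-parameter.
    """
    n = len(y)
    one_hot = np.eye(M)[y]                                        # delta_{y^i}
    soft = ((eps * M - 1.0) / (M - 1.0)) * one_hot \
           + ((1.0 - eps) / (M - 1.0)) * np.ones((n, M))
    wrong = np.argmax(logits_prev, axis=1) != y                   # misclassified at e-1
    return np.where(wrong[:, None], soft, one_hot)

# Toy check: softened rows still sum to one and put mass eps on the true class.
rng = np.random.default_rng(0)
y = rng.integers(0, 5, size=8)
logits_prev = rng.standard_normal((8, 5))
t = adaptive_targets(logits_prev, y, M=5)
print(t.sum(axis=1), t[np.arange(8), y])
```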
This paper proposes a novel direction for curriculum learning. Previous works in the area of curriculum learning focused on choosing easier samples first and harder samples later when training neural network models. This is problematic since we need to first compute how difficult each sample is, which introduces computational overhead. In this work, the paper proposes to gradually learn from a class-wise perspective instead. The neural network only has access to the labels of certain classes (chosen randomly) in the beginning, and the samples that belong to the rest of the classes are treated as unseen samples whose label is forced to a single designated class. Then, the true labels of unseen classes are gradually revealed, and this is repeated until, in the final incremental step, all labels are revealed. The method further has an adaptive compensation step, which uses a less peaked label distribution for supervision only for the incorrectly predicted samples. The experiments show that with only the first step, the proposed method is worse than the original batch learning, but by adding the second label-smoothing step, there is an improvement over the original batch learning setup.
SP:7f6ef5f3fa7627e799377aa06561904b80c5c1c4
This paper makes the observation that a curriculum need not depend on the difficulty of examples, as most (maybe all) prior works do. They suggest instead a curriculum based on learning one class at a time, starting with one and masking the label of all others as 'unknown' (i.e. treating them as negative examples), and unmasking classes as learning progresses. This is the "incremental labels" part. They make another observation, that label smoothing is applied to all examples regardless of difficulty, and propose an alternative "adaptive" version where labels are smoothed only for difficult examples. This is the "adaptive compensation" part.
SP:7f6ef5f3fa7627e799377aa06561904b80c5c1c4
Support-guided Adversarial Imitation Learning
1 INTRODUCTION . The class of Adversarial Imitation Learning ( AIL ) algorithms learns robust policies that imitate an expert ’ s actions from a small number of expert trajectories , without further access to the expert or environment signals . AIL iterates between refining a reward via adversarial training , and reinforcement learning ( RL ) with the learned adversarial reward . For instance , Generative Adversarial Imitation Learning ( GAIL ) ( Ho & Ermon , 2016 ) shows the equivalence between some settings of inverse reinforcement learning and Generative Adversarial Networks ( GANs ) ( Goodfellow et al. , 2014 ) , and recasts imitation learning as distribution matching between the expert and the RL agent . Similarly , Adversarial Inverse Reinforcement Learning ( AIRL ) ( Fu et al. , 2017 ) modifies the GAIL discriminator to learn a reward function robust to changes in dynamics or environment properties . AIL mitigates the issue of distributional drift from behavioral cloning ( Ross et al. , 2011 ) , a classical imitation learning algorithm , and demonstrates good performance with only a small number of expert demonstrations . However , AIL has several important challenges , including implicit reward bias ( Kostrikov et al. , 2019 ) , potential training instability ( Salimans et al. , 2016 ; Brock et al. , 2018 ) , and potential sample inefficiency with respect to environment interaction ( Sasaki et al. , 2019 ) . In this paper , we propose a principled approach towards addressing these issues . Wang et al . ( 2019 ) demonstrated that imitation learning is also feasible by constructing a fixed reward function via estimating the support of the expert policy . Since support estimation only requires expert demonstrations , the method sidesteps the training instability associated with adversarial training . However , we show in Section 4.2 that the reward learned via support estimation deteriorates when expert data is sparse , and leads to poor policy performances . Support estimation and adversarial reward represent two different yet complementary RL signals for imitation learning , both learnable from expert demonstrations . We unify both signals into Supportguided Adversarial Imitation Learning ( SAIL ) , a generic imitation learning framework . SAIL leverages the adversarial reward to guide policy exploration and constrains the policy search to the estimated support of the expert policy . It is compatible with existing AIL algorithms , such as GAIL and AIRL . We also show that SAIL is at least as efficient as standard AIL . In an extensive evaluation , we demonstrate that SAIL mitigates the implicit reward bias and achieves better performance and training stability against baseline methods over a series of benchmark control tasks . 2 BACKGROUND . We briefly review the Markov Decision Process ( MDP ) , the context of our imitation learning task , followed by related works on imitation learning . Markov Decision Process We consider an infinite-horizon discounted MDP ( S , A , P , r , p0 , γ ) , where S is the set of states , A the set of actions , P : S × A × S → [ 0 , 1 ] the transition probability , r : S × A → R the reward function , p0 : S → [ 0 , 1 ] the distribution over initial states , and γ ∈ ( 0 , 1 ) the discount factor . Let π be a stochastic policy π : S × A → [ 0 , 1 ] with expected discounted reward Eπ ( r ( s , a ) ) , E ( ∑∞ t=0 γ tr ( st , at ) ) where s0 ∼ p0 , at ∼ π ( ·|st ) , and st+1 ∼ P ( ·|st , at ) for t ≥ 0 . We denote πE the expert policy . 
Behavioral Cloning ( BC ) learns a policy π : S → A directly from expert trajectories via supervised learning . BC is simple to implement , and effective when expert data is abundant . However , BC is prone to distributional drift : the state distribution of expert demonstrations deviates from that of the agent policy , due to accumulation of small mistakes during policy execution . Distributional drift may lead to catastrophic errors ( Ross et al. , 2011 ) . While several methods address the issue ( Ross & Bagnell , 2010 ; Sun et al. , 2017 ) , they often assume further access to the expert during training . Inverse Reinforcement Learning ( IRL ) first estimates a reward from expert demonstrations , followed by RL using the estimated reward ( Ng & Russell , 2000 ; Abbeel & Ng , 2004 ) . Building upon a maximum entropy formulation of IRL ( Ziebart et al. , 2008 ) , Finn et al . ( 2016 ) and Fu et al . ( 2017 ) explore adversarial IRL and its connection to Generative Adversarial Imitation Learning ( Ho & Ermon , 2016 ) . Imitation Learning via Distribution Matching Generative Adversarial Imitation Learning ( GAIL ) ( Ho & Ermon , 2016 ) frames imitation learning as distribution matching between the expert and the RL agent . The authors show the connection between IRL and GANs . Specifically , GAIL imitates the expert by formulating a minimax game : min π max D∈ ( 0,1 ) Eπ ( logD ( s , a ) ) + EπE ( log ( 1−D ( s , a ) ) ) , ( 1 ) where the expectations Eπ and EπE denote the joint distributions over state-actions of the RL agent and the expert , respectively . GAIL is able to achieve expert performance with a small number of expert trajectories on various benchmark tasks . However , GAIL is relatively sample inefficient with respect to environment interaction , and inherits issues associated with adversarial learning , such as vanishing gradients , training instability and overfitting to expert demonstrations ( Arjovsky & Bottou , 2017 ; Brock et al. , 2018 ) . Recent works have improved the sample efficiency and stability of GAIL . For instance , Generative Moment Matching Imitation Learning ( Kim & Park , 2018 ) replaces the adversarial reward with a non-parametric maximum mean discrepancy estimator to sidestep adversarial learning . Baram et al . ( 2017 ) improve sample efficiency with a model-based RL algorithm . Kostrikov et al . ( 2019 ) and Sasaki et al . ( 2019 ) demonstrate significant gain in sample efficiency with off-policy RL algorithms . In addition , Generative Predecessor Models for Imitation Learning ( Schroecker et al. , 2019 ) imitates the expert policy using generative models to reason about alternative histories of demonstrated states . Our proposed method is closely related to the broad family of AIL algorithms including GAIL and adversarial IRL . It is also complementary to many techniques for improving the algorithmic efficiency and stability , as discussed above . In particular , we focus on improving the quality of the learned reward by constraining adversarial reward to the estimated support of the expert policy . Imitation Learning via Support Estimation Alternative to AIL , Wang et al . ( 2019 ) demonstrate the feasibility of using a fixed RL reward via estimating the support of the expert policy from expert demonstrations . Connecting kernel-based support estimation ( De Vito et al. , 2014 ) to Random Network Distillation ( Burda et al. , 2018 ) , the authors propose Random Expert Distillation ( RED ) to learn a reward function based on support estimation . 
Specifically , RED learns the reward parameter θ̂ by minimizing : min_θ̂ E_{ ( s , a ) ∼ πE } || f_θ̂ ( s , a ) − f_θ ( s , a ) ||_2^2 , ( 2 ) where f_θ : S × A → R^K projects ( s , a ) from expert demonstrations to some embedding of size K , with randomly initialized θ . The reward is then defined as : r_red ( s , a ) = exp ( −σ || f_θ̂ ( s , a ) − f_θ ( s , a ) ||_2^2 ) , ( 3 ) where σ is a hyperparameter . As optimizing Eq . ( 2 ) only requires expert data , RED sidesteps adversarial learning , and casts imitation learning as a standard RL task using the learned reward . While RED works well given sufficient expert data , we show in the experiments that its performance suffers in the more challenging setting of sparse expert data . 3 METHOD . Formally , we consider the task of learning a reward function r̂ ( s , a ) from a finite set of trajectories { τ_i }_{ i=1 }^N , sampled from the expert policy πE within an MDP . Each trajectory is a sequence of state-action tuples in the form of τ_i = { s_1 , a_1 , s_2 , a_2 , ... , s_T , a_T } . Assuming that the expert trajectories are consistent with some latent reward function r∗ ( s , a ) , we aim to learn a policy that achieves good performance with respect to r∗ ( s , a ) by applying RL on the learned reward function r̂ ( s , a ) . In this section , we first discuss the advantages and shortcomings of AIL to motivate our method . We then introduce Support-guided Adversarial Learning ( SAIL ) , and present a theoretical analysis that compares SAIL with the existing methods , specifically GAIL . 3.1 ADVERSARIAL IMITATION LEARNING . A clear advantage of AIL resides in its low sample complexity with respect to expert data . For instance , GAIL requires as little as 200 state-action tuples from the expert to achieve imitation . The reason is that the adversarial reward may be interpreted as an effective exploration mechanism for the RL agent . To see this , consider the learned reward function under the optimality assumption . With the optimal discriminator for Eq . ( 1 ) , D∗ ( s , a ) = p_π ( s , a ) / ( p_πE ( s , a ) + p_π ( s , a ) ) , a common reward for GAIL is r_gail ( s , a ) = − log ( D∗ ( s , a ) ) = log ( 1 + p_πE ( s , a ) / p_π ( s , a ) ) = log ( 1 + φ ( s , a ) ) . ( 4 ) Eq . ( 4 ) shows that the adversarial reward only depends on the ratio φ ( s , a ) = p_πE ( s , a ) / p_π ( s , a ) . Intuitively , r_gail incentivizes the RL agent towards under-visited state-actions , where φ ( s , a ) > 1 , and away from over-visited state-actions , where φ ( s , a ) < 1 . When πE and π match exactly , r_gail converges to an indicator function for the support of πE , since φ ( s , a ) = 1 ∀ ( s , a ) ∈ supp ( πE ) ( Goodfellow et al. , 2014 ) . In practice , the adversarial reward is unlikely to converge , as p_πE is estimated from a finite set of expert demonstrations . Instead , the adversarial reward continuously drives the agent to explore by evolving the reward landscape . However , AIL also presents several challenges . Kostrikov et al . ( 2019 ) demonstrated that the reward − log D ( s , a ) suffers from an implicit survival bias , as the non-negative reward may lead to suboptimal behaviors in goal-oriented tasks where the agent learns to move around the goal to accumulate rewards , instead of completing the tasks . While the authors resolve the issue by introducing absorbing states , the solution assumes extra RL signals from the environment , including access to the time limit of an environment to detect early termination of training episodes .
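Returning to Eqs. (2) and (3), the support-estimation reward of RED can be sketched in a few lines of PyTorch. This is a simplified illustration rather than the authors' implementation; the network sizes, the optimizer, the number of updates and sigma are placeholder choices, and expert_sa stands in for the expert state-action pairs.

import torch
import torch.nn as nn

def make_embed(in_dim, out_dim=64):
    return nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, out_dim))

state_dim, action_dim = 8, 2
f_target = make_embed(state_dim + action_dim)   # f_theta: random and kept frozen
f_hat = make_embed(state_dim + action_dim)      # f_theta_hat: trained on expert data only
for p in f_target.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(f_hat.parameters(), lr=1e-3)
expert_sa = torch.randn(256, state_dim + action_dim)  # stand-in for expert (s, a) pairs

# Eq. (2): regress the trainable network onto the frozen random network on expert data.
for _ in range(200):
    loss = ((f_hat(expert_sa) - f_target(expert_sa)) ** 2).sum(dim=1).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Eq. (3): the reward is large where the regression error is small, i.e. on
# (an estimate of) the support of the expert policy, and decays away from it.
def r_red(sa, sigma=1.0):
    with torch.no_grad():
        err = ((f_hat(sa) - f_target(sa)) ** 2).sum(dim=1)
    return torch.exp(-sigma * err)

print(r_red(expert_sa[:5]))        # high reward on expert data
print(r_red(expert_sa[:5] + 3.0))  # lower reward off-support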
In Section 4.1 , we empirically demonstrate the survival bias on Lunar Lander , a common RL benchmark , by showing that agents trained with GAIL often hover over the goal location1 . We also show that our proposed method is able to robustly imitate the expert . Another challenge with AIL is potential training instability . Wang et al . ( 2019 ) demonstrated empirically that the adversarial reward could be unreliable in regions where the expert data is sparse , causing the agent to diverge from the intended behavior . When the agent policy is substantially different from the expert policy , the discriminator could differentiate them with high confidence , resulting in very low rewards and significant slow down in training , similar to the vanishing gradient problem in GAN training ( Arjovsky & Bottou , 2017 ) .
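The survival bias discussed above can be seen directly from Eq. (4): because the reward is a function of the density ratio and is non-negative everywhere, an agent can keep collecting reward simply by staying alive near the goal. A small numerical illustration (not taken from the paper):

import numpy as np

def r_gail_from_ratio(phi):
    # Eq. (4): r_gail(s, a) = log(1 + phi(s, a)), with phi = p_expert / p_agent.
    return np.log(1.0 + phi)

for phi in [0.1, 1.0, 10.0]:
    print(f"phi = {phi:5.1f}  ->  r_gail = {r_gail_from_ratio(phi):.3f}")
# Even heavily over-visited state-actions (phi << 1) still earn a strictly positive
# reward, so long episodes are rewarded regardless of task progress -- the implicit
# survival bias that the absorbing-state fix and the support-guided reward aim to mitigate.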
The paper proposes an imitation learning algorithm that combines support estimation with adversarial training. The key idea is simple: multiply the reward from Random Expert Distillation (RED) with the reward from Generative Adversarial Imitation Learning (GAIL). The new reward combines the best of both methods. Like the GAIL reward, the new reward encourages exploration and can be estimated from a small number of demonstrations. Like the RED reward, the new reward avoids survival bias and is more stable than the adversarial reward.
SP:c3a5a5600463b8f590e9a2b10f7984973410b043
Support-guided Adversarial Imitation Learning
1 INTRODUCTION . The class of Adversarial Imitation Learning ( AIL ) algorithms learns robust policies that imitate an expert ’ s actions from a small number of expert trajectories , without further access to the expert or environment signals . AIL iterates between refining a reward via adversarial training , and reinforcement learning ( RL ) with the learned adversarial reward . For instance , Generative Adversarial Imitation Learning ( GAIL ) ( Ho & Ermon , 2016 ) shows the equivalence between some settings of inverse reinforcement learning and Generative Adversarial Networks ( GANs ) ( Goodfellow et al. , 2014 ) , and recasts imitation learning as distribution matching between the expert and the RL agent . Similarly , Adversarial Inverse Reinforcement Learning ( AIRL ) ( Fu et al. , 2017 ) modifies the GAIL discriminator to learn a reward function robust to changes in dynamics or environment properties . AIL mitigates the issue of distributional drift from behavioral cloning ( Ross et al. , 2011 ) , a classical imitation learning algorithm , and demonstrates good performance with only a small number of expert demonstrations . However , AIL has several important challenges , including implicit reward bias ( Kostrikov et al. , 2019 ) , potential training instability ( Salimans et al. , 2016 ; Brock et al. , 2018 ) , and potential sample inefficiency with respect to environment interaction ( Sasaki et al. , 2019 ) . In this paper , we propose a principled approach towards addressing these issues . Wang et al . ( 2019 ) demonstrated that imitation learning is also feasible by constructing a fixed reward function via estimating the support of the expert policy . Since support estimation only requires expert demonstrations , the method sidesteps the training instability associated with adversarial training . However , we show in Section 4.2 that the reward learned via support estimation deteriorates when expert data is sparse , and leads to poor policy performances . Support estimation and adversarial reward represent two different yet complementary RL signals for imitation learning , both learnable from expert demonstrations . We unify both signals into Supportguided Adversarial Imitation Learning ( SAIL ) , a generic imitation learning framework . SAIL leverages the adversarial reward to guide policy exploration and constrains the policy search to the estimated support of the expert policy . It is compatible with existing AIL algorithms , such as GAIL and AIRL . We also show that SAIL is at least as efficient as standard AIL . In an extensive evaluation , we demonstrate that SAIL mitigates the implicit reward bias and achieves better performance and training stability against baseline methods over a series of benchmark control tasks . 2 BACKGROUND . We briefly review the Markov Decision Process ( MDP ) , the context of our imitation learning task , followed by related works on imitation learning . Markov Decision Process We consider an infinite-horizon discounted MDP ( S , A , P , r , p0 , γ ) , where S is the set of states , A the set of actions , P : S × A × S → [ 0 , 1 ] the transition probability , r : S × A → R the reward function , p0 : S → [ 0 , 1 ] the distribution over initial states , and γ ∈ ( 0 , 1 ) the discount factor . Let π be a stochastic policy π : S × A → [ 0 , 1 ] with expected discounted reward Eπ ( r ( s , a ) ) , E ( ∑∞ t=0 γ tr ( st , at ) ) where s0 ∼ p0 , at ∼ π ( ·|st ) , and st+1 ∼ P ( ·|st , at ) for t ≥ 0 . We denote πE the expert policy . 
Behavioral Cloning ( BC ) learns a policy π : S → A directly from expert trajectories via supervised learning . BC is simple to implement , and effective when expert data is abundant . However , BC is prone to distributional drift : the state distribution of expert demonstrations deviates from that of the agent policy , due to accumulation of small mistakes during policy execution . Distributional drift may lead to catastrophic errors ( Ross et al. , 2011 ) . While several methods address the issue ( Ross & Bagnell , 2010 ; Sun et al. , 2017 ) , they often assume further access to the expert during training . Inverse Reinforcement Learning ( IRL ) first estimates a reward from expert demonstrations , followed by RL using the estimated reward ( Ng & Russell , 2000 ; Abbeel & Ng , 2004 ) . Building upon a maximum entropy formulation of IRL ( Ziebart et al. , 2008 ) , Finn et al . ( 2016 ) and Fu et al . ( 2017 ) explore adversarial IRL and its connection to Generative Adversarial Imitation Learning ( Ho & Ermon , 2016 ) . Imitation Learning via Distribution Matching Generative Adversarial Imitation Learning ( GAIL ) ( Ho & Ermon , 2016 ) frames imitation learning as distribution matching between the expert and the RL agent . The authors show the connection between IRL and GANs . Specifically , GAIL imitates the expert by formulating a minimax game : min π max D∈ ( 0,1 ) Eπ ( logD ( s , a ) ) + EπE ( log ( 1−D ( s , a ) ) ) , ( 1 ) where the expectations Eπ and EπE denote the joint distributions over state-actions of the RL agent and the expert , respectively . GAIL is able to achieve expert performance with a small number of expert trajectories on various benchmark tasks . However , GAIL is relatively sample inefficient with respect to environment interaction , and inherits issues associated with adversarial learning , such as vanishing gradients , training instability and overfitting to expert demonstrations ( Arjovsky & Bottou , 2017 ; Brock et al. , 2018 ) . Recent works have improved the sample efficiency and stability of GAIL . For instance , Generative Moment Matching Imitation Learning ( Kim & Park , 2018 ) replaces the adversarial reward with a non-parametric maximum mean discrepancy estimator to sidestep adversarial learning . Baram et al . ( 2017 ) improve sample efficiency with a model-based RL algorithm . Kostrikov et al . ( 2019 ) and Sasaki et al . ( 2019 ) demonstrate significant gain in sample efficiency with off-policy RL algorithms . In addition , Generative Predecessor Models for Imitation Learning ( Schroecker et al. , 2019 ) imitates the expert policy using generative models to reason about alternative histories of demonstrated states . Our proposed method is closely related to the broad family of AIL algorithms including GAIL and adversarial IRL . It is also complementary to many techniques for improving the algorithmic efficiency and stability , as discussed above . In particular , we focus on improving the quality of the learned reward by constraining adversarial reward to the estimated support of the expert policy . Imitation Learning via Support Estimation Alternative to AIL , Wang et al . ( 2019 ) demonstrate the feasibility of using a fixed RL reward via estimating the support of the expert policy from expert demonstrations . Connecting kernel-based support estimation ( De Vito et al. , 2014 ) to Random Network Distillation ( Burda et al. , 2018 ) , the authors propose Random Expert Distillation ( RED ) to learn a reward function based on support estimation . 
Specifically , RED learns the reward parameter θ̂ by minimizing : min_θ̂ E_{ ( s , a ) ∼ πE } || f_θ̂ ( s , a ) − f_θ ( s , a ) ||_2^2 , ( 2 ) where f_θ : S × A → R^K projects ( s , a ) from expert demonstrations to some embedding of size K , with randomly initialized θ . The reward is then defined as : r_red ( s , a ) = exp ( −σ || f_θ̂ ( s , a ) − f_θ ( s , a ) ||_2^2 ) , ( 3 ) where σ is a hyperparameter . As optimizing Eq . ( 2 ) only requires expert data , RED sidesteps adversarial learning , and casts imitation learning as a standard RL task using the learned reward . While RED works well given sufficient expert data , we show in the experiments that its performance suffers in the more challenging setting of sparse expert data . 3 METHOD . Formally , we consider the task of learning a reward function r̂ ( s , a ) from a finite set of trajectories { τ_i }_{ i=1 }^N , sampled from the expert policy πE within an MDP . Each trajectory is a sequence of state-action tuples in the form of τ_i = { s_1 , a_1 , s_2 , a_2 , ... , s_T , a_T } . Assuming that the expert trajectories are consistent with some latent reward function r∗ ( s , a ) , we aim to learn a policy that achieves good performance with respect to r∗ ( s , a ) by applying RL on the learned reward function r̂ ( s , a ) . In this section , we first discuss the advantages and shortcomings of AIL to motivate our method . We then introduce Support-guided Adversarial Learning ( SAIL ) , and present a theoretical analysis that compares SAIL with the existing methods , specifically GAIL . 3.1 ADVERSARIAL IMITATION LEARNING . A clear advantage of AIL resides in its low sample complexity with respect to expert data . For instance , GAIL requires as little as 200 state-action tuples from the expert to achieve imitation . The reason is that the adversarial reward may be interpreted as an effective exploration mechanism for the RL agent . To see this , consider the learned reward function under the optimality assumption . With the optimal discriminator for Eq . ( 1 ) , D∗ ( s , a ) = p_π ( s , a ) / ( p_πE ( s , a ) + p_π ( s , a ) ) , a common reward for GAIL is r_gail ( s , a ) = − log ( D∗ ( s , a ) ) = log ( 1 + p_πE ( s , a ) / p_π ( s , a ) ) = log ( 1 + φ ( s , a ) ) . ( 4 ) Eq . ( 4 ) shows that the adversarial reward only depends on the ratio φ ( s , a ) = p_πE ( s , a ) / p_π ( s , a ) . Intuitively , r_gail incentivizes the RL agent towards under-visited state-actions , where φ ( s , a ) > 1 , and away from over-visited state-actions , where φ ( s , a ) < 1 . When πE and π match exactly , r_gail converges to an indicator function for the support of πE , since φ ( s , a ) = 1 ∀ ( s , a ) ∈ supp ( πE ) ( Goodfellow et al. , 2014 ) . In practice , the adversarial reward is unlikely to converge , as p_πE is estimated from a finite set of expert demonstrations . Instead , the adversarial reward continuously drives the agent to explore by evolving the reward landscape . However , AIL also presents several challenges . Kostrikov et al . ( 2019 ) demonstrated that the reward − log D ( s , a ) suffers from an implicit survival bias , as the non-negative reward may lead to suboptimal behaviors in goal-oriented tasks where the agent learns to move around the goal to accumulate rewards , instead of completing the tasks . While the authors resolve the issue by introducing absorbing states , the solution assumes extra RL signals from the environment , including access to the time limit of an environment to detect early termination of training episodes .
In Section 4.1 , we empirically demonstrate the survival bias on Lunar Lander , a common RL benchmark , by showing that agents trained with GAIL often hover over the goal location1 . We also show that our proposed method is able to robustly imitate the expert . Another challenge with AIL is potential training instability . Wang et al . ( 2019 ) demonstrated empirically that the adversarial reward could be unreliable in regions where the expert data is sparse , causing the agent to diverge from the intended behavior . When the agent policy is substantially different from the expert policy , the discriminator could differentiate them with high confidence , resulting in very low rewards and significant slow down in training , similar to the vanishing gradient problem in GAN training ( Arjovsky & Bottou , 2017 ) .
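One way to combine the two signals described above is to multiply them, so that the support estimate gates the adversarial reward. The following sketch assumes this product form; the combination rule and the numbers are assumptions of the illustration, not a definitive statement of SAIL's update rule.

import numpy as np

def r_gail(d_value):
    # Adversarial reward from a discriminator output D(s, a) in (0, 1), as in Eq. (4).
    return -np.log(d_value)

def r_combined(d_value, red_value):
    # Assumed product combination: the support estimate r_red in [0, 1] gates the
    # adversarial reward, so state-actions far from the expert support earn little
    # reward even when the discriminator is fooled.
    return red_value * r_gail(d_value)

# Same discriminator score, inside vs. outside the estimated expert support.
print(r_combined(d_value=0.3, red_value=0.95))   # near the support: reward mostly kept
print(r_combined(d_value=0.3, red_value=0.05))   # far from the support: reward suppressed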
This paper proposes an approach for improving adversarial imitation learning, by combining it with support-estimation-based imitation learning. In particular, the paper explores a combination of GAIL (Ho and Ermon, 2016) and RED (Wang et. al., 2019), where the reward for the policy-gradient is a product of the rewards obtained from them separately. The motivation is that, while AIL methods are sample-efficient (in terms of expert data) and implicitly promote useful exploration, they could be unreliable outside the support of the expert policy. Therefore, augmenting them by constraining the imitator to the support of the expert policy (with a method such as RED) could result in an overall better imitation learning algorithm.
SP:c3a5a5600463b8f590e9a2b10f7984973410b043
Learning Self-Correctable Policies and Value Functions from Demonstrations with Negative Sampling
Imitation learning , followed by reinforcement learning algorithms , is a promising paradigm to solve complex control tasks sample-efficiently . However , learning from demonstrations often suffers from the covariate shift problem , which results in cascading errors of the learned policy . We introduce a notion of conservativelyextrapolated value functions , which provably lead to policies with self-correction . We design an algorithm Value Iteration with Negative Sampling ( VINS ) that practically learns such value functions with conservative extrapolation . We show that VINS can correct mistakes of the behavioral cloning policy on simulated robotics benchmark tasks . We also propose the algorithm of using VINS to initialize a reinforcement learning algorithm , which is shown to outperform prior works in sample efficiency . 1 INTRODUCTION . Reinforcement learning ( RL ) algorithms , especially with sparse rewards , often require a large amount of trial-and-errors . Imitation learning from a small number of demonstrations followed by RL finetuning is a promising paradigm to improve the sample efficiency ( Rajeswaran et al. , 2017 ; Večerík et al. , 2017 ; Hester et al. , 2018 ; Nair et al. , 2018 ; Gao et al. , 2018 ) . The key technical challenge of learning from demonstrations is the covariate shift : the distribution of the states visited by the demonstrations often has a low-dimensional support ; however , knowledge learned from this distribution may not necessarily transfer to other distributions of interests . This phenomenon applies to both learning the policy and the value function . The policy learned from behavioral cloning has compounding errors after we execute the policy for multiple steps and reach unseen states ( Bagnell , 2015 ; Ross & Bagnell , 2010 ) . The value function learned from the demonstrations can also extrapolate falsely to unseen states . See Figure 1a for an illustration of the false extrapolation in a toy environment . We develop an algorithm that learns a value function that extrapolates to unseen states more conservatively , as an approach to attack the optimistic extrapolation problem ( Fujimoto et al. , 2018a ) . Consider a state s in the demonstration and its nearby state s̃ that is not in the demonstration . The key intuition is that s̃ should have a lower value than s , because otherwise s̃ likely should have been visited by the demonstrations in the first place . If a value function has this property for most of the pair ( s , s̃ ) of this type , the corresponding policy will tend to correct its errors by driving back to the demonstration states because the demonstration states have locally higher values . We formalize the intuition in Section 4 by defining the so-called conservatively-extrapolated value function , which is guaranteed to induce a policy that stays close to the demonstrations states ( Theorem 4.4 ) . In Section 5 , we design a practical algorithm for learning the conservatively-extrapolated value function by a negative sampling technique inspired by work on learning embeddings Mikolov et al . ( 2013 ) ; Gutmann & Hyvärinen ( 2012 ) . We also learn a dynamical model by standard supervised learning so that we compute actions by maximizing the values of the predicted next states . This algorithm does not use any additional environment interactions , and we show that it empirically helps correct errors of the behavioral cloning policy . 
When additional environment interactions are available , we use the learned value function and the dynamical model to initialize an RL algorithm . This approach relieves the inefficiency in the prior work ( Hester et al. , 2018 ; Nair et al. , 2018 ; Rajeswaran et al. , 2017 ) that the randomly-initialized Q functions require a significant amount of time and samples to be warmed up , even though the initial policy already has a non-trivial success rate . Empirically , the proposed algorithm outperforms the prior work in the number of environment interactions needed to achieve near-optimal success rate . In summary , our main contributions are : 1 ) we formalize the notion of values functions with conservative extrapolation which are proved to induce policies that stay close to demonstration states and achieve near-optimal performances , 2 ) we propose the algorithm Value Iteration with Negative Sampling ( VINS ) that outperforms behavioral cloning on three simulated robotics benchmark tasks with sparse rewards , and 3 ) we show that initializing an RL algorithm from VINS outperforms prior work in sample efficiency on the same set of benchmark tasks . 2 RELATED WORK . Imitation learning . Imitation learning is commonly adopted as a standard approach in robotics ( Pomerleau , 1989 ; Schaal , 1997 ; Argall et al. , 2009 ; Osa et al. , 2017 ; Ye & Alterovitz , 2017 ; Aleotti & Caselli , 2006 ; Lawitzky et al. , 2012 ; Torabi et al. , 2018 ; Le et al. , 2017 ; 2018 ) and many other areas such as playing games ( Mnih et al. , 2013 ) . Behavioral cloning ( Bain & Sommut , 1999 ) is one of the underlying central approaches . See Osa et al . ( 2018 ) for a thorough survey and more references therein . If we are allowed to access an expert policy ( instead of trajectories ) or an approximate value function , in the training time or in the phase of collecting demonstrations , then , stronger algorithms can be designed , such as DAgger ( Ross et al. , 2011 ) , AggreVaTe ( Ross & Bagnell , 2014 ) , AggreVaTeD ( Sun et al. , 2017 ) , DART ( Laskey et al. , 2017 ) , THOR Sun et al . ( 2018a ) . Our setting is that we have only clean demonstrations trajectories and a sparse reward ( but we still hope to learn the self-correctable policy . ) Ho & Ermon ( 2016 ) ; Wang et al . ( 2017 ) ; Schroecker et al . ( 2018 ) successfully combine generative models in the setting where a large amount of environment interaction without rewards are allowed . The sample efficiency of ( Ho & Ermon , 2016 ) has been improved in various ways , including maximum mean discrepancy minimization ( Kim & Park , 2018 ) , a Bayesian formulation of GAIL ( Jeon et al. , 2018 ) , using an off-policy RL algorithm and solving reward bias problem ( Kostrikov et al. , 2018 ) , and bypassing the learning of reward function ( Sasaki et al. , 2018 ) . By contrast , we would like to minimize the amount of environment interactions needed , but are allowed to access a sparse reward . The work ( Schroecker & Isbell , 2017 ) also aims to learn policies that can stay close to the demonstration sets , but through a quite different approach of estimating the true MAP estimate of the policy . The algorithm also requires environment interactions , whereas one of our main goals is to improve upon behavioral cloning without any environment interactions . Inverse reinforcement learning ( e.g. , see ( Abbeel & Ng , 2004 ; Ng et al. , 2000 ; Ziebart et al. , 2008 ; Finn et al. , 2016a ; b ; Fu et al. 
, 2017 ) ) is another important and successful line of ideas for imitation learning . It relates to our approach in the sense that it aims to learn a reward function that the expert is optimizing . In contrast , we construct a model to learn the value function ( of the trivial sparse reward R ( s , a ) = −1 ) , rather than the reward function . Some of these works ( e.g. , ( Finn et al. , 2016a ; b ; Fu et al. , 2017 ) ) use techniques that are reminiscent of negative sampling or contrastive learning , although unlike our methods , they use “ negative samples ” that are sampled from the environments . Leveraging demonstrations for sample-efficient reinforcement learning . Demonstrations have been widely used to improve the efficiency of RL ( Kim et al. , 2013 ; Chemali & Lazaric , 2015 ; Piot et al. , 2014 ; Sasaki et al. , 2018 ) , and a common paradigm for continuous state and action space is to initialize with RL algorithms with a good policy or Q function ( Rajeswaran et al. , 2017 ; Nair et al. , 2018 ; Večerík et al. , 2017 ; Hester et al. , 2018 ; Gao et al. , 2018 ) . We experimentally compare with the previous state-of-the-art algorithm in Nair et al . ( 2018 ) on the same type of tasks . Gao et al . ( 2018 ) has introduced soft version of actor-critic to tackle the false extrapolation of Q in the argument of a when the action space is discrete . In contrast , we deal with the extrapolation of the states in a continuous state and action space . Model-based reinforcement learning . Even though we will learn a dynamical model in our algorithms , we do not use it to generate fictitious samples for planning . Instead , the learned dynamics are only used in combination with the value function to get a Q function . Therefore , we do not consider our algorithm as model-based techniques . We refer to ( Kurutach et al. , 2018 ; Clavera et al. , 2018 ; Sun et al. , 2018b ; Chua et al. , 2018 ; Sanchez-Gonzalez et al. , 2018 ; Pascanu et al. , 2017 ; Khansari-Zadeh & Billard , 2011 ; Luo et al. , 2018 ) and the reference therein for recent work on model-based RL . Off-policy reinforcement learning There is a large body of prior works in the domain of off-policy RL , including extensions of policy gradient ( Gu et al. , 2016 ; Degris et al. , 2012 ; Wang et al. , 2016 ) or Q-learning ( Watkins & Dayan , 1992 ; Haarnoja et al. , 2018 ; Munos et al. , 2016 ) . Fujimoto et al . ( 2018a ) propose to solve off-policy reinforcement learning by constraining the action space , and Fujimoto et al . ( 2018c ) use double Q-learning ( Van Hasselt et al. , 2016 ) to alleviate the optimistic extrapolation issue . In contrast , our method adjusts the erroneously extrapolated value function by explicitly penalizing the unseen states ( which is customized to the particular demonstration offpolicy data ) . For most of the off-policy methods , their convergence are based on the assumption of visiting each state-action pair sufficiently many times . In the learning from demonstration setting , the demonstrations states are highly biased or structured ; thus off-policy method may not be able to learn much from the demonstrations . 3 PROBLEM SETUP AND CHALLENGES . We consider a setting with a deterministic MDP with continuous state and action space , and sparse rewards . Let S = Rd be the state space andA = Rk be the action space , and letM ? : Rd×Rk → Rd be the deterministic dynamics . At test time , a random initial state s0 is generated from some distribution Ds0 . 
We assume Ds0 has a low-dimensional bounded support because typically initial states have special structures . We aim to find a policy π such that executing π from state s0 will lead to a set of goal states G. All the goal states are terminal states , and we run the policy for at most T steps if none of the goal states is reached . Let τ = ( s0 , a1 , s1 , . . . , ) be the trajectory obtained by executing a deterministic policy π from s0 , where at = π ( st ) , and st+1 = M ? ( st , at ) . The success rate of the policy π is defined as succ ( π ) = E [ 1 { ∃t ≤ T , st ∈ G } ] ( 3.1 ) where the expectation is taken over the randomness of s0 . Note that the problem comes with a natural sparse reward : R ( s , a ) = −1 for every s and a . This will encourage reaching the goal with as small number of steps as possible : the total payoff of a trajectory is equal to negative the number of steps if the trajectory succeeds , or −T otherwise . Let πe be an expert policy 1 from which a set of n demonstrations are sampled . Concretely , n independent initial states { s ( i ) 0 } ni=1 from Ds0 are generated , and the expert executes πe to collect a set of n trajectories { τ ( i ) } ni=1 . We only have the access to the trajectories but not the expert policy itself . We will design algorithms for two different settings : Imitation learning without environment interactions : The goal is to learn a policy π from the demonstration trajectories { τ ( i ) } ni=1 without having any additional interactions with the environment . Leveraging demonstrations in reinforcement learning : Here , in addition to the demonstrations , we can also interact with the environment ( by sampling s0 ∼ Ds0 and executing a policy ) and observe if the trajectory reaches the goal . We aim is to minimize the amount of environment interactions by efficiently leveraging the demonstrations . Let U be the set of states that can be visited by the demonstration policy from a random state s0 with positive probability . Throughout this paper , we consider the situation where the set U is only a small subset or a low-dimensional manifold of the entire state space . This is typical for continuous state space control problems in robotics , because the expert policy may only visit a very special kind of states that are the most efficient for reaching the goal . For example , in the toy example in Figure 1 , the set U only contains those entries with black edges.2 To put our theoretical motivation in Section 4 into context , next we summarize a few challenges of imitation learning that are particularly caused by that U is only a small subset of the state space . Cascading errors for behavioral cloning . As pointed out by Bagnell ( 2015 ) ; Ross & Bagnell ( 2010 ) , the errors of the policy can compound into a long sequence of mistakes and in the worst case cascade quadratically in the number of time steps T . From a statistical point of view , the fundamental issue is that the distribution of the states that a learned policy may encounter is different from the demonstration state distribution . Concretely , the behavioral cloning πBC performs well on the states in U but not on those states far away from U . However , small errors of the learned policy can drive the state to leave U , and then the errors compound as we move further and further away from U . As shown in Section 4 , our key idea is to design policies that correct themselves to stay close to the set U . Degeneracy in learning value or Q functions from only demonstrations . 
When U is a small subset or a low-dimensional manifold of the state space , off-policy evaluation of V πe and Qπe is fundamentally problematic in the following sense . The expert policy πe is not uniquely defined outside U because any arbitrary extension of πe outside U would not affect the performance of the expert policy ( because those states outside U will never be visited by πe from s0 ∼ Ds0 ) . As a result , the value function V πe and Qπe is not uniquely defined outside U . In Section 4 , we will propose a conservative extrapolation of the value function that encourages the policy to stay close to U . Fitting Qπe is in fact even more problematic . We refer to Section A for detailed discussions and why our approach can alleviate the problem . Success and challenges of initializing RL with imitation learning . A successful paradigm for sample-efficient RL is to initialize the RL policy by some coarse imitation learning algorithm such as BC ( Rajeswaran et al. , 2017 ; Večerík et al. , 2017 ; Hester et al. , 2018 ; Nair et al. , 2018 ; Gao et al. , 2018 ) . However , the authors suspect that the method can still be improved , because the value function or the Q function are only randomly initialized so that many samples are burned to warm them up . As alluded before and shown in Section 4 , we will propose a way to learn a value function from the demonstrations so that the following RL algorithm can be initialized by a policy , value function , and Q function ( which is a composition of value and dynamical model ) and thus converge faster .
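The negative-sampling construction for a conservatively-extrapolated value function can be sketched as follows. This is only an illustration of the idea of penalizing value targets for perturbed, off-demonstration states in proportion to their distance from the original state; the value-network architecture, noise scale, penalty coefficient and the synthetic demonstration targets are placeholder choices rather than the paper's settings.

import torch
import torch.nn as nn

state_dim = 10
value_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(value_net.parameters(), lr=1e-3)

# Stand-ins: demonstration states and value targets estimated along the demonstrations
# (with the sparse reward R = -1, a state's value is minus the remaining steps to the goal).
demo_states = torch.randn(512, state_dim)
demo_values = -torch.randint(1, 50, (512, 1)).float()

def negative_sampling_step(noise_scale=0.1, penalty=1.0):
    # Perturb demonstration states to create "negative" states off the demonstration
    # manifold, and push their value targets below the originals by an amount that
    # grows with the size of the perturbation.
    noise = noise_scale * torch.randn_like(demo_states)
    neg_states = demo_states + noise
    neg_values = demo_values - penalty * noise.norm(dim=1, keepdim=True)

    loss = ((value_net(demo_states) - demo_values) ** 2).mean() \
         + ((value_net(neg_states) - neg_values) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

for _ in range(100):
    negative_sampling_step()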
This paper tackles an issue imitation learning approaches face. More specifically, policies learned in this manner can often fail when they encounter new states not seen in demonstrations. The paper proposes a method for learning value functions that are more conservative on unseen states, which encourages the learned policies to stay within the distribution of training states. Theoretical results are derived to provide some support for the approach. A practical algorithm is also presented and experiments on continuous control tasks display the effectiveness of the method, with particularly good results on imitation learning followed by reinforcement learning.
SP:812c4e2bd2b3e6b25fc6869775bea958498cbfd1
Learning Self-Correctable Policies and Value Functions from Demonstrations with Negative Sampling
Imitation learning , followed by reinforcement learning algorithms , is a promising paradigm to solve complex control tasks sample-efficiently . However , learning from demonstrations often suffers from the covariate shift problem , which results in cascading errors of the learned policy . We introduce a notion of conservativelyextrapolated value functions , which provably lead to policies with self-correction . We design an algorithm Value Iteration with Negative Sampling ( VINS ) that practically learns such value functions with conservative extrapolation . We show that VINS can correct mistakes of the behavioral cloning policy on simulated robotics benchmark tasks . We also propose the algorithm of using VINS to initialize a reinforcement learning algorithm , which is shown to outperform prior works in sample efficiency . 1 INTRODUCTION . Reinforcement learning ( RL ) algorithms , especially with sparse rewards , often require a large amount of trial-and-errors . Imitation learning from a small number of demonstrations followed by RL finetuning is a promising paradigm to improve the sample efficiency ( Rajeswaran et al. , 2017 ; Večerík et al. , 2017 ; Hester et al. , 2018 ; Nair et al. , 2018 ; Gao et al. , 2018 ) . The key technical challenge of learning from demonstrations is the covariate shift : the distribution of the states visited by the demonstrations often has a low-dimensional support ; however , knowledge learned from this distribution may not necessarily transfer to other distributions of interests . This phenomenon applies to both learning the policy and the value function . The policy learned from behavioral cloning has compounding errors after we execute the policy for multiple steps and reach unseen states ( Bagnell , 2015 ; Ross & Bagnell , 2010 ) . The value function learned from the demonstrations can also extrapolate falsely to unseen states . See Figure 1a for an illustration of the false extrapolation in a toy environment . We develop an algorithm that learns a value function that extrapolates to unseen states more conservatively , as an approach to attack the optimistic extrapolation problem ( Fujimoto et al. , 2018a ) . Consider a state s in the demonstration and its nearby state s̃ that is not in the demonstration . The key intuition is that s̃ should have a lower value than s , because otherwise s̃ likely should have been visited by the demonstrations in the first place . If a value function has this property for most of the pair ( s , s̃ ) of this type , the corresponding policy will tend to correct its errors by driving back to the demonstration states because the demonstration states have locally higher values . We formalize the intuition in Section 4 by defining the so-called conservatively-extrapolated value function , which is guaranteed to induce a policy that stays close to the demonstrations states ( Theorem 4.4 ) . In Section 5 , we design a practical algorithm for learning the conservatively-extrapolated value function by a negative sampling technique inspired by work on learning embeddings Mikolov et al . ( 2013 ) ; Gutmann & Hyvärinen ( 2012 ) . We also learn a dynamical model by standard supervised learning so that we compute actions by maximizing the values of the predicted next states . This algorithm does not use any additional environment interactions , and we show that it empirically helps correct errors of the behavioral cloning policy . 
When additional environment interactions are available , we use the learned value function and the dynamical model to initialize an RL algorithm . This approach relieves the inefficiency in the prior work ( Hester et al. , 2018 ; Nair et al. , 2018 ; Rajeswaran et al. , 2017 ) that the randomly-initialized Q functions require a significant amount of time and samples to be warmed up , even though the initial policy already has a non-trivial success rate . Empirically , the proposed algorithm outperforms the prior work in the number of environment interactions needed to achieve near-optimal success rate . In summary , our main contributions are : 1 ) we formalize the notion of values functions with conservative extrapolation which are proved to induce policies that stay close to demonstration states and achieve near-optimal performances , 2 ) we propose the algorithm Value Iteration with Negative Sampling ( VINS ) that outperforms behavioral cloning on three simulated robotics benchmark tasks with sparse rewards , and 3 ) we show that initializing an RL algorithm from VINS outperforms prior work in sample efficiency on the same set of benchmark tasks . 2 RELATED WORK . Imitation learning . Imitation learning is commonly adopted as a standard approach in robotics ( Pomerleau , 1989 ; Schaal , 1997 ; Argall et al. , 2009 ; Osa et al. , 2017 ; Ye & Alterovitz , 2017 ; Aleotti & Caselli , 2006 ; Lawitzky et al. , 2012 ; Torabi et al. , 2018 ; Le et al. , 2017 ; 2018 ) and many other areas such as playing games ( Mnih et al. , 2013 ) . Behavioral cloning ( Bain & Sommut , 1999 ) is one of the underlying central approaches . See Osa et al . ( 2018 ) for a thorough survey and more references therein . If we are allowed to access an expert policy ( instead of trajectories ) or an approximate value function , in the training time or in the phase of collecting demonstrations , then , stronger algorithms can be designed , such as DAgger ( Ross et al. , 2011 ) , AggreVaTe ( Ross & Bagnell , 2014 ) , AggreVaTeD ( Sun et al. , 2017 ) , DART ( Laskey et al. , 2017 ) , THOR Sun et al . ( 2018a ) . Our setting is that we have only clean demonstrations trajectories and a sparse reward ( but we still hope to learn the self-correctable policy . ) Ho & Ermon ( 2016 ) ; Wang et al . ( 2017 ) ; Schroecker et al . ( 2018 ) successfully combine generative models in the setting where a large amount of environment interaction without rewards are allowed . The sample efficiency of ( Ho & Ermon , 2016 ) has been improved in various ways , including maximum mean discrepancy minimization ( Kim & Park , 2018 ) , a Bayesian formulation of GAIL ( Jeon et al. , 2018 ) , using an off-policy RL algorithm and solving reward bias problem ( Kostrikov et al. , 2018 ) , and bypassing the learning of reward function ( Sasaki et al. , 2018 ) . By contrast , we would like to minimize the amount of environment interactions needed , but are allowed to access a sparse reward . The work ( Schroecker & Isbell , 2017 ) also aims to learn policies that can stay close to the demonstration sets , but through a quite different approach of estimating the true MAP estimate of the policy . The algorithm also requires environment interactions , whereas one of our main goals is to improve upon behavioral cloning without any environment interactions . Inverse reinforcement learning ( e.g. , see ( Abbeel & Ng , 2004 ; Ng et al. , 2000 ; Ziebart et al. , 2008 ; Finn et al. , 2016a ; b ; Fu et al. 
, 2017 ) ) is another important and successful line of ideas for imitation learning . It relates to our approach in the sense that it aims to learn a reward function that the expert is optimizing . In contrast , we construct a model to learn the value function ( of the trivial sparse reward R ( s , a ) = −1 ) , rather than the reward function . Some of these works ( e.g. , ( Finn et al. , 2016a ; b ; Fu et al. , 2017 ) ) use techniques that are reminiscent of negative sampling or contrastive learning , although unlike our methods , they use “ negative samples ” that are sampled from the environments . Leveraging demonstrations for sample-efficient reinforcement learning . Demonstrations have been widely used to improve the efficiency of RL ( Kim et al. , 2013 ; Chemali & Lazaric , 2015 ; Piot et al. , 2014 ; Sasaki et al. , 2018 ) , and a common paradigm for continuous state and action space is to initialize with RL algorithms with a good policy or Q function ( Rajeswaran et al. , 2017 ; Nair et al. , 2018 ; Večerík et al. , 2017 ; Hester et al. , 2018 ; Gao et al. , 2018 ) . We experimentally compare with the previous state-of-the-art algorithm in Nair et al . ( 2018 ) on the same type of tasks . Gao et al . ( 2018 ) has introduced soft version of actor-critic to tackle the false extrapolation of Q in the argument of a when the action space is discrete . In contrast , we deal with the extrapolation of the states in a continuous state and action space . Model-based reinforcement learning . Even though we will learn a dynamical model in our algorithms , we do not use it to generate fictitious samples for planning . Instead , the learned dynamics are only used in combination with the value function to get a Q function . Therefore , we do not consider our algorithm as model-based techniques . We refer to ( Kurutach et al. , 2018 ; Clavera et al. , 2018 ; Sun et al. , 2018b ; Chua et al. , 2018 ; Sanchez-Gonzalez et al. , 2018 ; Pascanu et al. , 2017 ; Khansari-Zadeh & Billard , 2011 ; Luo et al. , 2018 ) and the reference therein for recent work on model-based RL . Off-policy reinforcement learning There is a large body of prior works in the domain of off-policy RL , including extensions of policy gradient ( Gu et al. , 2016 ; Degris et al. , 2012 ; Wang et al. , 2016 ) or Q-learning ( Watkins & Dayan , 1992 ; Haarnoja et al. , 2018 ; Munos et al. , 2016 ) . Fujimoto et al . ( 2018a ) propose to solve off-policy reinforcement learning by constraining the action space , and Fujimoto et al . ( 2018c ) use double Q-learning ( Van Hasselt et al. , 2016 ) to alleviate the optimistic extrapolation issue . In contrast , our method adjusts the erroneously extrapolated value function by explicitly penalizing the unseen states ( which is customized to the particular demonstration offpolicy data ) . For most of the off-policy methods , their convergence are based on the assumption of visiting each state-action pair sufficiently many times . In the learning from demonstration setting , the demonstrations states are highly biased or structured ; thus off-policy method may not be able to learn much from the demonstrations . 3 PROBLEM SETUP AND CHALLENGES . We consider a setting with a deterministic MDP with continuous state and action space , and sparse rewards . Let S = Rd be the state space andA = Rk be the action space , and letM ? : Rd×Rk → Rd be the deterministic dynamics . At test time , a random initial state s0 is generated from some distribution Ds0 . 
We assume Ds0 has a low-dimensional bounded support because typically initial states have special structures . We aim to find a policy π such that executing π from state s0 will lead to a set of goal states G. All the goal states are terminal states , and we run the policy for at most T steps if none of the goal states is reached . Let τ = ( s0 , a1 , s1 , . . . , ) be the trajectory obtained by executing a deterministic policy π from s0 , where at = π ( st ) , and st+1 = M ? ( st , at ) . The success rate of the policy π is defined as succ ( π ) = E [ 1 { ∃t ≤ T , st ∈ G } ] ( 3.1 ) where the expectation is taken over the randomness of s0 . Note that the problem comes with a natural sparse reward : R ( s , a ) = −1 for every s and a . This will encourage reaching the goal with as small number of steps as possible : the total payoff of a trajectory is equal to negative the number of steps if the trajectory succeeds , or −T otherwise . Let πe be an expert policy 1 from which a set of n demonstrations are sampled . Concretely , n independent initial states { s ( i ) 0 } ni=1 from Ds0 are generated , and the expert executes πe to collect a set of n trajectories { τ ( i ) } ni=1 . We only have the access to the trajectories but not the expert policy itself . We will design algorithms for two different settings : Imitation learning without environment interactions : The goal is to learn a policy π from the demonstration trajectories { τ ( i ) } ni=1 without having any additional interactions with the environment . Leveraging demonstrations in reinforcement learning : Here , in addition to the demonstrations , we can also interact with the environment ( by sampling s0 ∼ Ds0 and executing a policy ) and observe if the trajectory reaches the goal . We aim is to minimize the amount of environment interactions by efficiently leveraging the demonstrations . Let U be the set of states that can be visited by the demonstration policy from a random state s0 with positive probability . Throughout this paper , we consider the situation where the set U is only a small subset or a low-dimensional manifold of the entire state space . This is typical for continuous state space control problems in robotics , because the expert policy may only visit a very special kind of states that are the most efficient for reaching the goal . For example , in the toy example in Figure 1 , the set U only contains those entries with black edges.2 To put our theoretical motivation in Section 4 into context , next we summarize a few challenges of imitation learning that are particularly caused by that U is only a small subset of the state space . Cascading errors for behavioral cloning . As pointed out by Bagnell ( 2015 ) ; Ross & Bagnell ( 2010 ) , the errors of the policy can compound into a long sequence of mistakes and in the worst case cascade quadratically in the number of time steps T . From a statistical point of view , the fundamental issue is that the distribution of the states that a learned policy may encounter is different from the demonstration state distribution . Concretely , the behavioral cloning πBC performs well on the states in U but not on those states far away from U . However , small errors of the learned policy can drive the state to leave U , and then the errors compound as we move further and further away from U . As shown in Section 4 , our key idea is to design policies that correct themselves to stay close to the set U . Degeneracy in learning value or Q functions from only demonstrations . 
When U is a small subset or a low-dimensional manifold of the state space , off-policy evaluation of V πe and Qπe is fundamentally problematic in the following sense . The expert policy πe is not uniquely defined outside U because any arbitrary extension of πe outside U would not affect the performance of the expert policy ( because those states outside U will never be visited by πe from s0 ∼ Ds0 ) . As a result , the value function V πe and Qπe is not uniquely defined outside U . In Section 4 , we will propose a conservative extrapolation of the value function that encourages the policy to stay close to U . Fitting Qπe is in fact even more problematic . We refer to Section A for detailed discussions and why our approach can alleviate the problem . Success and challenges of initializing RL with imitation learning . A successful paradigm for sample-efficient RL is to initialize the RL policy by some coarse imitation learning algorithm such as BC ( Rajeswaran et al. , 2017 ; Večerík et al. , 2017 ; Hester et al. , 2018 ; Nair et al. , 2018 ; Gao et al. , 2018 ) . However , the authors suspect that the method can still be improved , because the value function or the Q function are only randomly initialized so that many samples are burned to warm them up . As alluded before and shown in Section 4 , we will propose a way to learn a value function from the demonstrations so that the following RL algorithm can be initialized by a policy , value function , and Q function ( which is a composition of value and dynamical model ) and thus converge faster .
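Once such a value function and a learned one-step dynamics model are available, actions can be chosen by maximizing the value of the predicted next state, as described above. A minimal sketch follows; the candidate-generation scheme (Gaussian perturbations of the cloned policy's action) and all network shapes are assumptions of the illustration.

import torch
import torch.nn as nn

state_dim, action_dim = 10, 3
dynamics = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
                         nn.Linear(64, state_dim))   # learned model, predicts the next state
value_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, 1))
bc_policy = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, action_dim))

def select_action(state, num_candidates=32, noise_scale=0.1):
    # Score each candidate action a by V(M(s, a)), i.e. the value of the state the
    # learned dynamics model predicts, and keep the best-scoring candidate.
    with torch.no_grad():
        base = bc_policy(state)
        candidates = base + noise_scale * torch.randn(num_candidates, action_dim)
        states = state.expand(num_candidates, state_dim)
        next_states = dynamics(torch.cat([states, candidates], dim=1))
        scores = value_net(next_states).squeeze(1)
        return candidates[scores.argmax()]

print(select_action(torch.randn(state_dim)))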
This work presents the value iteration with negative sampling (VINS) algorithm, a method for accelerating reinforcement learning using expert demonstrations. In addition to learning an expert policy through behavioral cloning, VINS learns an initial value function which is biased to assign smaller expected values to states not encountered during demonstrations. This is done by augmenting the demonstration data with states that have been randomly perturbed, and penalizing the value targets for these states by a factor proportional to their Euclidean distance to the original state. In addition to the policy and value function, VINS also learns a one-step dynamics model used to select actions against the learned value function. As the value function learned in VINS is only defined with respect to the current state, action values are estimated by sampling future states using the learned model, and computing the value of these sampled states.
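The negative-sampling step summarized above can be sketched as follows. This is only an illustration under assumed names (noise_scale, mu) and an assumed target form (observed return minus a distance-proportional penalty); it is not the authors' exact recipe.

import numpy as np

def build_value_targets(demo_states, demo_returns, noise_scale=0.1, mu=1.0, rng=np.random):
    # Demonstration states keep their observed returns as value targets; randomly perturbed
    # copies get targets penalized in proportion to their Euclidean distance from the original
    # state, so the learned value function prefers staying near the demonstrated states.
    states, targets = [], []
    for s, ret in zip(demo_states, demo_returns):
        s = np.asarray(s, dtype=float)
        states.append(s)
        targets.append(ret)
        s_neg = s + noise_scale * rng.randn(*s.shape)   # perturbed "negative" state
        states.append(s_neg)
        targets.append(ret - mu * np.linalg.norm(s_neg - s))
    return np.array(states), np.array(targets)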
SP:812c4e2bd2b3e6b25fc6869775bea958498cbfd1
Gradient-Based Neural DAG Learning
1 INTRODUCTION. Structure learning and causal inference have many important applications in different areas of science such as genetics (Koller & Friedman, 2009; Peters et al., 2017), biology (Sachs et al., 2005) and economics (Pearl, 2009). Bayesian networks (BN), which encode conditional independencies using directed acyclic graphs (DAG), are powerful models which are both interpretable and computationally tractable. Causal graphical models (CGM) (Peters et al., 2017) are BNs which support interventional queries like: What will happen if someone external to the system intervenes on variable X? Recent work suggests that causality could partially solve challenges faced by current machine learning systems such as robustness to out-of-distribution samples, adaptability and explainability (Pearl, 2019; Magliacane et al., 2018). However, structure and causal learning are daunting tasks due to both the combinatorial nature of the space of structures (the number of DAGs grows super exponentially with the number of nodes) and the question of structure identifiability (see Section 2.2). Nevertheless, the known qualities of these graphical models and their promise of improving machine intelligence render the quest for structure/causal learning appealing. The typical motivation for learning a causal graphical model is to predict the effect of various interventions. A CGM can be best estimated when given interventional data, but interventions are often costly or impossible to obtain. As an alternative, one can use exclusively observational data and rely on different assumptions which make the graph identifiable from the distribution (see Section 2.2). This is the approach employed in this paper. We propose a score-based method (Koller & Friedman, 2009) for structure learning named GraN-DAG which makes use of a recent reformulation of the original combinatorial problem of finding an optimal DAG into a continuous constrained optimization problem. In the original method named NOTEARS (Zheng et al., 2018), the directed graph is encoded as a weighted adjacency matrix which represents coefficients in a linear structural equation model (SEM) (Pearl, 2009) (see Section 2.3), and acyclicity is enforced using a constraint which is both efficiently computable and easily differentiable, thus allowing the use of numerical solvers. This continuous approach improved upon popular methods while avoiding the design of greedy algorithms based on heuristics. Our first contribution is to extend the framework of Zheng et al. (2018) to deal with nonlinear relationships between variables using neural networks (NN) (Goodfellow et al., 2016). To adapt the acyclicity constraint to our nonlinear model, we use an argument similar to what is used in Zheng et al. (2018) and apply it first at the level of neural network paths and then at the level of graph paths. Although GraN-DAG is general enough to deal with a large variety of parametric families of conditional probability distributions, our experiments focus on the special case of nonlinear Gaussian additive noise models since, under specific assumptions, it provides appealing theoretical guarantees easing the comparison to other graph search procedures (see Sections 2.2 & 3.3).
On both synthetic and real-world tasks , we show GraN-DAG often outperforms other approaches which leverage the continuous paradigm , including DAG-GNN ( Yu et al. , 2019 ) , a recent nonlinear extension of Zheng et al . ( 2018 ) which uses an evidence lower bound as score . Our second contribution is to provide a missing empirical comparison to existing methods that support nonlinear relationships but tackle the optimization problem in its discrete form using greedy search procedures , namely CAM ( Bühlmann et al. , 2014 ) and GSF ( Huang et al. , 2018 ) . We show that GraN-DAG is competitive on the wide range of tasks we considered , while using pre- and post-processing steps similar to CAM . We provide an implementation of GraN-DAG here . 2 BACKGROUND . Before presenting GraN-DAG , we review concepts relevant to structure and causal learning . 2.1 CAUSAL GRAPHICAL MODELS . We suppose the natural phenomenon of interest can be described by a random vector X ∈ Rd entailed by an underlying CGM ( PX , G ) where PX is a probability distribution over X and G = ( V , E ) is a DAG ( Peters et al. , 2017 ) . Each node j ∈ V corresponds to exactly one variable in the system . Let πGj denote the set of parents of node j in G and let XπGj denote the random vector containing the variables corresponding to the parents of j in G. Throughout the paper , we assume there are no hidden variables . In a CGM , the distribution PX is said to be Markov to G , i.e . we can write the probability density function ( pdf ) of PX as p ( x ) = ∏d j=1 pj ( xj |xπGj ) where pj ( xj |xπGj ) is the conditional pdf of variable Xj given XπGj . A CGM can be thought of as a BN in which directed edges are given a causal meaning , allowing it to answer queries regarding interventional distributions ( Koller & Friedman , 2009 ) . 2.2 STRUCTURE IDENTIFIABILITY . In general , it is impossible to recover G given only samples from PX , i.e . without interventional data . It is , however , customary to rely on a set of assumptions to render the structure fully or partially identifiable . Definition 1 Given a set of assumptions A on a CGM M = ( PX , G ) , its graph G is said to be identifiable from PX if there exists no other CGM M̃ = ( P̃X , G̃ ) satisfying all assumptions in A such that G̃ 6= G and P̃X = PX . There are many examples of graph identifiability results for continuous variables ( Peters et al. , 2014 ; Peters & Bühlman , 2014 ; Shimizu et al. , 2006 ; Zhang & Hyvärinen , 2009 ) as well as for discrete variables ( Peters et al. , 2011 ) . These results are obtained by assuming that the conditional densities belong to a specific parametric family . For example , if one assumes that the distribution PX is entailed by a structural equation model of the form Xj : = fj ( XπGj ) +Nj with Nj ∼ N ( 0 , σ2j ) ∀j ∈ V ( 1 ) where fj is a nonlinear function satisfying some mild regularity conditions and the noises Nj are mutually independent , then G is identifiable from PX ( see Peters et al . ( 2014 ) for the complete theorem and its proof ) . This is a particular instance of additive noise models ( ANM ) . We will make use of this result in our experiments in Section 4 . One can consider weaker assumptions such as faithfulness ( Peters et al. , 2017 ) . This assumption allows one to identify , not G itself , but the Markov equivalence class to which it belongs ( Spirtes et al. , 2000 ) . 
A Markov equivalence class is a set of DAGs which encode exactly the same set of conditional independence statements and can be characterized by a graphical object named a completed partially directed acyclic graph (CPDAG) (Koller & Friedman, 2009; Peters et al., 2017). Some algorithms we use as baselines in Section 4 output only a CPDAG. 2.3 NOTEARS: CONTINUOUS OPTIMIZATION FOR STRUCTURE LEARNING. Structure learning is the problem of learning G using a data set of n samples {x^(1), ..., x^(n)} from PX. Score-based approaches cast this problem as an optimization problem, i.e. Ĝ = arg max_{G ∈ DAG} S(G), where S(G) is a regularized maximum likelihood under graph G. Since the number of DAGs is super exponential in the number of nodes, most methods rely on various heuristic greedy search procedures to approximately solve the problem (see Section 5 for a review). We now present the work of Zheng et al. (2018), which proposes to cast this combinatorial optimization problem into a continuous constrained one. To do so, the authors propose to encode the graph G on d nodes as a weighted adjacency matrix U = [u_1 | ... | u_d] ∈ R^{d×d} which represents (possibly negative) coefficients in a linear SEM of the form X_j := u_j^T X + N_j ∀ j, where N_j is a noise variable. Let G_U be the directed graph associated with the SEM and let A_U be the (binary) adjacency matrix associated with G_U. One can see that the following equivalence holds: (A_U)_{ij} = 0 ⟺ U_{ij} = 0 (2). To make sure G_U is acyclic, the authors propose the following constraint on U: Tr(e^{U ∘ U}) − d = 0 (3), where e^M := Σ_{k=0}^∞ M^k / k! is the matrix exponential and ∘ is the Hadamard (element-wise) product. To see why this constraint characterizes acyclicity, first note that (A_U^k)_{jj} is the number of cycles of length k passing through node j in graph G_U. Clearly, for G_U to be acyclic, we must have Tr A_U^k = 0 for k = 1, 2, ..., ∞. By equivalence (2), this is true when Tr (U ∘ U)^k = 0 for k = 1, 2, ..., ∞. From there, one can simply apply the definition of the matrix exponential to see why constraint (3) characterizes acyclicity (see Zheng et al. (2018) for the full development). The authors propose to use a regularized negative least-squares score (the maximum likelihood for a Gaussian noise model). The resulting continuous constrained problem is max_U S(U, X) := −(1/2n) ‖X − XU‖_F^2 − λ‖U‖_1 s.t. Tr(e^{U ∘ U}) − d = 0 (4), where X ∈ R^{n×d} is the design matrix containing all n samples. The nature of the problem has been drastically changed: we went from a combinatorial to a continuous problem. The difficulties of combinatorial optimization have been replaced by those of non-convex optimization, since the feasible set is non-convex. Nevertheless, a standard numerical solver for constrained optimization such as an augmented Lagrangian method (Bertsekas, 1999) can be applied to get an approximate solution, hence there is no need to design a greedy search procedure. Moreover, this approach is more global than greedy methods in the sense that the whole matrix U is updated at each iteration. Continuous approaches to combinatorial optimization have sometimes demonstrated improved performance over discrete approaches in the literature (see for example Alayrac et al. (2018, §5.2), where they solve the multiple sequence alignment problem with a continuous optimization method). 3 GRAN-DAG: GRADIENT-BASED NEURAL DAG LEARNING. We propose a new nonlinear extension to the framework presented in Section 2.3.
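Before describing the extension, here is a minimal illustrative sketch of the NOTEARS quantities from Section 2.3 above (Equations (3) and (4)): the acyclicity penalty and the regularized least-squares score for a given weight matrix. It is an example only and does not reproduce the augmented Lagrangian solver used by the authors.

import numpy as np
from scipy.linalg import expm

def acyclicity(U):
    # h(U) = Tr(exp(U ∘ U)) − d, which is zero iff the weighted graph encoded by U has no cycles.
    d = U.shape[0]
    return np.trace(expm(U * U)) - d

def notears_score(U, X, lam=0.1):
    # Regularized negative least-squares score: −(1/2n)‖X − XU‖_F² − λ‖U‖_1.
    n = X.shape[0]
    residual = X - X @ U
    return -np.sum(residual ** 2) / (2 * n) - lam * np.sum(np.abs(U))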
For each variable X_j, we learn a fully connected neural network with L hidden layers parametrized by φ^(j) := {W^(1)_(j), ..., W^(L+1)_(j)}, where W^(ℓ)_(j) is the ℓ-th weight matrix of the j-th NN (biases are omitted for clarity). Each NN takes as input X_{−j} ∈ R^d, i.e. the vector X with the j-th component masked to zero, and outputs θ^(j) ∈ R^m, the m-dimensional parameter vector of the desired distribution family for variable X_j. The fully connected NNs have the following form: θ^(j) := W^(L+1)_(j) g(... g(W^(2)_(j) g(W^(1)_(j) X_{−j})) ...) ∀ j (5), where g is a nonlinearity applied element-wise. Note that the evaluation of all NNs can be parallelized on GPU. Distribution families need not be the same for each variable. Let φ := {φ^(1), ..., φ^(d)} represent all parameters of all d NNs. Without any constraint on its parameters φ^(j), neural network j models the conditional pdf p_j(x_j | x_{−j}; φ^(j)). Note that the product ∏_{j=1}^d p_j(x_j | x_{−j}; φ^(j)) does not integrate to one (i.e. it is not a joint pdf), since it does not decompose according to a DAG. We now show how one can constrain φ to make sure the product of all conditionals output by the NNs is a joint pdf. The idea is to define a new weighted adjacency matrix A_φ similar to the one encountered in Section 2.3, which can be directly used inside the constraint of Equation 3 to enforce acyclicity.
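A hedged PyTorch sketch of one such conditional network, following Equation (5): the j-th network reads the full vector X with its own coordinate masked to zero and outputs the m parameters of the conditional distribution of X_j. The layer sizes, the ReLU nonlinearity, and the choice m = 2 (e.g. a mean and a log-variance) are illustrative assumptions, not the authors' exact configuration.

import torch
import torch.nn as nn

class NodeMLP(nn.Module):
    def __init__(self, d, j, hidden=16, m=2):
        super().__init__()
        self.j = j
        self.net = nn.Sequential(
            nn.Linear(d, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, m),
        )

    def forward(self, x):
        # x has shape (batch, d); zero out the j-th column so X_j never predicts itself.
        mask = torch.ones_like(x)
        mask[:, self.j] = 0.0
        return self.net(x * mask)   # theta^(j): parameters of the conditional for X_j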
The authors propose a prediction model for directed acyclic graphs (DAGs) over a fixed set of vertices based on a neural network. The present work builds on previous work on learning DAGs with linear models, where the key constraint is (3), ensuring the acyclicity property. The proposed method performed favorably on artificial/real data compared to previous baselines.
SP:c2dfaba3df490671f8ce20bf69df96d0887aa19d
Gradient-Based Neural DAG Learning
This work addresses the problem of learning the structure of directed acyclic graphs in the presence of nonlinearities. The proposed approach is an extension of the NOTEARS algorithm which uses a neural network for each node in the graph during structure learning. This adaptation allows for non-linear relationships to be easily modeled. In addition to the proposed adaptation, the authors employ a number of heuristics from the causal discovery literature to improve the efficacy of the search. Empirical results are provided which compare the proposed algorithm to prior art.
SP:c2dfaba3df490671f8ce20bf69df96d0887aa19d
LAMOL: LAnguage MOdeling for Lifelong Language Learning
1 INTRODUCTION . The current dominant paradigm for machine learning is to run an algorithm on a given dataset to produce a trained model specifically for a particular purpose ; this is isolated learning ( Chen & Liu , 2016 , p. 150 ) . In isolated learning , the model is unable to retain and accumulate the knowledge it has learned before . When a stream of tasks are joined to be trained sequentially , isolated learning faces catastrophic forgetting ( McCloskey & Cohen , 1989 ) due to a non-stationary data distribution that biases the model ( left figure of Figure 1 ) . In contrast , lifelong learning is designed to address a stream of tasks by accumulating interconnected knowledge between learned tasks and retaining the performance of those tasks . A human easily achieves lifelong learning , but this is nontrivial for a machine ; thus lifelong learning is a vital step toward artificial general intelligence . In this paper , we focus on lifelong language learning , where a machine achieves lifelong learning on a stream of natural language processing ( NLP ) tasks . To the best of our knowledge , lifelong language learning has been studied in only a few instances ; for sentiment analysis ( Chen et al. , 2015b ; Xia et al. , 2017 ) , conversational agents ( Lee , 2017 ) , word representation learning ( Xu et al. , 2018 ) , sentence representation learning ( Liu et al. , 2019 ) , text classification , and question answering ( d ’ Autume et al. , 2019 ) . However , in all previous work , the tasks in the stream are essentially the same task but in different domains . To achieve lifelong language learning on fundamentally different tasks , we propose LAMOL — LAnguage MOdeling for Lifelong language learning . It has been shown that many NLP tasks can be considered question answering ( QA ) ( Bryan McCann & Socher , 2018 ) . Therefore , we address multiple NLP tasks with a single model by training a language model ( LM ) that generates an answer based on the context and the question . Treating QA as language modeling is beneficial because the LM can be pre-trained on a large number of sentences without any labeling ( Radford et al. , 2019 ) ; however , this does not directly solve the problem of LLL . If we train an LM on a stream of tasks , catastrophic forgetting still occurs . However , as an LM is intrinsically a text generator , we can use it to answer questions while generating pseudo-samples of ∗Equal contribution . †Work done while at National Taiwan University . the previous task to be replayed later . LAMOL is inspired by the data-based approach for LLL in which a generator learns to generate samples in previous tasks ( middle of Figure 1 ) ( Hanul Shin & Kim , 2017 ; Kemker & Kanan , 2017 ) . In contrast to previous approaches , LAMOL needs no extra generator ( right of Figure 1 ) . LAMOL is also similar to multitask training , but the model itself generates data from previous tasks instead of using real data . Our main contributions in this paper are : • We present LAMOL , a simple yet effective method for LLL . Our method has the advantages of no requirements in terms of extra memory or model capacity . We also do not need to know how many tasks to train in advance and can always train on additional tasks when needed . • Experimental results show that our methods outperform baselines and other state-of-the-art methods by a considerable margin and approaches the multitasking upper bound within 2–3 % . 
• Furthermore , we propose adding task-specific tokens during pseudo-sample generation to evenly split the generated samples among all previous tasks . This extension stabilizes LLL and is particularly useful when training on a large number of tasks . • We analyze how different amounts of pseudo-samples affect the final performance of LAMOL , considering results both with and without the task-specific tokens . • We open-source our code to facilitate further LLL research . 2 RELATED WORK . Lifelong learning research is based on regularization , architecture , or data . Here is a brief survey of works in these three categories . 2.1 REGULARIZATION-BASED METHODS . In this approach , a constraint , i.e. , a regularization term , is added to minimize deviation from trained weights while updating the weights in a new task . Most regularization based methods estimate the importance of each parameter and add the importance as a constraint to the loss function . Elastic weight consolidation ( EWC ) ( Kirkpatrick et al. , 2017 ) calculates a Fisher information matrix to estimate the sensitivity of parameters as importance . Online EWC ( Schwarz et al. , 2018 ) is a transformed version of EWC . Instead of tracking the importance of parameters for each task , online EWC simply accumulates the importance of the stream of tasks . Synaptic intelligence ( SI ) ( Zenke et al. , 2017 ) assigns importance to each parameter according to its contribution to the change in the total loss . Memory aware synapses ( MAS ) ( Aljundi et al. , 2018 ) estimate importance via the gradients of the model outputs . In contrast to estimating the importance of weights , incremental moment matching ( IMM ) ( Lee et al. , 2017 ) matches the moment of weights between different tasks . 2.2 ARCHITECTURE-BASED METHODS . For this category , the main idea is to assign a dedicated capacity inside a model for each task . After completing a task , the weights are frozen and may not be changed thereafter . Some methods allow models to expand , whereas some fix the size but must allocate capacity for tasks at the beginning . Progressive neural networks ( Rusu et al. , 2016 ) utilize one column of the neural network per task . Once a new task is trained , progressive neural networks augment a new column of the neural network for the task while freezing the past trained columns . Columns that have been frozen are not allowed to change but are connected to the new column to transfer knowledge from old tasks . Towards Training Recurrent Neural Networks for Lifelong Learning ( Sodhani et al. , 2018 ) unifies Gradient episodic memory ( Lopez-Paz et al. , 2017 ) and Net2Net ( Chen et al. , 2015a ) . Using the curriculumbased setting , the model learns the tasks in easy-to-hard order . The model alleviates the forgetting problem by GEM method , and if it fails to learn the current task and has not been expanded yet , the model will expand to a larger model by the Net2Net approach . PathNet ( Fernando et al. , 2017 ) reuses subsets of a neural network to transfer knowledge between tasks . Unlike progressive neural networks , PathNet does not allow the model to expand . Instead , it builds a huge fixed-size model composed of a neural network and paths between different layers of the neural networks . While training a task , it selects the best combination of neural networks and paths for that particular task . Similar to progressive neural networks , selected parts are fixed to allow only inference and not training . 
Inspired by network pruning , PackNet ( Mallya & Lazebnik , 2018 ) prunes and re-trains the network iteratively to pack numerous tasks into a single huge model . This category has some drawbacks . When resources are limited , model expansion is prohibited . Also , some architecture-based methods require the number of tasks in advance to allocate the capacity for the tasks , which greatly reduces their practicality . 2.3 DATA-BASED METHODS . This method restricts weights through the data distribution of old tasks . One data-based approach keeps a small amount of real samples from old tasks , and the other distills the knowledge from old data and imagines pseudo-data of old tasks later on . While training a new task , the data or pseudo-data is used to prevent weights from greatly deviating from the previous status . Gradient episodic memory ( GEM ) ( Lopez-Paz et al. , 2017 ) preserves a subset of real samples from previous tasks . Utilizing these real samples during optimization helps somewhat to constrain parameter gradients . Averaged-GEM ( A-GEM ) ( Chaudhry et al. , 2018 ) is a more efficient version of GEM which achieves the same or even better performance than the original GEM . Learning without forgetting ( Li & Hoiem , 2017 ) minimizes the alteration of shared parameters by recording the outputs from old task modules on data from the new task before updating . Hanul Shin & Kim ( 2017 ) and Kemker & Kanan ( 2017 ) encode data from old tasks into a generative model system . The latter imitates the dual-memory system of the human brain , in that the model automatically decides which memory should be consolidated . Both methods replay pseudo-data of previous tasks using the generative model during training . d ’ Autume et al . ( 2019 ) investigates the performance of the episodic memory system on NLP problems . It distills the knowledge of previous tasks into episodic memory and replays it afterward . This work evaluates the method on two streams of tasks : question answering and text classification . 3 LAMOL . A pre-trained LM can generate a coherent sequence of text given a context . Thus , we propose LAMOL , a method of training a single LM that learns not only to answer the question given the context but also to generate the context , the question , and the answer given a generation token . That is , in LAMOL , a model plays the role of both LM and QA model . Hence , answering questions and generating pseudo-old samples can both be done by a single model . During LLL , these pseudo-old samples are trained with new samples from new tasks to help mitigate catastrophic forgetting . 3.1 DATA FORMATTING . Inspired by the protocol used by decaNLP ( Bryan McCann & Socher , 2018 ) , samples from the datasets we used are framed into a SQuAD-like scheme , which consists of context , question , and answer . Although the LM is simultaneously a QA model , the data format depends on the training objective . When training as a QA model , the LM learns to decode the answer after reading the context and question . On the other hand , when training as an LM , the LM learns to decode all three parts given a generation token . In addition to context , question , and answer , we add three special tokens : ANS Inserted between question and answer . As the context and question are known during inference , decoding starts after inputting ANS . EOS The last token of every example . Decoding stops when EOS is encountered . GEN The first token during pseudo-sample generation . 
Decoding starts after inputting GEN . The data formats for QA and LM training are shown in Figure 2 . 3.2 TRAINING . Assume a stream of tasks { T1 , T2 , . . . } , where the number of tasks may be unknown . Directly training the LM on these tasks sequentially results in catastrophic forgetting . Thus , before beginning training on a new task Ti , i > 1 , the model first generates pseudo samples T ′ i by top-k sampling that represent the data distribution of previous tasks T1 , . . . , Ti−1 . Then , the LM trains on the mixture of Ti and T ′ i . To balance the ratio between |Ti| and |T ′ i | , the LM generates γ|Ti| pseudo samples , where |Ti| denotes the number of samples in task Ti and γ is the sampling ratio . If the generated sample does not have exactly one ANS in it , then the sample is discarded . This happens in only 0.5 % -1 % of generated samples . During training , each sample is formatted into both the QA format and the LM format . Then , in the same optimization step , both formats are fed into the LM to minimize the QA loss LQA and LM loss LLM together . Overall , the loss is L = LQA + λLLM , where λ is the weight of the LM loss .
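The formatting and mixing steps described above can be sketched as follows. This is a hypothetical illustration of the token layout (ANS between question and answer, EOS at the end, GEN to trigger generation), of discarding malformed pseudo-samples, and of the γ|T_i| sampling ratio; the helper generate_pseudo stands in for top-k sampling from the language model and is an assumption, not an existing API. The actual LAMOL training step then combines the two objectives as L = L_QA + λ L_LM.

def format_qa(context, question, answer):
    # QA objective: the model decodes the answer after reading "context question ANS".
    return f"{context} {question} ANS {answer} EOS"

def format_lm(context, question, answer, gen_token="GEN"):
    # LM objective: the model decodes the whole example after the generation token.
    return f"{gen_token} {context} {question} ANS {answer} EOS"

def mix_with_pseudo(new_task_samples, generate_pseudo, gamma=0.2):
    # Generate gamma * |T_i| pseudo-samples of earlier tasks, keep only well-formed ones
    # (exactly one ANS token), and return the mixture of real and pseudo samples.
    n_pseudo = int(gamma * len(new_task_samples))
    pseudo = [s for s in generate_pseudo(n_pseudo) if s.count(" ANS ") == 1]
    return list(new_task_samples) + pseudo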
The paper presents a new NN architecture designed for lifelong learning of natural language processing tasks. As depicted in Figure 2, the proposed network is trained to generate the correct answers and training samples at the same time. This prevents the "catastrophic forgetting" of old tasks. Compared to older methods that train a separate generator, the performance of the proposed method is noticeably good, as shown in Fig 3. This demonstrates that the new lifelong learning approach is effective in avoiding catastrophic forgetting.
SP:4aebddd56e10489765e302e291cf41589d02b530
LAMOL: LAnguage MOdeling for Lifelong Language Learning
This paper studies the problem of lifelong language learning. The core idea underlying the algorithm includes two parts: 1. consider the NLP tasks as QA and train an LM that generates an answer based on the context and the question; 2. use the same LM to generate samples representing previous tasks before training on a new task.
SP:4aebddd56e10489765e302e291cf41589d02b530
Learning Underlying Physical Properties From Observations For Trajectory Prediction
1 INTRODUCTION . Games that follow Newton ’ s laws of physics despite being a relatively easy task for humans , remain to be a challenging task for artificially intelligent agents due to the requirements for an agent to understand underlying physical laws and relationships between available player ’ s actions and their effect in the environment . In order to predict the trajectory of a physical object that was shot using some shooting mechanism , one needs to understand the relationship between initial force that was applied to the object by a mechanism and its initial velocity , have a knowledge of hidden physical forces of the environment such as gravity and be able to use basic physical laws for the prediction . Humans , have the ability to quickly learn such physical laws and properties of objects in a given physical task from pure observations and experience with the environment . In addition , humans tend to use previously learned knowledge in similar tasks . As was found by researchers in human psychology , humans can transfer previously acquired abilities and knowledge to a new task if the domain of the original learning task overlaps with the novel one ( Council , 2000 ) . The problem of learning properties of the physical environment and its objects directly from observations and the problem of using previously acquired knowledge in a similar task are important to solve in AI as this is one of the basic abilities of human intelligence that humans learn during infancy ( Baillargeon , 1995 ) . Solving these two problems can bring AI research one step closer to achieving human-like or superhuman results in physical games . In this work we explore one of the possible approaches to these two problems by proposing a model that is able to learn underlying physical properties of objects and forces of the environment directly from observations and use the extracted physical properties in order to build a relationships between available in-game variables and related physical forces . Furthermore , our model then uses learned physical knowledge in order to accurately predict unseen objects trajectories in games that follow Newtonian physics and contain some shooting mechanism . We also explore the ability of our model to transfer learned knowledge by training a model in a 2D game and testing it in a 3D game that follows similar physics with no further training . Our approach combines modern deep learning techniques ( LeCun et al. , 2015 ) and well-known physics laws that were discovered by physicists hundreds of years ago . We show that our model automatically learns underlying physical forces directly from the small amount of observations , learns the relationships between learned physical forces with available in-game variables and uses them for prediction of unseen object ’ s trajectories . Moreover , we also show that our model allows us to easily transfer learned physical forces and knowledge to the game with similar task . In order to evaluate our model abilities to infer physical properties from observations and to predict unseen trajectories , we use two different games that follow Newtonian Physics . The first game that we use as a testing environment for our model is Science Birds . Science Birds is a clone of Angry Birds - a popular video game where the objective is to destroy all green pigs by shooting birds from a slingshot . 
The game has proven to be difficult for artificially intelligent playing agents that use deep learning, and many agents have failed to solve it in the past (Renz et al., 2019). The second game that we use as our testing environment is a Basketball 3D shooter game. In this game, the objective of a player is to shoot a ball into a basket. In order to test the ability of our model to transfer knowledge to a different game, we first train our model on a small number of shot trajectories from the Science Birds game and then test the trained model on predictions of the ball trajectory in the Basketball 3D shooting game. We compare the results of our proposed model, which is augmented with physical laws, against two baseline models. The first baseline model learns to automatically extract features from observations without knowledge of physical laws, whereas the second baseline model learns to directly predict trajectories from the given in-game variables. 2 RELATED WORK. Previous AI work in predicting the future dynamics of objects has involved deep learning approaches such as: graph neural networks for prediction of interactions between objects and their future dynamics (Battaglia et al. (2016); Watters et al. (2017); Sanchez-Gonzalez et al. (2018)), a Bidirectional LSTM and Mixture Density network for basketball trajectory prediction (Zhao et al. (2017)), and the Neural Physics Engine (Chang et al. (2016)). Some researchers have also tried to combine actual physical laws with deep learning. In one such work, researchers propose a model that learns physical properties of objects from observations (Wu et al. (2016)). Another work proposes to integrate a physics engine together with deep learning to infer physical properties (Wu et al. (2015)). However, most of the work on predicting future object dynamics is focused on learning physics from scratch or uses some known physical properties in order to train a model. This can be a problem, as in most real-world physical games the underlying physical properties are not known to the player unless one has access to the source code of the physics engine. Because of that, these properties have to be learned directly from experience with the environment without any supervision on the actual values of physical properties. Another important point is that, instead of learning physics from scratch, we can use already discovered and well-established laws of physics to our benefit. In this work we propose an approach that combines classical feedforward networks with well-known physical laws in order to guide the model's learning process. By doing so, our model learns physical properties directly from observations without any direct supervision on the actual values of these properties. Another contribution is that our model learns from a very small training dataset and generalizes well to the entire space. Furthermore, the learned values can be easily interpreted by humans, which allows us to use them in any other task in the presented test domains, and they can be easily transferred to other games with similar physics. 3 APPROACH. 3.1 BASELINE MODELS. In order to measure the advantages of combining classical physical laws with deep learning, we compare our model against pure deep learning approaches with similar architectures. 3.1.1 ENCODER BASELINE MODEL. Our first baseline model is based on the idea of autoencoders (Rumelhart et al. (1986)).
Contrary to the proposed model in Section 3.2, this model learns to automatically discover features from observations. It takes a sequence of points T = {(x_0, y_0), (x_1, y_1), ..., (x_n, y_n)} as its input and encodes it into a latent space T_enc. The encoded trajectory is then used by the decoder to reconstruct the trajectory. The second part of this baseline model consists of another MLP that learns to associate the relative position of the physical object that generated the trajectory with the learned latent space T_enc. More formally, given a trajectory T as input, an encoder f_encoder, and a decoder f_decoder, our model reconstructs a trajectory T̂ from the latent space as follows:

T̂ = f_decoder(f_encoder({(x_0, y_0), (x_1, y_1), ..., (x_n, y_n)}))   (1)

Once the trajectory is reconstructed, we compute the loss using the Mean Squared Error and update the weights of our networks:

(1/n) Σ_{i=1}^{n} (T_i − T̂_i)^2   (2)

The second part of this baseline model is another MLP f_associate that learns to associate the given initial relative position of the physical object (x_r0, y_r0) with the encoded trajectory T_enc derived in the previous step:

T̂_enc = f_associate((x_r0, y_r0))   (3)

In order to update the weights of f_associate we compute the Mean Squared Error between the two encodings T_enc and T̂_enc. After that, we predict the trajectory using the derived T̂_enc as follows: T̂ = f_decoder(T̂_enc). 3.1.2 SIMPLE BASELINE MODEL . In order to evaluate the advantages of using observations and an encoder-decoder scheme, we use a second baseline model that does not use an encoder-decoder and directly learns to predict the trajectory from the given in-game forces or the relative position of the physical object. More formally, given the relative position of the physical object (x_r0, y_r0) and an MLP f_simple, we compute the trajectory as follows:

T̂ = f_simple((x_r0, y_r0))   (4)

3.2 PHYSICS AWARE MODEL . Similarly to the first baseline model presented in Section 3.1.1, the Physics Aware Network (PhysANet) consists of two parts: a neural network that discovers the physical forces of the environment and the action that generated the given observations, and a neural network that learns the relationship between the in-game actions or forces and the predicted physical values. We refer to these two parts as InferNet and RelateNet. 3.2.1 INFERNET . The goal of InferNet is to extract the physical forces that have generated a given trajectory {(x_0, y_0), (x_1, y_1), ..., (x_n, y_n)} using guidance from known physical equations. The discovered physical forces are then plugged into the projectile motion equation in order to calculate the trajectory. InferNet consists of two small internal MLPs that are trained together, as shown in Figure 2 (Left). The first MLP takes in a batch of trajectories and predicts a single value of the gravity force for all of them. The second MLP takes in a batch of trajectories and, for each trajectory in the batch, predicts the initial velocity and angle of the shot. These predicted values are then inserted into the projectile motion equation in order to calculate the resulting trajectory. The projectile motion equation is defined as follows (Walker (2010)):

y = h + x tan(θ) − g x^2 / (2 V_0^2 cos^2(θ))   (5)

In Equation 5, h is the initial height, g is gravity, θ is the angle of the shot and V_0 is the initial velocity. Once the trajectory has been calculated, we compute the loss between the observed trajectory T and the predicted trajectory T̂ using the Mean Squared Error, in the same way as defined in Equation 2.
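To make Equation 5 and the reconstruction loss of Equation 2 concrete, the following is a minimal NumPy sketch of the fixed "physics decoder" used by InferNet; the function names and the toy parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def projectile_y(x, h, v0, theta, g):
    """Height of the projectile at horizontal offsets x (Eq. 5):
    y = h + x*tan(theta) - g*x^2 / (2*v0^2*cos^2(theta))."""
    return h + x * np.tan(theta) - g * x**2 / (2.0 * v0**2 * np.cos(theta)**2)

def reconstruction_loss(observed_xy, h, v0, theta, g):
    """MSE between an observed trajectory and the one implied by the
    predicted physical quantities, as in Eq. 2."""
    xs, ys = observed_xy[:, 0], observed_xy[:, 1]
    y_hat = projectile_y(xs, h, v0, theta, g)
    return np.mean((ys - y_hat) ** 2)

# Toy usage: a noiseless trajectory generated with known parameters is
# reconstructed exactly, so the loss is numerically ~0.
xs = np.linspace(0.0, 10.0, 50)
traj = np.stack([xs, projectile_y(xs, h=1.0, v0=12.0, theta=0.9, g=9.8)], axis=1)
print(reconstruction_loss(traj, h=1.0, v0=12.0, theta=0.9, g=9.8))
```

Because the decoder is an analytic formula, the quantities predicted by the MLPs (g, V_0, θ) stay physically meaningful rather than becoming arbitrary latent codes.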
3.2.2 RELATENET . The goal of RelateNet, shown in Figure 2 (Right), is to learn the relationship between in-game variables and the physical forces predicted by InferNet. This network tries to predict the forces extracted by InferNet directly from the given in-game values, such as the relative position of a bird. The in-game variables can be any variables with a continuous or discrete domain that can be chosen by the playing agent in order to make a shot. For example, the in-game variables can be the initial forces of the shot or the object's relative position to the shooting mechanism. RelateNet consists of two internal MLPs, where the first MLP predicts initial velocities and the second MLP predicts initial angles. In order to update the weights of both internal MLPs, we calculate the MSE between the values predicted by InferNet and the values predicted by RelateNet. More details on the architecture of PhysANet can be found in Appendix A.
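The paper defers the architectural details to Appendix A; the PyTorch fragment below is only a schematic sketch of the described training signal, in which RelateNet's two MLPs are regressed onto InferNet's outputs. Layer sizes, the optimizer, and the variable names are assumptions.

```python
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=32):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim))

# RelateNet: two small MLPs mapping in-game variables (here the relative
# position (xr0, yr0)) to the quantities InferNet inferred from trajectories.
velocity_net = mlp(2, 1)   # predicts initial velocity V0
angle_net = mlp(2, 1)      # predicts shot angle theta

opt = torch.optim.Adam(list(velocity_net.parameters()) +
                       list(angle_net.parameters()), lr=1e-3)
mse = nn.MSELoss()

def relatenet_step(rel_pos, v0_from_infernet, theta_from_infernet):
    """One update: match RelateNet's outputs to InferNet's inferred values."""
    opt.zero_grad()
    loss = mse(velocity_net(rel_pos), v0_from_infernet) + \
           mse(angle_net(rel_pos), theta_from_infernet)
    loss.backward()
    opt.step()
    return loss.item()
```

At inference time only RelateNet and the projectile motion equation are needed: the in-game variables are mapped to (V_0, θ), and Equation 5 produces the trajectory.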
The problem addressed by this paper is the estimation of trajectories of moving objects thrown or launched by a user, in particular in computer games like Angry Birds or basketball simulation games. A deep neural network is trained on a small dataset of ~300 trajectories and estimates the underlying physical properties of the trajectory (initial position, direction and strength of the initial force, etc.). A new variant of deep network is introduced, based on an encoder-decoder model in which the decoder is a fully handcrafted module using known physics (projectile motion).
SP:bce4d9d2825454f2b345f4650abac10efee7c2fb
Learning Underlying Physical Properties From Observations For Trajectory Prediction
This paper proposes an architecture that encodes a known physics motion equation for the trajectory of a moving object. The modeled equation has 3 variables and the network works in a latent space, rather than taking raw images. It uses an auxiliary network (named InferNet) to train the final one used at inference time (named RelateNet). The former aims to reconstruct the input sequence of positions representing the trajectory, and has 3 intermediate latent variables that correspond to the 3 variables of the modeled equation, while as the decoder it uses the known modeled equation itself. The latter is a mapping from the relative position of the object to 2 latent variables of InferNet, and is trained with an MSE loss. At inference, RelateNet takes as input the relative position of the object, predicts 2 variables of the equation and finally uses the motion equation to calculate the trajectory.
SP:bce4d9d2825454f2b345f4650abac10efee7c2fb
High-Frequency guided Curriculum Learning for Class-specific Object Boundary Detection
1 INTRODUCTION . Class-specific object boundary extraction from images is a fundamental problem in Computer Vision (CV). It has been used as a basic module for several applications including object localization [Yu et al. (2018a); Wang et al. (2015)], 3D reconstruction [Lee et al. (2009); Malik & Maydan (1989); Zhu et al. (2018)], image generation [Isola et al. (2017); Wang et al. (2018)], multi-modal image alignment [Kuse & Shen (2016)], and organ feature extraction from medical images [Maninis et al. (2016)]. Inspired by the sweeping success of deep neural networks in several CV fields, recent works [Yu et al. (2017); Acuna et al. (2019); Yu et al. (2018b)] designed ConvNet-based architectures for object boundary detection and demonstrated impressive results. However, we notice that the results from these methods, as shown in Figure 1b, still suffer from significant false alarms and misdetections, even in regions without any clutter. We hypothesize that although boundary detection is simple at some pixels that are rooted in identifiable high-frequency locations, other pixels pose a higher level of difficulty, for instance, region pixels with an appearance similar to boundary pixels, or boundary pixels with insignificant edge strength (e.g., camouflaged regions). Therefore, the training process needs to account for different levels of learning complexity around different pixels to achieve better performance. In the classical CV literature, the different levels of pixel complexity are naturally addressed by decomposing the task into a set of sequential sub-tasks of increasing complexity [Leu & Chen (1988); Lamdan & Wolfson (1988); Kriegman & Ponce (1990); Ramesh (1995)]. Most often, the boundary detection problem has been decomposed into three sub-tasks: (a) low-level edge detection such as Canny [Canny (1986)]; (b) semantic tagging/labeling of edge pixels [Prasad et al. (2006)]; and (c) edge linking/refining [Jevtić et al. (2009)] in ambiguous regions. These approaches first solve the problem for simpler pixels (with sufficient edge strength) and then reason about harder pixels in the regions with ambiguous or missing evidence. However, with the advent of ConvNets, this classical perspective on the boundary extraction problem has been overlooked. New end-to-end trainable ConvNets have pushed the state of the art significantly compared to classical methods. Nevertheless, we believe that classical multi-stage problem-solving schemes can help to improve the performance of ConvNet models. A parallel machine learning field, Curriculum Learning [Bengio et al. (2009)], also advocates this kind of multi-stage training scheme, which trains the network with a smoother objective first and later with the target task objective. Such schemes have been shown to improve model generalization and training convergence in several applications. Motivated by these factors, this work devises a curriculum-learning inspired two-stage training scheme for object boundary extraction that first trains the networks on the simpler tasks (sub-tasks a and b) and then, in the second stage, trains them to solve the more complex sub-task (c). Our experimental results on a simulated dataset and a real-world aerial image dataset demonstrate that this systematic training indeed results in better performance.
As mentioned already, the task of predicting object boundaries is mostly rooted in identifiable higher-frequency image locations. We believe that explicitly augmenting a ConvNet with high-frequency content will improve the convergence of the training process. Hence, this work designs a simple fully convolutional network (FCN) that also takes in high-frequency bands of the image along with the RGB input. In this work, we choose to use high-frequency coefficients from a wavelet decomposition [Stephane (1999)] of the input image and augment the conv features at different levels with them. These coefficients encode local features which are vital in representing sharp boundaries. Our empirical results show that this explicit high-frequency augmentation helps the model converge faster, especially in the first stage of curriculum learning. In summary, our contributions in this work are the following:
• A novel two-stage training scheme (inspired by curriculum learning) to learn class-specific object boundaries.
• A novel ConvNet augmented by high-frequency wavelets.
• A thorough ablation study on a simulated MNIST digit-contour dataset.
• Experiments with a challenging aerial image dataset for road contour extraction.
• A real-world application of road contour extraction for aligning geo-parcels to aerial imagery.
Related Work : The problem of extracting object boundaries has been extensively studied in both the classical and the modern CV literature. Most of the classical methods start with low-level edge detectors and use local/global features to attach semantics to the detected pixels. They later use object-level understanding or mid-level Gestalt cues to reason about missing edge links and occluded boundaries. The work by Prasad et al. (2006) used local texture patterns and a linear SVM classifier to classify edge pixels given by the Canny edge detector. The work by Mairal et al. (2008) reasoned on low-level edges, but learned dictionaries on multiscale RGB patches with sparse coding and used the reconstruction error curves as features for a linear logistic classifier. The work of Hariharan et al. (2011) proposed a detector that combines low-level edge detections and semantic outputs from pre-trained object detectors to localize class-specific contours. Several recent works [Yang et al. (2016); Yu et al. (2017)] adopted fully convolutional networks (FCN) for the task of semantic boundary extraction. The work by Yang et al. (2016) proposed an FCN-based encoder-decoder architecture for object contour detection. The work of Bertasius et al. (2015) first used a VGG-based network to locate binary semantic edges and then used deep semantic segmentation networks to obtain category labels. More recently, the work of Yu et al. (2017) proposed an FCN architecture, CASENet, with a novel shared concatenation scheme that fuses low-level features with higher conv layer features. The shared concatenation replicates lower layer features to separately concatenate each channel of the class activation map in the final layer. Then, a K-grouped 1 × 1 conv is performed on the fused features to generate a semantic boundary map with K channels, in which the k-th channel represents the edge map for the k-th category. A few recent works [Acuna et al. (2019); Yu et al. (2018b)] integrated an alignment module into the network to account for noise in the contour labels.
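The paper does not give code for the wavelet branch; the sketch below, using the PyWavelets library, shows one plausible way to produce the multi-level high-frequency bands that get concatenated with conv features of matching resolution. The wavelet family, number of levels, and fusion points are assumptions.

```python
import numpy as np
import pywt

def highfreq_pyramid(gray_img, levels=3, wavelet="haar"):
    """Multi-level high-frequency bands (LH, HL, HH) of a grayscale image.
    Each level is stacked into a 3-channel map, intended to be concatenated
    channel-wise with the conv features of matching spatial resolution."""
    bands = []
    current = gray_img.astype(np.float32)
    for _ in range(levels):
        cA, (cH, cV, cD) = pywt.dwt2(current, wavelet)
        bands.append(np.stack([cH, cV, cD], axis=0))  # (3, ~H/2, ~W/2) per level
        current = cA  # recurse on the low-frequency approximation
    return bands

# Usage: bands[l] has roughly half the spatial size of bands[l-1].
img = np.random.rand(256, 256)
for level, b in enumerate(highfreq_pyramid(img), start=1):
    print(level, b.shape)
```

Only the detail coefficients are kept at each level; the low-frequency approximation is discarded after being used to compute the next, coarser level, which matches the stated goal of feeding the network sharp, boundary-relevant content.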
Most of these networks are trained in an end-to-end manner, using cross-entropy-based objective functions. These objectives treat all boundary pixels equally, irrespective of the complexity around them. Unlike existing methods, we use explicit high-frequency augmentation of the ConvNet and train it in a curriculum learning scheme that accounts for different levels of pixel complexity with two stages. 2 THE PROPOSED CURRICULUM LEARNING SCHEME . Curriculum Learning & Multi-Stage Training : Curriculum learning (CL) or multi-stage training schemes are motivated by the observation that humans and animals seem to learn better when trained with a curriculum-like strategy: start with easier tasks and gradually increase the difficulty level of the tasks. The pioneering work of Bengio et al. (2009) introduced CL concepts to machine learning. That work proposed a set of CL schemes for the applications of shape recognition and language modeling, and demonstrated better performance and faster convergence. It also established curriculum learning as a continuation method. Continuation methods [Allgower & Georg (2012)] start with a smoothed objective function and gradually move to less smoothed functions. In other terms, these methods consider a class of objective functions that can be expressed as

C_λ(θ) = (1 − λ) C_o(θ) + λ C_t(θ)   (1)

where C_o(θ) is the smoother or simpler objective and C_t(θ) is the target objective we wish to optimize. There are several ways to choose C_o; it can either be the same loss function as C_t but solving the task on simpler examples, or be a proxy task simpler than the target task. In general, λ is a (binary) variable that takes values zero or one. It is set to zero initially and later increased to one. The epoch at which it changes from zero to one is referred to as the switch epoch. Curriculum Learning in CV : CL-inspired training schemes have recently been gaining attention in CV. Some popular CV architectures have leveraged CL-based schemes to improve model generalization and training stability. In FlowNet 2.0 [Ilg et al. (2017)] for optical flow prediction, simpler training data are fed into the network first, followed by the more difficult dataset. The object detection framework of Zhang et al. (2016) first trains simpler networks (proposal and refiner nets) and then trains the final output net at the end. Here we propose a two-stage CL scheme for object boundary detection. The proposed CL-based training scheme for learning object boundaries : ConvNets for class-specific object boundary detection are in general trained with multi-label cross-entropy-based objectives [Yu et al. (2017)]:

C_t(θ) = − Σ_k Σ_p ( β Y_k(p) log Ŷ_k(p; θ) + (1 − β)(1 − Y_k(p)) log(1 − Ŷ_k(p; θ)) )   (2)

where θ denotes the weights of the network, and p and k represent the indices of pixels and class labels respectively. Ŷ and Y represent the prediction and groundtruth label maps. β is the percentage of non-edge pixels in the image, included to account for the skewness of sample numbers [Yu et al. (2017)]. This objective treats all the contour pixels equally and does not account for the complexity of the task around them. Here, we consider this as the target objective function, C_t, that we wish to optimize. We start the training, however, with a simpler task C_o. We believe the pixels with strong edge strength are easy to localize and semantically identify.
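For reference, a minimal NumPy sketch of the class-balanced objective in Equation 2 for a single image; treating β as the per-image fraction of non-edge pixels and summing over classes and pixels is our reading of the equation, and the function name is illustrative.

```python
import numpy as np

def balanced_multilabel_bce(y_true, y_pred, eps=1e-7):
    """Class-balanced cross-entropy of Eq. 2 for one K-channel boundary map.
    y_true, y_pred: arrays of shape (K, H, W) with values in [0, 1].
    beta is the fraction of non-edge pixels, computed from the groundtruth."""
    beta = 1.0 - y_true.mean()  # percentage of non-edge pixels in the image
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    loss = -(beta * y_true * np.log(y_pred) +
             (1.0 - beta) * (1.0 - y_true) * np.log(1.0 - y_pred))
    return loss.sum()
```

Since edge pixels are rare, β is close to one, so the positive (edge) term is up-weighted relative to the abundant background term.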
Hence, we propose to solve the task around those pixels in the first stage. We take the element-wise product of the Canny edge map of the input and the dilated groundtruth label to prepare the supervisory signal Z for this stage:

Z = E_I ⊙ Y_D   (3)

where E_I is the Canny edge map of image I and Y_D is the dilated groundtruth map. This is shown in Figure 2e. The objective function for this stage becomes:

C_o(θ) = − Σ_k Σ_p ( β Z_k(p) log Ẑ_k(p; θ) + (1 − β)(1 − Z_k(p)) log(1 − Ẑ_k(p; θ)) )   (4)

Since we use a dilated version of the GT in preparing Z, it also contains some non-object contour pixels. However, these can be refined in the second stage of CL, when training with Y (Eq. 2). Hence, the CL objective function in Eq. 1 uses Eq. 4 and Eq. 2 as the initial and target objective functions respectively. In the CL training scheme, we set the switch epoch to T/2, where T is the total number of training epochs.
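A short OpenCV sketch of how the stage-1 target Z of Equation 3 and the binary λ schedule could be implemented; the Canny thresholds and the dilation kernel size are not specified in the paper and are assumptions here.

```python
import cv2
import numpy as np

def stage1_target(image_bgr, gt_mask, dilate_px=5, canny_lo=100, canny_hi=200):
    """Eq. 3: Z = E_I (element-wise product) Y_D.
    gt_mask: binary groundtruth boundary map for one class, shape (H, W)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = (cv2.Canny(gray, canny_lo, canny_hi) > 0).astype(np.float32)
    kernel = np.ones((dilate_px, dilate_px), np.uint8)
    dilated = cv2.dilate(gt_mask.astype(np.uint8), kernel, iterations=1)
    return edges * dilated.astype(np.float32)

def target_for_epoch(epoch, total_epochs, Z, Y):
    """Binary lambda schedule of Eq. 1: train against Z (Eq. 4) before the
    switch epoch T/2, and against the full groundtruth Y (Eq. 2) afterwards."""
    return Z if epoch < total_epochs // 2 else Y
```

Dilating the groundtruth before intersecting with the Canny map tolerates small misalignments between annotated contours and the actual image gradients, at the cost of admitting a few non-object edge pixels that the second stage is expected to clean up.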
The authors suggest two improvements to boundary detection models: (1) a curriculum learning approach, and (2) augmenting CNNs with features derived from a wavelet transform. For (1), they train half of the epochs with a target boundary that is the intersection between a Canny edge filter and the dilated groundtruth. The second half of the epochs uses the normal groundtruth. For (2), they compute multiscale wavelet transforms and combine them with each scale of CNN features. They find on a toy MNIST example that the wavelet transform does not impact results very much and that curriculum learning seems to provide some gains. On the Aerial Road Contours dataset, they find an improvement of ~15% mAP over the prior baseline (CASENet).
SP:f6af733aa873bf6ee0f69ec868a2d7a493a0dd0b
High-Frequency guided Curriculum Learning for Class-specific Object Boundary Detection
The main idea of the paper is adding a curriculum-learning-based extension to CASENet, a boundary detection method from 2017. In the first phase, the loss emphasizes easier examples with high gradient in the image, and in the second phase, the method is trained on all boundary pixels. This change seems to improve edge detection performance on a toy MNIST dataset and an aerial dataset.
SP:f6af733aa873bf6ee0f69ec868a2d7a493a0dd0b
MoET: Interpretable and Verifiable Reinforcement Learning via Mixture of Expert Trees
1 INTRODUCTION . Deep Reinforcement Learning (DRL) has achieved many recent breakthroughs in challenging domains such as Go (Silver et al., 2016). While using neural networks to encode state representations allows DRL agents to learn policies for tasks with large state spaces, the learned policies are not interpretable, which hinders their use in safety-critical applications. Some recent works leverage programs and decision trees as representations for interpreting the learned agent policies. PIRL (Verma et al., 2018) uses program synthesis to generate a program in a Domain-Specific Language (DSL) that is close to the DRL agent policy. The design of a DSL with the desired operators is a tedious manual effort, and the enumerative search used for synthesis is difficult to scale to larger programs. In contrast, Viper (Bastani et al., 2018) learns a Decision Tree (DT) policy by mimicking the DRL agent, which not only provides a general representation for different policies, but also allows for verification of these policies using integer linear programming solvers. Viper uses the DAGGER (Ross et al., 2011) imitation learning approach to collect state-action pairs for training the student DT policy given the teacher DRL policy. It modifies the DAGGER algorithm to use the Q-function of the teacher policy to prioritize states of critical importance during learning. However, learning a single DT for the complete policy leads to some key shortcomings: i) a less faithful representation of the original agent policy, measured by the number of mispredictions, ii) lower overall performance (reward), and iii) larger DT sizes that make the trees harder to interpret. In this paper, we present MOËT (Mixture of Expert Trees), a technique based on Mixture of Experts (MOE) (Jacobs et al., 1991; Jordan and Xu, 1995; Yuksel et al., 2012), and reformulate its learning procedure to support DT experts. MOE models can typically use any expert as long as it is a differentiable function of the model parameters, which unfortunately does not hold for DTs. Similarly to MOE training with the Expectation-Maximization (EM) algorithm, we first observe that MOËT can be trained by alternately optimizing the weighted log likelihood for the experts (independently from one another) and optimizing the gating function with respect to the obtained experts. Then, we propose a procedure for DT learning in the specific context of MOE. To the best of our knowledge, we are the first to combine standard non-differentiable DT experts, which are interpretable, with the MOE model. Existing combinations, which rely on differentiable trees or tree-like models such as soft decision trees (Irsoy et al., 2012) and hierarchical mixtures of experts (Zhao et al., 2019), are not interpretable. We adapt the imitation learning technique of Viper to use MOËT policies instead of DTs. MOËT creates multiple local DTs that specialize on different regions of the input space, allowing for simpler (shallower) DTs that more accurately mimic the DRL agent policy within their regions, and combines the local trees into a global policy using a gating function. We use a simple and interpretable linear model with a softmax function as the gating function, which returns a distribution over DT experts for each point in the input space. While standard MOE uses this distribution to average the predictions of the DTs, we also consider selecting just the single most likely expert tree to improve interpretability.
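To illustrate the data collection step that both Viper and the adapted MOËT training rely on, here is a schematic Python sketch of one Q-weighted DAGGER-style iteration. This is not Viper's actual implementation; the classic Gym reset()/step() interface and the env, teacher_q, and student callables are assumptions made only for the example.

```python
import numpy as np

def viper_iteration(env, teacher_q, student, n_steps=1000):
    """One DAGGER-style iteration in the spirit of Viper: roll out the current
    student policy, label visited states with the teacher's greedy action, and
    weight each state by how much the teacher's Q-values say the choice matters.
    `teacher_q(s)` returns a vector of Q-values; `student(s)` returns an action."""
    states, actions, weights = [], [], []
    s = env.reset()
    for _ in range(n_steps):
        q = np.asarray(teacher_q(s))
        states.append(s)
        actions.append(int(np.argmax(q)))         # teacher label
        weights.append(float(q.max() - q.min()))  # criticality of the state
        s, _, done, _ = env.step(student(s))      # follow the student policy
        if done:
            s = env.reset()
    return np.array(states), np.array(actions), np.array(weights)
```

The returned weights can be used either to resample the aggregated dataset or as sample weights when fitting the student (a single DT in Viper, a mixture of DT experts in MOËT).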
While the decision boundaries of Viper DT policies must be axis-perpendicular, the softmax gating function supports boundaries given by hyperplanes of arbitrary orientation, allowing MOËT to more faithfully represent the original policy. We evaluate our technique on four different environments: CartPole, Pong, Acrobot, and Mountaincar. We show that MOËT achieves significantly better rewards and lower misprediction rates with shallower trees. We also visualize the Viper and MOËT policies for Mountaincar, demonstrating the differences in their learning capabilities. Finally, we demonstrate how a MOËT policy can be translated into an SMT formula for verifying properties of the CartPole game using the Z3 theorem prover (De Moura and Bjørner, 2008), under assumptions similar to those made in Viper. In summary, this paper makes the following key contributions: 1) We propose MOËT, a technique based on MOE to learn a mixture of expert decision trees, and present a learning algorithm to train MOËT models. 2) We use MOËT models with a softmax gating function for interpreting DRL policies and adapt the imitation learning approach used in Viper to learn MOËT models. 3) We evaluate MOËT on different environments and show that it leads to smaller, more faithful, and more performant representations of DRL agent policies compared to Viper, while preserving verifiability. 2 RELATED WORK . Interpretable Machine Learning : In numerous contexts, it is important to understand and interpret the decision making process of a machine learning model. However, interpretability does not have a unique, widely accepted definition. According to Lipton (Lipton, 2016), there are several properties that might be meant by this word; we adopt the one Lipton names transparency, which is further decomposed into simulability, decomposability, and algorithmic transparency. A model is simulable if a person can, in reasonable time, compute the outputs from given inputs and in that way simulate the model's inner workings. That holds for small linear models and small decision trees (Lipton, 2016). A model is decomposable if each part of the model admits an intuitive explanation, which is again the case for simple linear models and decision trees (Lipton, 2016). Algorithmic transparency is related to our understanding of the workings of the training algorithm. For instance, in the case of linear models, the shape of the error surface and the properties of its unique minimum, towards which the algorithm converges, are well understood (Lipton, 2016). MOËT models focus on transparency (as we discuss at the end of Section 5). Explainable Machine Learning : There has been a lot of recent interest in explaining the decisions of black-box models (Guidotti et al., 2018a; Doshi-Velez and Kim, 2017). For image classification, activation maximization techniques can be used to sample representative input patterns (Erhan et al., 2009; Olah et al., 2017). TCAV (Kim et al., 2017) uses human-friendly high-level concepts to associate their importance to the decision. Some recent works also generate contrastive robust explanations to help users understand a classifier decision based on a family of neighboring inputs (Zhang et al., 2018; Dhurandhar et al., 2018). LORE (Guidotti et al., 2018b) explains the behavior of a black-box model around an input of interest by sampling the black-box model in the neighborhood of the input and training a local DT over the sampled points.
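The paper only states that a MOËT policy can be compiled into an SMT formula and checked with Z3; the z3py fragment below is a minimal illustration of that mechanics on a made-up two-dimensional policy with two depth-0 experts. The gate coefficients, the state box, and the checked property are all invented for the example and are not the paper's CartPole encoding.

```python
from z3 import Reals, Solver, If, And, sat

x, y = Reals("x y")

# Hypothetical MoET policy: a linear softmax gate over two constant experts
# reduces, for prediction, to comparing the two gate scores, i.e. the policy
# outputs action 1 iff 0.8*x + 1.2*y >= 1 and action 0 otherwise.
action = If(0.8 * x + 1.2 * y >= 1, 1, 0)

# Property: within the box 0 <= x, y <= 0.4 the policy never outputs action 1.
# If the solver finds no counterexample, the property is verified.
s = Solver()
s.add(And(x >= 0, x <= 0.4, y >= 0, y <= 0.4), action == 1)
print("counterexample exists" if s.check() == sat else "property verified")
```

Because both the gate and the depth-0 experts are linear or constant, the whole policy becomes a small arithmetic formula over the state variables, which is what makes SMT-based verification tractable.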
Our model presents an approach that combines local trees into a global policy. Tree-Structured Models : Irsoy et al. (2012) propose a novel decision tree architecture with soft decisions at the internal nodes, where both children are chosen with probabilities given by a sigmoid gating function. Similarly, a binary tree-structured hierarchical routing mixture of experts (HRME) model, which has classifiers as non-leaf node experts and simple regression models as leaf node experts, was proposed in (Zhao et al., 2019). Both models are unfortunately not interpretable. Knowledge Distillation and Model Compression : We rely on ideas already explored in the fields of model compression (Bucilu et al., 2006) and knowledge distillation (Hinton et al., 2015). The idea is to use a complex, well-performing model to facilitate the training of a simpler model which might have some other desirable properties (e.g., interpretability). Such practices have been applied to approximate a decision tree ensemble by a single tree (Breiman and Shang, 1996), but this is different from our case, since we approximate a neural network. In a similar fashion a neural network can be used to train another neural network (Furlanello et al., 2018), but neural networks are hard to interpret and even harder to formally verify, so this is also different from our case. Such practices have also been applied in the field of reinforcement learning in knowledge and policy distillation (Rusu et al., 2016; Koul et al., 2019; Zhang et al., 2019), which are similar in spirit to our work, and in imitation learning (Bastani et al., 2018; Ross et al., 2011; Abbeel and Ng, 2004; Schaal, 1999), which provides a foundation for our work. 3 MOTIVATING EXAMPLE : GRIDWORLD . We now present a simple motivating example to showcase some of the key differences between the Viper and MOËT approaches. Consider the N × N Gridworld problem shown in Figure 1a (for N = 5). The agent is placed at a random position in the grid (except the walls, denoted by filled rectangles) and should find its way out. To move through the grid the agent can choose to go up, left, right or down at each time step. If it hits a wall it stays in the same position (state). The state is represented using two integer values (x, y coordinates) which range from (0, 0) at the bottom left to (N − 1, N − 1) at the top right. The grid can be escaped through either the left doors (left of the first column) or the right doors (right of the last column). A negative reward of −0.1 is received for each agent action (the negative reward encourages the agent to find the exit as fast as possible). An episode finishes as soon as an exit is reached or after 100 steps, whichever comes first. The optimal policy (π∗) for this problem consists of taking the left (resp. right) action for each state below (resp. above) the diagonal. We used π∗ as a teacher and used the imitation learning approach of Viper to train an interpretable DT policy that mimics π∗. The resulting DT policy is shown in Figure 1b. The DT partitions the state space (grid) using lines perpendicular to the x and y axes, until it separates all states above the diagonal from those below. This results in a DT of depth 3 with 9 nodes. On the other hand, the policy learned by MOËT is shown in Figure 1c. The MOËT model with 2 experts learns to partition the space using the line defined by the linear function 1.06x + 1.11y = 4 (roughly the diagonal of the grid).
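As a rough illustration of the alternating training described above, the following scikit-learn sketch fits shallow DT experts and a multinomial logistic (softmax) gate in a simplified hard-assignment fashion. This is not the paper's weighted-log-likelihood procedure; the hard E-step, the initialization, and all hyperparameters are assumptions made for the sketch.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

def fit_moet_sketch(X, y, n_experts=2, depth=1, iters=5, seed=0):
    """Hard-EM style sketch: alternate between (1) fitting shallow DT experts
    on the points currently routed to them and (2) refitting a softmax gate
    to predict the best-performing expert for each point."""
    rng = np.random.default_rng(seed)
    assign = rng.integers(n_experts, size=len(X))  # random initial routing
    experts = [DecisionTreeClassifier(max_depth=depth) for _ in range(n_experts)]
    gate = LogisticRegression(max_iter=1000)
    for _ in range(iters):
        for e, tree in enumerate(experts):         # update experts
            mask = assign == e
            if mask.sum() > 0:
                tree.fit(X[mask], y[mask])
        correct = np.stack([tree.predict(X) == y for tree in experts], axis=1)
        assign = np.argmax(correct, axis=1)        # route each point to its best expert
        if len(np.unique(assign)) > 1:             # keep previous gate if routing collapsed
            gate.fit(X, assign)                    # update the softmax gate
    return gate, experts

def moet_predict(gate, experts, X):
    """Interpretable variant: route each point to its single most likely expert."""
    chosen = gate.predict(X)
    return np.array([experts[e].predict(x[None, :])[0] for e, x in zip(chosen, X)])
```

The hard routing makes each expert a small, readable DT over its own region, while the linear gate remains a single interpretable formula deciding which expert applies where.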
Points on the two sides of the line correspond to the two experts, which are themselves DTs of depth 0, always choosing to go left (below) or right (above). We notice that the DT policy needs a much larger depth to represent π∗, while MOËT can represent it with only one decision step. Furthermore, with increasing N (the size of the grid), the complexity of the DT grows, while the complexity of MOËT stays the same; we empirically confirm this for N = 5, ..., 10. For N = 5, 6, 7, 8, 9, 10 the DT depths are 3, 4, 4, 4, 4, 5 and the numbers of nodes are 9, 11, 13, 15, 17, 21 respectively. In contrast, MOËT models with the same complexity and structure as the one shown in Figure 1c are learned for all values of N (the models differ only in the learned partitioning linear function).
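The learned gridworld policy from Figure 1c is small enough to write out directly; the sketch below uses the gate coefficients quoted in the text (the action labels for the two constant experts follow the description above).

```python
def moet_gridworld_action(x, y):
    """Two-expert MoET policy from the motivating example: the linear gate
    1.06x + 1.11y = 4 splits the grid roughly along its diagonal, and each
    depth-0 expert is a constant action."""
    return "right" if 1.06 * x + 1.11 * y >= 4 else "left"

# A handful of states from a 5x5 grid; the single linear test captures the
# diagonal split that the axis-aligned DT needs several thresholds to mimic.
for state in [(0, 0), (1, 3), (4, 1), (4, 4)]:
    print(state, moet_gridworld_action(*state))
```

Reading the policy is a matter of evaluating one linear inequality per state, which is exactly the kind of simulability argued for in the related work discussion.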
The paper proposes an extension to the Viper [1] method for interpreting and verifying deep RL policies by learning a mixture of decision trees to mimic the originally learned policy. The proposed approach can imitate the deep policy better than Viper while preserving verifiability. Empirically, the proposed method demonstrates improvements in terms of cumulative reward and misprediction rate over Viper in four benchmark tasks.
SP:91fbd1f4774de6619bd92d37e1a1b1e7f2ed96f3
MoET: Interpretable and Verifiable Reinforcement Learning via Mixture of Expert Trees
1 INTRODUCTION . Deep Reinforcement Learning ( DRL ) has achieved many recent breakthroughs in challenging domains such as Go ( Silver et al. , 2016 ) . While using neural networks for encoding state representations allow DRL agents to learn policies for tasks with large state spaces , the learned policies are not interpretable , which hinders their use in safety-critical applications . Some recent works leverage programs and decision trees as representations for interpreting the learned agent policies . PIRL ( Verma et al. , 2018 ) uses program synthesis to generate a program in a Domain-Specific Language ( DSL ) that is close to the DRL agent policy . The design of the DSL with desired operators is a tedious manual effort and the enumerative search for synthesis is difficult to scale for larger programs . In contrast , Viper ( Bastani et al. , 2018 ) learns a Decision Tree ( DT ) policy by mimicking the DRL agent , which not only allows for a general representation for different policies , but also allows for verification of these policies using integer linear programming solvers . Viper uses the DAGGER ( Ross et al. , 2011 ) imitation learning approach to collect state action pairs for training the student DT policy given the teacher DRL policy . It modifies the DAGGER algorithm to use the Q-function of teacher policy to prioritize states of critical importance during learning . However , learning a single DT for the complete policy leads to some key shortcomings such as i ) less faithful representation of original agent policy measured by the number of mispredictions , ii ) lower overall performance ( reward ) , and iii ) larger DT sizes that make them harder to interpret . In this paper , we present MOËT ( Mixture of Expert Trees ) , a technique based on Mixture of Experts ( MOE ) ( Jacobs et al. , 1991 ; Jordan and Xu , 1995 ; Yuksel et al. , 2012 ) , and reformulate its learning procedure to support DT experts . MOE models can typically use any expert as long as it is a differentiable function of model parameters , which unfortunately does not hold for DTs . Similar to MOE training with Expectation-Maximization ( EM ) algorithm , we first observe that MOËT can be trained by interchangeably optimizing the weighted log likelihood for experts ( independently from one another ) and optimizing the gating function with respect to the obtained experts . Then , we propose a procedure for DT learning in the specific context of MOE . To the best of our knowledge we are first to combine standard non-differentiable DT experts , which are interpretable , with MOE model . Existing combinations which rely on differentiable tree or treelike models , such as soft decision trees ( Irsoy et al. , 2012 ) and hierarchical mixture of experts ( Zhao et al. , 2019 ) are not interpretable . We adapt the imitation learning technique of Viper to use MOËT policies instead of DTs . MOËT creates multiple local DTs that specialize on different regions of the input space , allowing for simpler ( shallower ) DTs that more accurately mimic the DRL agent policy within their regions , and combines the local trees into a global policy using a gating function . We use a simple and interpretable linear model with softmax function as the gating function , which returns a distribution over DT experts for each point in the input space . While standard MOE uses this distribution to average predictions of DTs , we also consider selecting just one most likely expert tree to improve interpretability . 
While the decision boundaries of Viper DT policies must be axis-perpendicular , the softmax gating function supports boundaries given by hyperplanes of arbitrary orientations , allowing MOËT to more faithfully represent the original policy . We evaluate our technique on four different environments : CartPole , Pong , Acrobot , and Mountaincar . We show that MOËT achieves significantly better rewards and lower misprediction rates with shallower trees . We also visualize the Viper and MOËT policies for Mountaincar , demonstrating the differences in their learning capabilities . Finally , we demonstrate how a MOËT policy can be translated into an SMT formula for verifying properties of the CartPole game using the Z3 theorem prover ( De Moura and Bjørner , 2008 ) , under assumptions similar to those made in Viper . In summary , this paper makes the following key contributions : 1 ) We propose MOËT , a technique based on MOE to learn a mixture of expert decision trees , and present a learning algorithm to train MOËT models . 2 ) We use MOËT models with a softmax gating function for interpreting DRL policies and adapt the imitation learning approach used in Viper to learn MOËT models . 3 ) We evaluate MOËT on different environments and show that it leads to smaller , more faithful , and more performant representations of DRL agent policies compared to Viper , while preserving verifiability . 2 RELATED WORK . Interpretable Machine Learning : In numerous contexts , it is important to understand and interpret the decision-making process of a machine learning model . However , interpretability does not have a unique , widely accepted definition . According to Lipton ( Lipton , 2016 ) , there are several properties that might be meant by this word ; we adopt the one Lipton names transparency , which is further decomposed into simulability , decomposability , and algorithmic transparency . A model is simulable if a person can compute its outputs from given inputs in reasonable time , and in that way simulate the model ’ s inner workings . That holds for small linear models and small decision trees ( Lipton , 2016 ) . A model is decomposable if each part of the model admits an intuitive explanation , which is again the case for simple linear models and decision trees ( Lipton , 2016 ) . Algorithmic transparency is related to our understanding of the workings of the training algorithm . For instance , in the case of linear models , the shape of the error surface and the properties of its unique minimum , towards which the algorithm converges , are well understood ( Lipton , 2016 ) . MOËT models focus on transparency ( as we discuss at the end of Section 5 ) . Explainable Machine Learning : There has been a lot of recent interest in explaining the decisions of black-box models ( Guidotti et al. , 2018a ; Doshi-Velez and Kim , 2017 ) . For image classification , activation maximization techniques can be used to sample representative input patterns ( Erhan et al. , 2009 ; Olah et al. , 2017 ) . TCAV ( Kim et al. , 2017 ) uses human-friendly high-level concepts to associate their importance with a decision . Some recent works also generate contrastive robust explanations to help users understand a classifier decision based on a family of neighboring inputs ( Zhang et al. , 2018 ; Dhurandhar et al. , 2018 ) . LORE ( Guidotti et al. , 2018b ) explains the behavior of a black-box model around an input of interest by sampling the black-box model in the neighborhood of the input and training a local DT over the sampled points .
Our model presents an approach that combines local trees into a global policy . Tree-Structured Models : Irsoy et al . ( 2012 ) propose a novel decision tree architecture with soft decisions at the internal nodes , where both children are chosen with probabilities given by a sigmoid gating function . Similarly , a binary tree-structured hierarchical routing mixture of experts ( HRME ) model , which has classifiers as non-leaf node experts and simple regression models as leaf node experts , was proposed by Zhao et al . ( 2019 ) . Both models are unfortunately not interpretable . Knowledge Distillation and Model Compression : We rely on ideas already explored in the fields of model compression ( Bucilu et al. , 2006 ) and knowledge distillation ( Hinton et al. , 2015 ) . The idea is to use a complex , well-performing model to facilitate the training of a simpler model that might have other desirable properties ( e.g. , interpretability ) . Such practices have been applied to approximate a decision tree ensemble by a single tree ( Breiman and Shang , 1996 ) , but this is different from our case , since we approximate a neural network . In a similar fashion , a neural network can be used to train another neural network ( Furlanello et al. , 2018 ) , but neural networks are hard to interpret and even harder to formally verify , so this is also different from our case . Such practices have also been applied in the field of reinforcement learning , in knowledge and policy distillation ( Rusu et al. , 2016 ; Koul et al. , 2019 ; Zhang et al. , 2019 ) , which is similar in spirit to our work , and in imitation learning ( Bastani et al. , 2018 ; Ross et al. , 2011 ; Abbeel and Ng , 2004 ; Schaal , 1999 ) , which provides a foundation for our work . 3 MOTIVATING EXAMPLE : GRIDWORLD . We now present a simple motivating example to showcase some of the key differences between the Viper and MOËT approaches . Consider the N × N Gridworld problem shown in Figure 1a ( for N = 5 ) . The agent is placed at a random position in a grid ( except the walls , denoted by filled rectangles ) and should find its way out . To move through the grid , the agent can choose to go up , left , right , or down at each time step . If it hits a wall , it stays in the same position ( state ) . The state is represented using two integer values ( x , y coordinates ) which range from ( 0 , 0 ) at the bottom left to ( N − 1 , N − 1 ) at the top right . The grid can be escaped through either the left doors ( left of the first column ) or the right doors ( right of the last column ) . A negative reward of −0.1 is received for each agent action ( the negative reward encourages the agent to find the exit as fast as possible ) . An episode finishes as soon as an exit is reached or after 100 steps , whichever comes first . The optimal policy ( π∗ ) for this problem consists of taking the left ( resp . right ) action for each state below ( resp . above ) the diagonal . We used π∗ as a teacher and applied the imitation learning approach of Viper to train an interpretable DT policy that mimics π∗ . The resulting DT policy is shown in Figure 1b . The DT partitions the state space ( grid ) using lines perpendicular to the x and y axes , until it separates all states above the diagonal from those below . This results in a DT of depth 3 with 9 nodes . On the other hand , the policy learned by MOËT is shown in Figure 1c . The MOËT model with 2 experts learns to partition the space using the line defined by the linear function 1.06x + 1.11y = 4 ( roughly the diagonal of the grid ) .
Points on the different sides of this line correspond to two different experts , which are themselves DTs of depth 0 , always choosing to go left ( below the line ) or right ( above it ) . We notice that the DT policy needs a much larger depth to represent π∗ , while MOËT can represent it with only one decision step . Furthermore , with increasing N ( the size of the grid ) , the complexity of the DT grows , while the MOËT complexity stays the same ; we empirically confirm this for N ranging from 5 to 10 . For N = 5 , 6 , 7 , 8 , 9 , 10 , the DT depths are 3 , 4 , 4 , 4 , 4 , 5 and the numbers of nodes are 9 , 11 , 13 , 15 , 17 , 21 , respectively . In contrast , MOËT models of the same complexity and structure as the one shown in Figure 1c are learned for all values of N ( the models differ only in the learned partitioning linear function ) .
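To illustrate the kind of verification mentioned above, here is a small sketch, purely our own illustration rather than the paper's CartPole encoding, that expresses this two-expert gridworld policy as an SMT formula and checks a simple property with Z3: every point of the 5 × 5 arena lying strictly above the line x + y = 4 is assigned the right action. The property, the real-valued relaxation of the integer grid, and the hard-gating assumption are all choices made for this example.

```python
# Illustrative sketch only: encoding the 2-expert gridworld MOET policy above
# (gate 1.06x + 1.11y = 4, depth-0 experts "left"/"right") as an SMT formula
# and checking a simple property with Z3. The property is our own choice.
from z3 import Reals, Solver, And, Not, Implies, If, unsat

x, y = Reals('x y')
LEFT, RIGHT = 0, 1

# Hard-gated MOET policy: the expert below the hyperplane always says LEFT,
# the expert above it always says RIGHT.
action = If(1.06 * x + 1.11 * y < 4, LEFT, RIGHT)

# Property: everywhere strictly above the diagonal x + y = 4, within the
# 5x5 arena, the policy chooses RIGHT.
in_grid = And(x >= 0, x <= 4, y >= 0, y <= 4)
prop = Implies(And(in_grid, x + y > 4), action == RIGHT)

s = Solver()
s.add(Not(prop))              # search for a counterexample
print("verified" if s.check() == unsat else s.model())
```

Under these assumptions the solver should report the negated property unsatisfiable, i.e., find no counterexample within the stated region.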
The paper proposes a method (MOET) to distill a reinforcement learning policy represented by a deep neural network into an ensemble of decision trees. The main objective of this procedure is to obtain an "interpretable" and verifiable policy while maintaining the performance of the original policy. The authors build on the previously published algorithm Viper (Bastani et al., 2018), which distills deep policies into a single decision tree using the DAGGER procedure, i.e., alternating imitation learning of an expert policy with additional data sampling from the newly learned policy. In the Viper algorithm, decision trees are chosen because their structured nature allows one to formally prove properties of the policy they represent when the environment dynamics are known and expressible in closed form.
SP:91fbd1f4774de6619bd92d37e1a1b1e7f2ed96f3
Implementing Inductive bias for different navigation tasks through diverse RNN attractors
1 INTRODUCTION . Spatial navigation is an important task that requires a correct internal representation of the world , and thus its mechanistic underpinnings have attracted the attention of scientists for a long time ( O ’ Keefe & Nadel , 1978 ) . A standard tool for navigation is a Euclidean map , and this naturally leads to the hypothesis that our internal model is such a map . Artificial navigation also relies on SLAM ( simultaneous localization and mapping ) , which is based on maps ( Kanitscheider & Fiete , 2017a ) . On the other hand , both from an ecological view and from a pure machine learning perspective , navigation is first and foremost about reward acquisition while exploiting the statistical regularities of the environment . Different tasks and environments lead to different statistical regularities . Thus , it is unclear which internal representations are optimal for reward acquisition . We take a functional approach to this question by training recurrent neural networks for navigation tasks with various types of statistical regularities . Because we are interested in internal representations , we opt for a two-phase learning scheme instead of end-to-end learning . Inspired by the biological phenomena of evolution and development , we first pre-train the networks to emphasize several aspects of their internal representation . Following pre-training , we use Q-learning to modify the network ’ s readout weights for specific tasks while maintaining its internal connectivity . We evaluate the performance of different networks on a battery of simple navigation tasks with different statistical regularities and show that the internal representations of the networks manifest in differential performance according to the nature of the tasks . The link between task performance and network structure is understood by probing the networks ’ dynamics , exposing a low-dimensional manifold of slow dynamics in phase space , which is clustered into three major categories : continuous attractors , discrete attractors , and unstructured chaotic dynamics . The different network attractors encode different priors , or inductive biases , for specific tasks , corresponding to metric or topological invariances in the tasks . By combining networks with different inductive biases , we could build a modular system with improved multiple-task learning . Overall , we offer a paradigm that shows how the dynamics of recurrent networks implement different priors for environments . Pre-training , which is agnostic to specific tasks , can lead to dramatic differences in the network ’ s dynamical landscape and affect reinforcement learning of different navigation tasks . 2 RELATED WORK . Several recent papers used a functional approach for navigation ( Cueva & Wei , 2018 ; Kanitscheider & Fiete , 2017b ; Banino et al. , 2018 ) . These works , however , consider the position as the desired output , assuming that it is the relevant representation for navigation . These works successfully show that a recurrent network agent can solve the neural SLAM problem and that this can result in units of the network exhibiting response profiles similar to those found in neurophysiological experiments ( place and grid cells ) . In our case , the desired behavior was to obtain the reward , and not to report the current position . Another recent approach did define reward acquisition as the goal , applying deep RL directly to navigation problems in an end-to-end manner ( Mirowski et al. , 2016 ) .
The navigation tasks relied on rich visual cues that allowed evaluation in a state-of-the-art setting . This richness , however , can hinder the greater mechanistic insights that can be obtained from the systematic analysis of toy problems ; accordingly , the focus of these works is on performance . Our work is also related to recent works in neuroscience that highlight the richness of neural representations for navigation , beyond Euclidean spatial maps ( Hardcastle et al. , 2017 ; Wirth et al. , 2017 ) . Our pre-training is similar to unsupervised pre-training followed by supervised training ( Erhan et al. , 2010 ) . In the past few years , end-to-end learning has been the more dominant approach ( Graves et al. , 2014 ; Mnih et al. , 2013 ) . We highlight the ability of a pre-training framework to manipulate network dynamics and the resulting internal representations , and we study their effect as an inductive bias . 3 RESULTS . 3.1 TASK DEFINITION . Navigation can be described as taking advantage of spatial regularities of the environment to achieve goals . This view naturally leads to considering a cognitive map as an internal model of the environment , but leaves open the question of precisely which type of map is to be expected . To answer this question , we systematically study both a space of networks ( emphasizing different internal models ) and a space of tasks ( emphasizing different spatial regularities ) . To allow a systematic approach , we design a toy navigation problem inspired by the Morris water maze ( Morris , 1981 ) . An agent is placed at a random position in a discretized square arena ( size 15 ) and has to locate the reward location ( yellow square , Fig . 1A ) , while only receiving input ( empty/wall/reward ) from the 8 neighboring positions . The reward is placed in one of two possible locations in the room according to an external context signal , and the agent can move in one of the four cardinal directions . At every trial , the agent is placed at a random position in the arena , and the network ’ s internal state is randomly initialized as well . The platform location is constant across trials for each context ( see Methods ) . The agent is controlled by an RNN that receives the proximal sensory input as well as feedback of its own chosen action ( Fig . 1B ) . The network ’ s output is a value for each of the 4 possible actions , the maximum of which is chosen to update the agent ’ s position . We use a vanilla RNN ( see Appendix for LSTM units ) described by :

$$h_{t+1} = \left( 1 - \tfrac{1}{\tau} \right) h_t + \tfrac{1}{\tau} \tanh\left( W h_t + W_i f ( z_t ) + W_a A_t + W_c C_t \right) \qquad ( 1 )$$

$$Q ( h_t ) = W_o h_t + b_o \qquad ( 2 )$$

where $h_t$ is the activity of the neurons in the network ( 512 neurons by default ) , $W$ is the recurrent connectivity matrix , and $\tau$ is the update timescale . The sensory input $f ( z_t )$ is fed through the input connection matrix $W_i$ , and the action feedback is fed through $W_a$ . The context signal $C_t$ is fed through the matrix $W_c$ . The network outputs a Q function , which is computed by a linear transformation of its hidden state ( see the code sketch below ) . Beyond the basic setting ( Fig . 1A ) , we design several variants of the task to emphasize different statistical regularities ( Fig . 1C ) . In all cases , the agent begins from a random position and has to reach the context-dependent reward location in the shortest time using only proximal input . The "Hole" variant introduces a random placement of obstacles ( different numbers and positions ) in each trial . The "Bar" variant introduces a horizontal or vertical bar at a random position in each trial .
The various "Scale" tasks stretch the arena in the horizontal or vertical direction while maintaining the relative position of the rewards . The "Implicit context" task is similar to the basic setting , but the external context input is eliminated ; instead , the color of the walls indicates the reward position . For all these tasks , the agent needs to find a strategy that tackles the uncertain elements to achieve the goals . Despite the simple setting of the game , the tasks are not trivial due to identical visual inputs in most of the locations and various uncertain elements adding to the task difficulty . 3.2 TRAINING FRAMEWORK . We aim to understand the interaction between internal representation and the statistical regularities of the various tasks . In principle , this could be accomplished by end-to-end reinforcement learning of many tasks , using various hyper-parameters to allow different solutions to the same task . We opted for a different approach , both due to computational efficiency ( see Appendix III ) and due to biological motivations . A biological agent acquires navigation ability during evolution and development , which shapes its elementary cognitive abilities such as spatial or object memory . This shaping provides a scaffold upon which the animal can adapt and learn quickly to perform diverse tasks during life . Similarly , we divide learning into two phases : a pre-training phase that is task-agnostic and a Q-learning phase that is task-specific ( Fig . 2A ) . During pre-training , we modify the network ’ s internal and input connectivity , while Q-learning only modifies the output . Pre-training is implemented in an environment similar to the basic task , with an arena size chosen randomly between 10 and 20 . The agent ’ s actions are externally determined as a correlated random walk , instead of being internally generated by the agent . Inspired by neurophysiological findings , we emphasize two different aspects of internal representation : landmark memory ( the identity of the last encountered wall ) and position encoding ( O ’ Keefe & Nadel , 1978 ) . We thus pre-train the internal connectivity to generate an ensemble of networks with various hyperparameters that control the relative importance of these two aspects , as well as which parts of the connectivity ( $W$ , $W_a$ , $W_i$ ) are modified . We term the networks emphasizing these two aspects MemNet and PosNet , respectively , and call the naive random network RandNet ( Fig . 2A ) . This is done by stochastic gradient descent on the following objective function :

$$S = -\alpha \sum_{t=1}^{n} \hat{P} ( z_t ) \log P ( z_t ) - \beta \sum_{t=1}^{n} \hat{I}_t \log P ( I_t ) - \gamma \sum_{t=1}^{n} \hat{A}_t \log P ( A_t ) \qquad ( 3 )$$

with $z = ( x , y )$ for position , $I$ for landmark memory ( identity of the last wall encountered ) , and $A$ for action . The term on the action serves as a regularizer . The three probability distributions are estimated from the hidden states of the RNN and are given by :

$$P ( I \mid h_t ) = \frac{ \exp ( W_m h_t + b_m ) }{ \sum_m \exp ( W_m h_t + b_m ) } \qquad ( 4 )$$

$$P ( A \mid h_{t-1} , h_t ) = \frac{ \exp ( W_a [ h_{t-1} , h_t ] + b_a ) }{ \sum_a \exp ( W_a [ h_{t-1} , h_t ] + b_a ) } \qquad ( 5 )$$

$$P ( z \mid h_t ) = \frac{ \exp ( - ( z - ( W_p h_t + b_p ) )^2 / \sigma^2 ) }{ \sum_z \exp ( - ( z - ( W_p h_t + b_p ) )^2 / \sigma^2 ) } \qquad ( 6 )$$

where $W_m$ , $W_p$ , $W_a$ are readout matrices from the hidden states and $[ h_{t-1} , h_t ]$ denotes the concatenation of the last and current hidden states . Tables 1 , 2 , and 3 in the Appendix show the hyperparameter choices for all networks . The ratio between $\alpha$ and $\beta$ controls the tradeoff between position and memory . The exact values of the hyperparameters were found through trial and error .
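To make these pieces concrete, the following is a minimal sketch, ours rather than the authors' code, of the recurrent update in Eqs. (1)-(2) and of the pre-training readouts and objective in Eqs. (3)-(6). All shapes, input encodings (8 neighbors with 3 channels, one-hot actions and contexts), and the loss weights are illustrative assumptions; in the paper, the objective is minimized by stochastic gradient descent with respect to the recurrent and input connectivity, which this sketch does not implement.

```python
# Minimal sketch (not the authors' code) of the RNN agent of Eqs. (1)-(2) and the
# pre-training readouts/objective of Eqs. (3)-(6). Dimensions, input encodings and
# loss weights are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N_H, N_SENS, N_ACT, N_CTX, TAU = 512, 24, 4, 2, 2.0
W  = rng.normal(0, 1 / np.sqrt(N_H), (N_H, N_H))    # recurrent connectivity (modified by pre-training)
Wi = rng.normal(0, 0.1, (N_H, N_SENS))              # sensory input weights
Wa = rng.normal(0, 0.1, (N_H, N_ACT))               # action feedback weights
Wc = rng.normal(0, 0.1, (N_H, N_CTX))               # context weights
Wo, bo = rng.normal(0, 0.1, (N_ACT, N_H)), np.zeros(N_ACT)   # Q readout (trained later)

def rnn_step(h, sens, prev_action, context):
    """Eq. (1): leaky tanh update of the hidden state."""
    drive = W @ h + Wi @ sens + Wa @ prev_action + Wc @ context
    return (1 - 1 / TAU) * h + (1 / TAU) * np.tanh(drive)

def q_values(h):
    """Eq. (2): linear readout of Q-values from the hidden state."""
    return Wo @ h + bo

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def pretrain_loss(H, H_prev, targets, p, alpha=1.0, beta=1.0, gamma=0.1):
    """Eq. (3): weighted cross-entropies of the readouts of Eqs. (4)-(6).
    H, H_prev: (T, N_H) hidden states; targets: one-hot arrays; p: readout parameters."""
    p_I = softmax(H @ p["Wm"].T + p["bm"])                                   # Eq. (4): last-wall identity
    p_A = softmax(np.concatenate([H_prev, H], axis=1) @ p["Wa"].T + p["ba"]) # Eq. (5): action
    pred = H @ p["Wp"].T + p["bp"]                                           # predicted (x, y) position
    d2 = ((p["grid"][None] - pred[:, None]) ** 2).sum(-1)                    # squared distance to each cell
    p_z = softmax(-d2 / p["sigma"] ** 2)                                     # Eq. (6): position readout
    ce = lambda t, q: -(t * np.log(q + 1e-12)).sum()
    return (alpha * ce(targets["z"], p_z)
            + beta * ce(targets["I"], p_I)
            + gamma * ce(targets["A"], p_A))
```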
Having obtained this ensemble of networks , we use a Q-learning algorithm with a TD ( λ ) update for the network ’ s outputs , which are Q values . Since only the readout matrix $W_o$ is trained , we can use a recursive least-squares method , which allows a fast update of the weights for different tasks ( Sussillo & Abbott , 2009 ) . This choice leads to much faster convergence than stochastic gradient descent . The update rule used is :

$$W_o ( n+1 ) = W_o ( n ) - e ( n ) \, P ( n ) \, H ( n )^{T} \qquad ( 7 )$$

$$P ( n+1 ) = \left( C ( n+1 ) + \alpha I \right)^{-1} \qquad ( 8 )$$

$$C ( n+1 ) = \lambda C ( n ) + H ( n )^{T} H ( n ) \qquad ( 9 )$$

$$e ( n ) = W_o H ( n ) - Y ( n ) \qquad ( 10 )$$

where $H$ is a matrix of hidden states over 120 time steps , $\alpha I$ is a regularizer , and $\lambda$ controls the forgetting rate of past data . We then analyze the test performance of all networks on all tasks ( Figure 2B and Table 3 in the appendix ) . Figures 2B and 2C show that there are correlations between different tasks and between different networks . We quantify this correlation structure by performing principal component analysis of the performance matrix . We find that the first two PCs in task space explain 79 % of the variance . The first component corresponds to the difficulty ( average performance ) of each task , while the coefficients of the second component are informative regarding the nature of the tasks ( Fig . 2B , right ) : Bar ( -0.49 ) , Hole ( -0.25 ) , Basic ( -0.21 ) , Implicit context ( -0.12 ) , ScaleX ( 0.04 ) , ScaleY ( 0.31 ) , Scale ( 0.74 ) . We speculate that these numbers characterize the importance of two different invariances inherent in the tasks . Negative coefficients correspond to metric invariance . For example , when overcoming dynamic obstacles , the position remains invariant . This type of task was fundamental in establishing metric cognitive maps in neuroscience ( O ’ Keefe & Nadel , 1978 ) . Positive coefficients correspond to topological invariance , defined as relations between landmarks that are unaffected by metric information . Observing the behavior of the networks on the extreme tasks of this axis indeed confirms this speculation . Fig . 3A shows that the successful agent overcomes the scaling task by finding a set of actions that captures the relations between landmarks and reward , thus generalizing to larger arenas . Fig . 3B shows that the successful agent in the bar task uses a very different strategy . An agent that captures the metric invariance can adjust its trajectories and reach the reward each time the obstacle is changed . This ability is often related to the ability to use shortcuts ( O ’ Keefe & Nadel , 1978 ) . The other tasks interpolate between the two extremes , due to the presence of both elements in the tasks . For instance , the implicit context task requires the agent to combine landmark memory ( the color of the wall ) with position to locate the reward . We thus define metric and topological scores as weighted averages of task performance using the negative and positive coefficients , respectively . Fig . 3C shows the various networks measured by the two scores . We see that random networks ( blue ) can achieve reasonable performance with some hyperparameter choices , but they are balanced with respect to the metric and topological scores . On the other hand , PosNet networks are pushed to the metric side and MemNet networks to the topological side . This result indicates that the inductive bias achieved via task-agnostic pre-training is manifested in the performance of the networks on various navigation tasks .
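As a concrete illustration, here is a small numpy sketch, ours and not the authors' code, of the recursive least-squares readout update of Eqs. (7)-(10). We orient $H$ as (time steps × hidden units) and the targets $Y$ as (time steps × actions), and order the matrix products so that the dimensions are consistent; these orientations, and the use of random placeholder targets instead of actual TD(λ) targets, are assumptions made for the example.

```python
# Sketch (our own, not the authors' code) of the recursive least-squares readout
# update of Eqs. (7)-(10). H is (T, n_hidden), Y is (T, n_actions); the product
# order in the weight update is rearranged for dimensional consistency.
import numpy as np

class RLSReadout:
    def __init__(self, n_hidden, n_actions, alpha=1e-2, lam=0.99, seed=0):
        rng = np.random.default_rng(seed)
        self.Wo = rng.normal(0, 0.01, (n_actions, n_hidden))
        self.C = np.zeros((n_hidden, n_hidden))
        self.alpha, self.lam = alpha, lam

    def update(self, H, Y):
        """One batch update from hidden states H (T, n_hidden) and targets Y (T, n_actions)."""
        self.C = self.lam * self.C + H.T @ H                              # Eq. (9)
        P = np.linalg.inv(self.C + self.alpha * np.eye(H.shape[1]))      # Eq. (8)
        E = H @ self.Wo.T - Y                                             # Eq. (10), batch residuals
        self.Wo -= E.T @ H @ P                                            # Eq. (7), regularized LS step
        return self.Wo

    def q_values(self, h):
        # the readout plays the role of Eq. (2): Q-values from a hidden state
        return self.Wo @ h

# toy usage with random placeholder data (120 time steps, 512 hidden units, 4 actions)
rls = RLSReadout(n_hidden=512, n_actions=4)
H = np.random.default_rng(1).normal(size=(120, 512))
Y = np.random.default_rng(2).normal(size=(120, 4))
rls.update(H, Y)
```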
This paper studies the internal representations of recurrent neural networks trained on navigation tasks. By varying the weights of different terms in an objective used for supervised pre-training, RNNs are created that rely either on path integration or on landmark memory for navigation. The paper shows that the pre-training method leads to differential performance when the readout layer of these networks is trained using Q-learning on different variants of a navigation task. The main result of the paper is obtained by finding the slow points of the dynamics of the trained RNNs. The paper finds that the RNNs pre-trained to use path integration contain 2D continuous attractors, allowing position memory. On the other hand, the RNNs pre-trained for landmark memory contain discrete attractors corresponding to the different landmarks.
SP:ddc70109c59cf0db7fe020300ab762a5ac57bd93
Implementing Inductive bias for different navigation tasks through diverse RNN attractors
This paper explores how pre-training a recurrent network on different navigational objectives confers different benefits when it comes to solving downstream tasks. First, networks are pre-trained on an objective that emphasizes either position (path integration) or landmark memory (the identity of the last wall encountered). This pre-training generates recurrent networks of two classes, called PosNets and MemNets (in addition to networks with no pre-training, called RandNets). Surprisingly, the authors found that pre-training confers different benefits that manifest as differential performance of PosNets and MemNets across the task suite. Some evidence is provided that this difference has to do with the requirements of each task. Moreover, the authors show how the different pre-training manifests as different dynamical structures (measured using fixed-point analyses) present in the networks after pre-training. In particular, the PosNets contained a 2D plane attractor (used to read out position), whereas the MemNets contained clusters of fixed points (corresponding to the previously encountered landmark).
SP:ddc70109c59cf0db7fe020300ab762a5ac57bd93